FIX:(#627) Rebooted #1's showing incorrectly on pull-list/matching, FIX:(#630) identify cbr/cbz files only in filechecker, FIX:(#632) NZBGet Download directory option added, IMP: Manual Annual add now supported via comicid directly (at bottom of comic details screen), IMP: Publisher now accounted for when searching, IMP: $month/$monthname added for file format values, IMP: Store/Publication dates now used to help better matching, FIX:(#620) dognzb/nzb.su should be working (again), IMP: Very generic API implemented (taken from headphones), lots of other fixes...

This commit is contained in:
evilhero 2014-02-26 14:48:50 -05:00
parent bbf9c0f0e6
commit 606114c743
21 changed files with 1591 additions and 400 deletions

API_REFERENCE (new file)

@ -0,0 +1,61 @@
Because Mylar is based upon Headphones, the API that rembo10 created for Headphones works reasonably well
for Mylar, with some obvious changes. With that said, this was all taken from Headphones
as a starting base and will be extended as required.
The API is very new, likely needs a lot of cleanup, and has not yet been tested properly.
There are no error codes yet, but they will be added soon.
General structure:
http://localhost:8090 + HTTP_ROOT + /api?apikey=$apikey&cmd=$command
Data returned in json format.
If executing a command like "delComic" or "addComic" you'll get back an "OK"; otherwise, you'll get the data you requested.
$commands&parameters[&optionalparameters]:
getIndex (fetch data from index page. Returns: ArtistName, ArtistSortName, ArtistID, Status, DateAdded,
[LatestAlbum, ReleaseDate, AlbumID], HaveTracks, TotalTracks,
IncludeExtras, LastUpdated, [ArtworkURL, ThumbURL]: a remote url to the artwork/thumbnail. To get the cached image path, see getArtistArt command.
ThumbURL is added/updated when an artist is added/updated. If you're using the database method to get the artwork,
it's more reliable to use the ThumbURL than the ArtworkURL)
getComic&id=$comicid (fetch artist data. returns the artist object (see above) and album info: Status, AlbumASIN, DateAdded, AlbumTitle, ArtistName, ReleaseDate, AlbumID, ArtistID, Type, ArtworkURL: hosted image path. For cached image, see getAlbumArt command)
getIssue&id=$comicid (fetch data from album page. Returns the album object, a description object and a tracks object. Tracks contain: AlbumASIN, AlbumTitle, TrackID, Format, TrackDuration (ms), ArtistName, TrackTitle, AlbumID, ArtistID, Location, TrackNumber, CleanName (stripped of punctuation /styling), BitRate)
getUpcoming (Returns: Status, AlbumASIN, DateAdded, AlbumTitle, ArtistName, ReleaseDate, AlbumID, ArtistID, Type)
getWanted (Returns: Status, AlbumASIN, DateAdded, AlbumTitle, ArtistName, ReleaseDate, AlbumID, ArtistID, Type)
getHistory (Returns: Status, DateAdded, Title, URL (nzb), FolderName, AlbumID, Size (bytes))
getLogs (not working yet)
findArtist&name=$artistname[&limit=$limit] (perform artist query on musicbrainz. Returns: url, score, name, uniquename (contains disambiguation info), id)
findAlbum&name=$albumname[&limit=$limit] (perform album query on musicbrainz. Returns: title, url (artist), id (artist), albumurl, albumid, score, uniquename (artist - with disambiguation))
addComic&id=$comicid (add a comic to the db by comicid)
addAlbum&id=$releaseid (add an album to the db by album release id)
delComic&id=$comicid (delete Comic from db by comicid)
pauseComic&id=$comicid (pause a comic in db)
resumeComic&id=$comicid (resume a comic in db)
refreshComic&id=$comicid (refresh info for comic in db)
queueIssue&id=$issueid (mark an issue as wanted and start the search)
unqueueIssue&id=$issueid (Unmark issue as wanted / i.e. mark as skipped)
forceSearch (force search for wanted issues - not launched in a separate thread so it may take a bit to complete)
forceProcess (force post process issues in download directory - also not launched in a separate thread)
getVersion (Returns some version information: git_path, install_type, current_version, installed_version, commits_behind)
checkGithub (updates the version information above and returns getVersion data)
shutdown (shut down mylar)
restart (restart mylar)
update (update mylar - you may want to check the install type in getVersion and not allow this if type==exe)
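All commands follow the general structure above. As a quick Python sketch of building a request URL -- the host, port, and API key below are placeholder values, substitute your own:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own host/port and the key
# generated on the Config page once the API is enabled.
BASE_URL = "http://localhost:8090"   # plus HTTP_ROOT if one is configured
API_KEY = "1234abcd"

def api_url(cmd, **params):
    """Build a Mylar API request URL for the given command."""
    query = {"apikey": API_KEY, "cmd": cmd}
    query.update(params)
    return BASE_URL + "/api?" + urlencode(query)

# e.g. fetch a comic by id; fetching this URL returns JSON,
# or the literal string "OK" for commands like addComic/delComic.
url = api_url("getComic", id="50307")
```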


@ -401,8 +401,12 @@
 </table>
 </form>
 </div>
 %if annuals:
 <h1>Annuals</h1>
+%for aninfo in annualinfo:
+${aninfo['annualComicName']}<a href="annualDelete?comicid=${comic['ComicID']}&ReleaseComicID=${aninfo['annualComicID']}"><img src="interfaces/default/images/x.png" height="10" width="10"/></a>
+%endfor
 <form action="markissues" method="get" id="markissues">
 <div id="markissue">Mark selected annuals as
@ -445,6 +449,7 @@
 <tbody>
 %for annual in annuals:
 <%
 if annual['Status'] == 'Skipped':
 grade = 'Z'
@ -483,17 +488,29 @@
 <a href="#" title="Mark issue as Wanted" onclick="doAjaxCall('queueissue?ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&ComicIssue=${annual['Issue_Number']}&ComicYear=${annual['IssueDate']}&mode=${amode}',$(this),'table')"><img src="interfaces/default/images/wanted_icon.png" height="25" width="25" /></a>
 <a href="#" title="Mark issue as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${annual['IssueID']}&ComicID=${annual['ComicID']}',$(this),'table')" data-success="'${annual['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
 <a href="#" title="Add to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" /></a>
-<a href="#" title="Retry" onclick="doAjaxCall('queueissue?ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&ComicIssue=${annual['Issue_Number']}&mode=${amode}', $(this),'table')" data-success="Retrying the same version of '${issue['ComicName']}'"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" /></a>
+<a href="#" title="Retry" onclick="doAjaxCall('queueissue?ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&ComicIssue=${annual['Issue_Number']}&mode=${amode}', $(this),'table')" data-success="Retrying the same version of '${annual['ComicName']}'"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" /></a>
 <a href="#" title="Archive" onclick="doAjaxCall('archiveissue?IssueID=${annual['IssueID']}',$(this),'table')"><img src="interfaces/default/images/archive_icon.png" height="25" width="25" title="Mark issue as Archived" /></a>
 </td>
 </tr>
 %endfor
 </tbody>
+</table>
 </form>
-</table>
 %endif
+<div style="position:relative; width:960px; height:10px; margin:10px auto;">
+<form action="manual_annual_add" method="GET">
+<input type="hidden" name="comicid" value=${comic['ComicID']}>
+<input type="hidden" name="comicname" value=${comic['ComicName'] |u}>
+<input type="hidden" name="comicyear" value=${comic['ComicYear']}>
+<div style="position:absolute; top:30px; right:0px;">
+<center><label><strong><a href="#" title="Enter the ComicID of the annual(s) you want to add to the series"/>Comic ID</a></strong></label>
+<input type="text" name="manual_comicid" size="10"><input type="image" src="interfaces/default/images/submit.png" height="25" width="25" /></center>
+</div>
+</form>
+</div>
 </%def>
 <%def name="headIncludes()">
@ -653,7 +670,17 @@
 { 'bVisible': false, 'aTargets': [1] },
 { 'sType': 'numeric', 'aTargets': [1] },
 { 'iDataSort': [1], 'aTargets': [2] }
-]
+],
+"oLanguage": {
+"sLengthMenu":"",
+"sEmptyTable": "No issue information available",
+"sInfo":"Showing _TOTAL_ issues",
+"sInfoEmpty":"Showing 0 to 0 of 0 issues",
+"sInfoFiltered":"",
+"sSearch": ""},
+"bStateSave": true,
+"bFilter": false,
+"iDisplayLength": 10
 });
 resetFilters("issue");


@ -137,6 +137,20 @@
 </fieldset>
 </td>
 <td>
+<fieldset>
+<legend>API</legend>
+<div class="row checkbox">
+<input id="api_enabled" type="checkbox" onclick="initConfigCheckbox($(this));" name="api_enabled" value="1" ${config['api_enabled']} /><label>Enable API</label>
+</div>
+<div class="apioptions">
+<div Class="row">
+<label>API key</label>
+<input type="text" name="api_key" id="api_key" value="${config['api_key']}" size="20">
+<input type="button" value="Generate" id="generate_api">
+<small>Current API key: <strong>${config['api_key']}</strong></small>
+</div>
+</div>
+</fieldset>
 <fieldset>
 <legend>Interval</legend>
 <div class="row">
@ -195,11 +209,11 @@
 <table class="configtable" summary="Download Settings">
 <tr>
 <td>
 <fieldset>
-<div class="row checkbox">
-<input id="use_sabnzbd" type="checkbox" onclick="initConfigCheckbox($(this))"; name="use_sabnzbd" value="1" ${config['use_sabnzbd']} /><label>SABnbzd</label>
-</div>
-<div class="config">
+<legend>Usenet</legend>
+<input type="radio" name="nzb_downloader" id="nzb_downloader_sabnzbd" value="0" ${config['nzb_downloader_sabnzbd']}>Sabnzbd <input type="radio" name="nzb_downloader" id="nzb_downloader_nzbget" value="1" ${config['nzb_downloader_nzbget']}> NZBget <input type="radio" name="nzb_downloader" id="nzb_downloader_blackhole" value="2" ${config['nzb_downloader_blackhole']}>Black Hole
+</fieldset>
+<fieldset id="sabnzbd_options">
 <div class="row">
 <label>SABnzbd Host:</label>
 <input type="text" name="sab_host" value="${config['sab_host']}" size="30">
@ -208,27 +222,27 @@
 <div class="row">
 <label>SABnzbd Username</label>
 <input type="text" name="sab_username" value="${config['sab_user']}" size="20">
 </div>
 <div class="row">
 <label>SABnzbd API:</label>
 <input type="text" name="sab_apikey" value="${config['sab_api']}" size="36">
 </div>
 <div class="row">
 <label>SABnzbd Password:</label>
 <input type="password" name="sab_password" value="${config['sab_pass']}" size="20">
 </div>
 <div class="row">
 <label>SABnzbd Download Directory</label>
 <input type="text" name="sab_directory" value="${config['sab_directory']}" size="36" />
 <small>Where your SAB downloads go... (optional)</small>
 </div>
 <div class="row">
 <label>SABnzbd Category:</label>
 <input type="text" name="sab_category" value="${config['sab_cat']}" size="20">
 </div>
 <div class="row">
 <label>SAB Priority</label>
 <select name="sab_priority">
 %for prio in ['Default', 'Low', 'Normal', 'High', 'Paused']:
@ -241,16 +255,13 @@
 <option value=${prio} ${outputselect}>${prio}</option>
 %endfor
 </select>
 </div>
 <div class="row">
 <a href="#" style="float:right" type="button" onclick="doAjaxCall('SABtest',$(this))" data-success="Sucessfully tested SABnzbd connection" data-error="Error testing SABnzbd connection"><span class="ui-icon ui-icon-extlink"></span>Test SABnzbd</a>
 </div>
-</div>
-<div class="row checkbox">
-<input id="use_nzbget" type="checkbox" onclick="initConfigCheckbox($(this))"; name="use_nzbget" value="1" ${config['use_nzbget']} /><label>NZBGet</label>
-</div>
-<div class="config">
+</fieldset>
+<fieldset id="nzbget_options">
 <div class="row">
 <label>NZBGet Host:</label>
 <input type="text" name="nzbget_host" value="${config['nzbget_host']}" size="30">
@ -268,6 +279,11 @@
 <label>NZBGet Password:</label>
 <input type="password" name="nzbget_password" value="${config['nzbget_pass']}" size="20">
 </div>
+<div class="row">
+<label>NZBGet Download Directory</label>
+<input type="text" name="nzbget_directory" value="${config['nzbget_directory']}" size="36" />
+<small>Where your NZBGet downloads go... (optional)</small>
+</div>
 <div class="row">
 <label>NZBGet Category:</label>
 <input type="text" name="nzbget_category" value="${config['nzbget_cat']}" size="20">
@ -287,32 +303,23 @@
 </select>
 </div>
-</div>
-</div>
-</fieldset>
-</td>
-<td>
-<legend>Usenet</legend>
-<fieldset>
-<div class="row checkbox">
-<input id="useblackhole" type="checkbox" onclick="initConfigCheckbox($(this));" name="blackhole" value=1 ${config['use_blackhole']} /><label>Use Black Hole</label>
-</div>
-<div class="config">
+</fieldset>
+<fieldset id="blackhole_options">
 <div class="row">
 <label>Black Hole Directory</label>
 <input type="text" name="blackhole_dir" value="${config['blackhole_dir']}" size="30">
 <small>Folder your Download program watches for NZBs</small>
 </div>
-</div>
-</fieldset>
-<fieldset>
+</fieldset>
+<fieldset id="general_nzb_options">
 <div class="checkbox row">
 <label>Usenet Retention (in days)</label>
-<input type="text" name="usenet_retention" value="${config['usenet_retention']}" size$
+<input type="text" name="usenet_retention" value="${config['usenet_retention']}" size="10">
 </div>
 </fieldset>
-</td>
-<td>
 <legend>Torrents</legend>
 <fieldset>
 <div class="row checkbox">
@ -354,10 +361,9 @@
 <input type="text" name="seedbox_watchdir" value="${config['seedbox_watchdir']}" size="30"><br/>
 <small>Folder path your torrent seedbox client watches</small>
 </div>
 </div>
 </div>
 </fieldset>
 </td>
 </tr>
@ -624,7 +630,7 @@
 <label> File Format</label>
 <input type="text" name="file_format" value="${config['file_format']}" size="43">
 <%
-file_options = "$Series = SeriesName\n$Year = SeriesYear\n$Annual = Annual (word)\n$Issue = IssueNumber\n$VolumeY = V{SeriesYear}\n$VolumeN = V{Volume#}"
+file_options = "$Series = SeriesName\n$Year = SeriesYear\n$Annual = Annual (word)\n$Issue = IssueNumber\n$VolumeY = V{SeriesYear}\n$VolumeN = V{Volume#}\n$month = publication month number\n$monthname = publication month name"
 %>
 <a href="#" title="${file_options}"><img src="interfaces/default/images/info32.png" height="16" alt="" /></a>
 <small>Use: $Series, $Year, $Issue<br />
@ -849,6 +855,26 @@
 {
+if ($("#api_enabled").is(":checked"))
+{
+$("#apioptions").show();
+}
+else
+{
+$("#apioptions").hide();
+}
+$("#api_enabled").click(function(){
+if ($("#api_enabled").is(":checked"))
+{
+$("#apioptions").slideDown();
+}
+else
+{
+$("#apioptions").slideUp();
+}
+});
 if ($("#prowl").is(":checked"))
 {
 $("#prowloptions").show();
@ -927,8 +953,38 @@
 {
 $("#boxcaroptions").slideUp();
 }
+if ($("#nzb_downloader_sabnzbd").is(":checked"))
+{
+$("#sabnzbd_options").show();
+$("#nzbget_options,#blackhole_options").hide();
+}
+if ($("#nzb_downloader_nzbget").is(":checked"))
+{
+$("#sabnzbd_options,#blackhole_options").hide();
+$("#nzbget_options").show();
+}
+if ($("#nzb_downloader_blackhole").is(":checked"))
+{
+$("#sabnzbd_options,#nzbget_options").hide();
+$("#blackhole_options").show();
+}
 });
+$('input[type=radio]').change(function(){
+if ($("#nzb_downloader_sabnzbd").is(":checked"))
+{
+$("#nzbget_options,#blackhole_options").fadeOut("fast", function() { $("#sabnzbd_options").fadeIn() });
+}
+if ($("#nzb_downloader_nzbget").is(":checked"))
+{
+$("#sabnzbd_options,#blackhole_options").fadeOut("fast", function() { $("#nzbget_options").fadeIn() });
+}
+if ($("#nzb_downloader_blackhole").is(":checked"))
+{
+$("#sabnzbd_options,#nzbget_options").fadeOut("fast", function() { $("#blackhole_options").fadeIn() });
+}
+});
 var deletedNewznabs = 0;
@ -937,6 +993,17 @@
 deletedNewznabs = deletedNewznabs + 1;
 });
+$('#api_key').click(function(){ $('#api_key').select() });
+$("#generate_api").click(function(){
+$.get('generateAPI',
+function(data){
+if (data.error != undefined) {
+alert(data.error);
+return;
+}
+$('#api_key').val(data);
+});
+});
 $("#add_newznab").click(function() {
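The Generate button above calls a generateAPI endpoint and drops the returned key into the text field. The server side of that endpoint is not shown in this commit's visible hunks; a plausible sketch (the actual Mylar implementation may differ) is a random 32-character hex key:

```python
import hashlib
import uuid

def generate_api_key():
    """Return a random 32-character hex string suitable as an API key."""
    # uuid4 gives 16 random bytes; md5 hexdigest normalizes to 32 hex chars.
    return hashlib.md5(uuid.uuid4().bytes).hexdigest()
```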
@ -958,9 +1025,7 @@
 });
 initActions();
 initConfigCheckbox("#launch_browser");
-initConfigCheckbox("#use_sabnzbd");
-initConfigCheckbox("#use_nzbget");
-initConfigCheckbox("#useblackhole");
+initConfigCheckbox("#enable_api");
 initConfigCheckbox("#usenewznab");
 initConfigCheckbox("#usenzbsu");
 initConfigCheckbox("#usedognzb");

Binary file not shown (new image, 18 KiB).


@ -32,7 +32,7 @@
 <td class="comicyear">${result['comicyear']}</a></td>
 <td class="issues">${result['issues']}</td>
-<td class="add" nowrap="nowrap"><a href="addComic?comicid=${result['comicid']}&comicname=${result['name'] |u}&comicyear=${result['comicyear']}&comicpublisher=${result['publisher']}&comicimage=${result['comicimage']}&comicissues=${result['issues']}&imported=${imported}&ogcname=${ogcname}"><span class="ui-icon ui-icon-plus"></span> Add this Comic</a></td>
+<td class="add" nowrap="nowrap"><a href="addComic?comicid=${result['comicid']}&comicname=${result['name'] |u}&comicyear=${result['comicyear']}&comicpublisher=${result['publisher'] |u}&comicimage=${result['comicimage']}&comicissues=${result['issues']}&imported=${imported}&ogcname=${ogcname}"><span class="ui-icon ui-icon-plus"></span> Add this Comic</a></td>
 </tr>
 %endfor
 %endif


@ -148,7 +148,8 @@ class PostProcessor(object):
 # if the SAB Directory option is enabled, let's use that folder name and append the jobname.
 if mylar.SAB_DIRECTORY is not None and mylar.SAB_DIRECTORY is not 'None' and len(mylar.SAB_DIRECTORY) > 4:
 self.nzb_folder = os.path.join(mylar.SAB_DIRECTORY, self.nzb_name).encode(mylar.SYS_ENCODING)
+logger.fdebug('SABnzbd Download folder option enabled. Directory set to : ' + self.nzb_folder)
 #lookup nzb_name in nzblog table to get issueid
 #query SAB to find out if Replace Spaces enabled / not as well as Replace Decimals
@ -172,6 +173,10 @@ class PostProcessor(object):
 if mylar.USE_NZBGET==1:
 logger.fdebug("Using NZBGET")
 logger.fdebug("NZB name as passed from NZBGet: " + self.nzb_name)
+# if the NZBGet Directory option is enabled, let's use that folder name and append the jobname.
+if mylar.NZBGET_DIRECTORY is not None and mylar.NZBGET_DIRECTORY is not 'None' and len(mylar.NZBGET_DIRECTORY) > 4:
+self.nzb_folder = os.path.join(mylar.NZBGET_DIRECTORY, self.nzb_name).encode(mylar.SYS_ENCODING)
+logger.fdebug('NZBGET Download folder option enabled. Directory set to : ' + self.nzb_folder)
 myDB = db.DBConnection()
 if self.nzb_name == 'Manual Run':
@ -195,12 +200,14 @@ class PostProcessor(object):
 watchvals = {"SeriesYear": cs['ComicYear'],
 "LatestDate": cs['LatestDate'],
 "ComicVersion": cs['ComicVersion'],
+"Publisher": cs['ComicPublisher'],
 "Total": cs['Total']}
-watchmatch = filechecker.listFiles(self.nzb_folder,cs['ComicName'],cs['AlternateSearch'], manual=watchvals)
+watchmatch = filechecker.listFiles(self.nzb_folder,cs['ComicName'],cs['ComicPublisher'],cs['AlternateSearch'], manual=watchvals)
 if watchmatch['comiccount'] == 0: # is None:
 nm+=1
 continue
 else:
+print 'i made it here...'
 fn = 0
 fccnt = int(watchmatch['comiccount'])
 if len(watchmatch) == 1: continue
@ -260,12 +267,32 @@ class PostProcessor(object):
 if issuechk is None:
 logger.info("No corresponding issue # found for " + str(cs['ComicID']))
 else:
-logger.info("Found matching issue # " + str(fcdigit) + " for ComicID: " + str(cs['ComicID']) + " / IssueID: " + str(issuechk['IssueID']))
-manual_list.append({"ComicLocation": tmpfc['ComicLocation'],
-"ComicID": cs['ComicID'],
-"IssueID": issuechk['IssueID'],
-"IssueNumber": issuechk['Issue_Number'],
-"ComicName": cs['ComicName']})
+datematch = "True"
+if len(watchmatch) > 1:
+#if the # of matches is more than 1, we need to make sure we get the right series
+#compare the ReleaseDate for the issue, to the found issue date in the filename.
+#if ReleaseDate doesn't exist, use IssueDate
+#if no issue date was found, then ignore.
+if issuechk['ReleaseDate'] is not None:
+if int(issuechk['ReleaseDate'][:4]) < int(tmpfc['ComicYear']):
+logger.fdebug(str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
+datematch = "False"
+else:
+if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
+logger.fdebug(str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
+datematch = "False"
+else:
+logger.info("Found matching issue # " + str(fcdigit) + " for ComicID: " + str(cs['ComicID']) + " / IssueID: " + str(issuechk['IssueID']))
+if datematch == "True":
+manual_list.append({"ComicLocation": tmpfc['ComicLocation'],
+"ComicID": cs['ComicID'],
+"IssueID": issuechk['IssueID'],
+"IssueNumber": issuechk['Issue_Number'],
+"ComicName": cs['ComicName']})
+else:
+logger.fdebug('Incorrect series - not populating..continuing post-processing')
 ccnt+=1
 #print manual_list
 wdc+=1
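When a filename matches more than one watched series, the new code disambiguates by date: prefer the store (release) date, fall back to the cover (issue) date, and reject the candidate if that year is earlier than the year parsed from the filename. Distilled into a hypothetical helper for illustration:

```python
def year_matches(release_date, issue_date, filename_year):
    """Return True if the issue's date is consistent with the year found
    in the filename. Dates are 'YYYY-MM-DD' strings or None; prefer the
    release (store) date, fall back to the issue (cover) date."""
    date = release_date if release_date is not None else issue_date
    if date is None:
        return False  # nothing to compare against -- treat as no match
    return int(date[:4]) >= int(filename_year)
```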
@ -563,6 +590,8 @@ class PostProcessor(object):
 issueyear = issuenzb['IssueDate'][:4]
 self._log("Issue Year: " + str(issueyear), logger.DEBUG)
 logger.fdebug("Issue Year : " + str(issueyear))
+month = issuenzb['IssueDate'][5:7].replace('-','').strip()
+month_name = helpers.fullmonth(month)
 # comicnzb= myDB.action("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
 publisher = comicnzb['ComicPublisher']
 self._log("Publisher: " + publisher, logger.DEBUG)
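helpers.fullmonth is not shown in this diff; assuming it simply maps a month number to its English name for the new $monthname file-format token, it behaves like this sketch:

```python
import calendar

def fullmonth(month_number):
    """Return the English month name for a month given as '01'..'12'."""
    return calendar.month_name[int(month_number)]

issue_date = '2014-02-26'
month = issue_date[5:7]        # '02', extracted as in the code above
month_name = fullmonth(month)  # the full month name
```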
@ -675,6 +704,8 @@ class PostProcessor(object):
 '$publisher': publisher.lower(),
 '$VolumeY': 'V' + str(seriesyear),
 '$VolumeN': comversion,
+'$monthname': monthname,
+'$month': month,
 '$Annual': 'Annual'
 }


@ -54,6 +54,7 @@ __INITIALIZED__ = False
 started = False
 DATA_DIR = None
+DBLOCK = False
 CONFIG_FILE = None
 CFG = None
@ -74,6 +75,8 @@ HTTP_HOST = None
 HTTP_USERNAME = None
 HTTP_PASSWORD = None
 HTTP_ROOT = None
+API_ENABLED = False
+API_KEY = None
 LAUNCH_BROWSER = False
 LOGVERBOSE = 1
 GIT_PATH = None
@ -111,8 +114,6 @@ PREFERRED_QUALITY = 0
 CORRECT_METADATA = False
 MOVE_FILES = False
 RENAME_FILES = False
-BLACKHOLE = False
-BLACKHOLE_DIR = None
 FOLDER_FORMAT = None
 FILE_FORMAT = None
 REPLACE_SPACES = False
@ -151,7 +152,9 @@ CVINFO = False
 LOG_LEVEL = None
 POST_PROCESSING = 1
-USE_SABNZBD = True
+NZB_DOWNLOADER = None #0 = sabnzbd, #1 = nzbget, #2 = blackhole
+USE_SABNZBD = False
 SAB_HOST = None
 SAB_USERNAME = None
 SAB_PASSWORD = None
@ -167,6 +170,10 @@ NZBGET_USERNAME = None
 NZBGET_PASSWORD = None
 NZBGET_PRIORITY = None
 NZBGET_CATEGORY = None
+NZBGET_DIRECTORY = None
+USE_BLACKHOLE = False
+BLACKHOLE_DIR = None
 PROVIDER_ORDER = None
@ -309,11 +316,11 @@ def initialize():
 with INIT_LOCK:
 global __INITIALIZED__, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, LOGVERBOSE, OLDCONFIG_VERSION, OS_DETECT, OS_LANG, OS_ENCODING, \
-HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, LAUNCH_BROWSER, GIT_PATH, \
+HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, API_ENABLED, API_KEY, LAUNCH_BROWSER, GIT_PATH, \
 CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, USER_AGENT, DESTINATION_DIR, \
 DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
-LIBRARYSCAN, LIBRARYSCAN_INTERVAL, DOWNLOAD_SCAN_INTERVAL, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_DIRECTORY, BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
-USE_NZBGET, NZBGET_HOST, NZBGET_PORT, NZBGET_USERNAME, NZBGET_PASSWORD, NZBGET_CATEGORY, NZBGET_PRIORITY, NZBSU, NZBSU_UID, NZBSU_APIKEY, DOGNZB, DOGNZB_UID, DOGNZB_APIKEY, NZBX,\
+LIBRARYSCAN, LIBRARYSCAN_INTERVAL, DOWNLOAD_SCAN_INTERVAL, NZB_DOWNLOADER, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_DIRECTORY, USE_BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
+USE_NZBGET, NZBGET_HOST, NZBGET_PORT, NZBGET_USERNAME, NZBGET_PASSWORD, NZBGET_CATEGORY, NZBGET_PRIORITY, NZBGET_DIRECTORY, NZBSU, NZBSU_UID, NZBSU_APIKEY, DOGNZB, DOGNZB_UID, DOGNZB_APIKEY, NZBX,\
 NEWZNAB, NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_UID, NEWZNAB_ENABLED, EXTRA_NEWZNABS, NEWZNAB_EXTRA, \
 RAW, RAW_PROVIDER, RAW_USERNAME, RAW_PASSWORD, RAW_GROUPS, EXPERIMENTAL, ALTEXPERIMENTAL, \
 ENABLE_META, CMTAGGER_PATH, INDIE_PUB, BIGGIE_PUB, IGNORE_HAVETOTAL, PROVIDER_ORDER, \
@ -350,6 +357,8 @@ def initialize():
 HTTP_USERNAME = check_setting_str(CFG, 'General', 'http_username', '')
 HTTP_PASSWORD = check_setting_str(CFG, 'General', 'http_password', '')
 HTTP_ROOT = check_setting_str(CFG, 'General', 'http_root', '/')
+API_ENABLED = bool(check_setting_int(CFG, 'General', 'api_enabled', 0))
+API_KEY = check_setting_str(CFG, 'General', 'api_key', '')
 LAUNCH_BROWSER = bool(check_setting_int(CFG, 'General', 'launch_browser', 1))
 LOGVERBOSE = bool(check_setting_int(CFG, 'General', 'logverbose', 1))
 GIT_PATH = check_setting_str(CFG, 'General', 'git_path', '')
@ -387,7 +396,7 @@ def initialize():
 RENAME_FILES = bool(check_setting_int(CFG, 'General', 'rename_files', 0))
 FOLDER_FORMAT = check_setting_str(CFG, 'General', 'folder_format', '$Series ($Year)')
 FILE_FORMAT = check_setting_str(CFG, 'General', 'file_format', '$Series $Issue ($Year)')
-BLACKHOLE = bool(check_setting_int(CFG, 'General', 'blackhole', 0))
+USE_BLACKHOLE = bool(check_setting_int(CFG, 'General', 'use_blackhole', 0))
 BLACKHOLE_DIR = check_setting_str(CFG, 'General', 'blackhole_dir', '')
 REPLACE_SPACES = bool(check_setting_int(CFG, 'General', 'replace_spaces', 0))
 REPLACE_CHAR = check_setting_str(CFG, 'General', 'replace_char', '')
@@ -488,7 +497,13 @@ def initialize():
ENABLE_CBT = bool(check_setting_int(CFG, 'Torrents', 'enable_cbt', 0))
CBT_PASSKEY = check_setting_str(CFG, 'Torrents', 'cbt_passkey', '')
#this needs to have its own category - for now General will do.
NZB_DOWNLOADER = check_setting_int(CFG, 'General', 'nzb_downloader', 0)
#legacy support of older config - reload into old values for consistency.
if NZB_DOWNLOADER == 0: USE_SABNZBD = True
elif NZB_DOWNLOADER == 1: USE_NZBGET = True
elif NZB_DOWNLOADER == 2: USE_BLACKHOLE = True
#USE_SABNZBD = bool(check_setting_int(CFG, 'SABnzbd', 'use_sabnzbd', 0))
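The legacy handling above collapses the three old `use_*` booleans into the single `nzb_downloader` value (0 = SABnzbd, 1 = NZBGet, 2 = blackhole) and then reloads the flags for backward compatibility. A minimal sketch of that mapping in isolation (function name is illustrative, not Mylar's):

```python
def downloader_flags(nzb_downloader):
    """Map the nzb_downloader config value (0/1/2) back onto the
    legacy use_* booleans, mirroring the block above."""
    use_sabnzbd = use_nzbget = use_blackhole = False
    if nzb_downloader == 0:
        use_sabnzbd = True
    elif nzb_downloader == 1:
        use_nzbget = True
    elif nzb_downloader == 2:
        use_blackhole = True
    return use_sabnzbd, use_nzbget, use_blackhole
```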
SAB_HOST = check_setting_str(CFG, 'SABnzbd', 'sab_host', '')
SAB_USERNAME = check_setting_str(CFG, 'SABnzbd', 'sab_username', '')
SAB_PASSWORD = check_setting_str(CFG, 'SABnzbd', 'sab_password', '')
@@ -504,13 +519,17 @@ def initialize():
elif SAB_PRIORITY == "4": SAB_PRIORITY = "Paused"
else: SAB_PRIORITY = "Default"
#USE_NZBGET = bool(check_setting_int(CFG, 'NZBGet', 'use_nzbget', 0))
NZBGET_HOST = check_setting_str(CFG, 'NZBGet', 'nzbget_host', '')
NZBGET_PORT = check_setting_str(CFG, 'NZBGet', 'nzbget_port', '')
NZBGET_USERNAME = check_setting_str(CFG, 'NZBGet', 'nzbget_username', '')
NZBGET_PASSWORD = check_setting_str(CFG, 'NZBGet', 'nzbget_password', '')
NZBGET_CATEGORY = check_setting_str(CFG, 'NZBGet', 'nzbget_category', '')
NZBGET_PRIORITY = check_setting_str(CFG, 'NZBGet', 'nzbget_priority', '')
NZBGET_DIRECTORY = check_setting_str(CFG, 'NZBGet', 'nzbget_directory', '')
#USE_BLACKHOLE = bool(check_setting_int(CFG, 'General', 'use_blackhole', 0))
BLACKHOLE_DIR = check_setting_str(CFG, 'General', 'blackhole_dir', '')
PR_NUM = 0 # provider counter here (used for provider orders)
PR = []
@@ -716,9 +735,15 @@ def initialize():
# With the addition of NZBGet, it's possible that both SAB and NZBget are unchecked initially.
# let's force default SAB.
#if NZB_DOWNLOADER == None:
# logger.info('No Download Option selected - default to SABnzbd.')
# NZB_DOWNLOADER = 0
# USE_SABNZBD = 1
#else:
# logger.info('nzb_downloader is set to : ' + str(NZB_DOWNLOADER))
#if USE_NZBGET == 0 and USE_SABNZBD == 0 :
# logger.info('No Download Server option given - defaulting to SABnzbd.')
# USE_SABNZBD = 1
# Get the currently installed version - returns None, 'win32' or the git hash
# Also sets INSTALL_TYPE variable to 'win', 'git' or 'source'
@@ -851,6 +876,8 @@ def config_write():
new_config['General']['http_username'] = HTTP_USERNAME
new_config['General']['http_password'] = HTTP_PASSWORD
new_config['General']['http_root'] = HTTP_ROOT
new_config['General']['api_enabled'] = int(API_ENABLED)
new_config['General']['api_key'] = API_KEY
new_config['General']['launch_browser'] = int(LAUNCH_BROWSER)
new_config['General']['log_dir'] = LOG_DIR
new_config['General']['logverbose'] = int(LOGVERBOSE)
@@ -890,7 +917,7 @@ def config_write():
new_config['General']['rename_files'] = int(RENAME_FILES)
new_config['General']['folder_format'] = FOLDER_FORMAT
new_config['General']['file_format'] = FILE_FORMAT
#new_config['General']['use_blackhole'] = int(USE_BLACKHOLE)
new_config['General']['blackhole_dir'] = BLACKHOLE_DIR
new_config['General']['replace_spaces'] = int(REPLACE_SPACES)
new_config['General']['replace_char'] = REPLACE_CHAR
@@ -939,6 +966,7 @@ def config_write():
flattened_providers.append(item)
new_config['General']['provider_order'] = flattened_providers
new_config['General']['nzb_downloader'] = NZB_DOWNLOADER
new_config['Torrents'] = {}
new_config['Torrents']['enable_torrents'] = int(ENABLE_TORRENTS)
@@ -957,9 +985,8 @@ def config_write():
new_config['Torrents']['enable_cbt'] = int(ENABLE_CBT)
new_config['Torrents']['cbt_passkey'] = CBT_PASSKEY
new_config['SABnzbd'] = {}
#new_config['SABnzbd']['use_sabnzbd'] = int(USE_SABNZBD)
new_config['SABnzbd']['sab_host'] = SAB_HOST
new_config['SABnzbd']['sab_username'] = SAB_USERNAME
new_config['SABnzbd']['sab_password'] = SAB_PASSWORD
@@ -969,14 +996,14 @@ def config_write():
new_config['SABnzbd']['sab_directory'] = SAB_DIRECTORY
new_config['NZBGet'] = {}
#new_config['NZBGet']['use_nzbget'] = int(USE_NZBGET)
new_config['NZBGet']['nzbget_host'] = NZBGET_HOST
new_config['NZBGet']['nzbget_port'] = NZBGET_PORT
new_config['NZBGet']['nzbget_username'] = NZBGET_USERNAME
new_config['NZBGet']['nzbget_password'] = NZBGET_PASSWORD
new_config['NZBGet']['nzbget_category'] = NZBGET_CATEGORY
new_config['NZBGet']['nzbget_priority'] = NZBGET_PRIORITY
new_config['NZBGet']['nzbget_directory'] = NZBGET_DIRECTORY
new_config['NZBsu'] = {}
new_config['NZBsu']['nzbsu'] = int(NZBSU)
@@ -1104,7 +1131,7 @@ def dbcheck():
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readinglist(StoryArcID TEXT, ComicName TEXT, IssueNumber TEXT, SeriesYear TEXT, IssueYEAR TEXT, StoryArc TEXT, TotalIssues TEXT, Status TEXT, inCacheDir TEXT, Location TEXT, IssueArcID TEXT, ReadingOrder INT, IssueID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS annuals (IssueID TEXT, Issue_Number TEXT, IssueName TEXT, IssueDate TEXT, Status TEXT, ComicID TEXT, GCDComicID TEXT, Location TEXT, ComicSize TEXT, Int_IssueNumber INT, ComicName TEXT, ReleaseDate TEXT, ReleaseComicID TEXT, ReleaseComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS rssdb (Title TEXT UNIQUE, Link TEXT, Pubdate TEXT, Site TEXT, Size TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS futureupcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Publisher TEXT, Status TEXT, DisplayComicName TEXT)')
conn.commit
@@ -1283,6 +1310,20 @@ def dbcheck():
except:
c.execute('ALTER TABLE issues ADD COLUMN AltIssueNumber TEXT')
try:
c.execute('SELECT ReleaseDate from annuals')
except:
c.execute('ALTER TABLE annuals ADD COLUMN ReleaseDate TEXT')
try:
c.execute('SELECT ReleaseComicID from annuals')
except:
c.execute('ALTER TABLE annuals ADD COLUMN ReleaseComicID TEXT')
try:
c.execute('SELECT ReleaseComicName from annuals')
except:
c.execute('ALTER TABLE annuals ADD COLUMN ReleaseComicName TEXT')
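The try/except blocks above use SQLite's usual poor-man's migration: probe for a column with a SELECT and, when the probe raises, ALTER the table to add it. A standalone sketch of that pattern (helper name is illustrative, not Mylar's):

```python
import sqlite3

def ensure_column(conn, table, column, coltype='TEXT'):
    # Probe for the column; sqlite3 raises OperationalError if it is missing,
    # in which case we add it - mirroring the dbcheck() blocks above.
    c = conn.cursor()
    try:
        c.execute('SELECT %s from %s' % (column, table))
    except sqlite3.OperationalError:
        c.execute('ALTER TABLE %s ADD COLUMN %s %s' % (table, column, coltype))
        conn.commit()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE annuals (IssueID TEXT)')
ensure_column(conn, 'annuals', 'ReleaseDate')
```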
#if it's prior to Wednesday, the issue counts will be inflated by one as the online db's everywhere
#prepare for the next 'new' release of a series. It's caught in updater.py, so let's just store the
@@ -1309,6 +1350,7 @@ def dbcheck():
#let's delete errant comics that are stranded (ie. Comicname = Comic ID: )
c.execute("DELETE from COMICS WHERE ComicName='None' OR ComicName LIKE 'Comic ID%' OR ComicName is NULL")
c.execute("DELETE from ISSUES WHERE ComicName='None' OR ComicName LIKE 'Comic ID%' OR ComicName is NULL")
logger.info('Ensuring DB integrity - Removing all Erroneous Comics (ie. named None)')
logger.info('Correcting Null entries that make the main page break on startup.')

342
mylar/api.py Normal file
View File

@@ -0,0 +1,342 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
import mylar
from mylar import db, mb, importer, search, PostProcessor, versioncheck, logger, cache
import lib.simplejson as simplejson
from xml.dom.minidom import Document
import copy
cmd_list = [ 'getIndex', 'getComic', 'getUpcoming', 'getWanted', 'getHistory', 'getLogs',
'findComic', 'findIssue', 'addComic', 'delComic', 'pauseComic', 'resumeComic', 'refreshComic',
'addIssue', 'queueIssue', 'unqueueIssue', 'forceSearch', 'forceProcess', 'getVersion', 'checkGithub',
'shutdown', 'restart', 'update', 'getComicInfo', 'getIssueInfo']
class Api(object):
def __init__(self):
self.apikey = None
self.cmd = None
self.id = None
self.kwargs = None
self.data = None
self.callback = None
def checkParams(self,*args,**kwargs):
if not mylar.API_ENABLED:
self.data = 'API not enabled'
return
if not mylar.API_KEY:
self.data = 'API key not generated'
return
if len(mylar.API_KEY) != 32:
self.data = 'API key not generated correctly'
return
if 'apikey' not in kwargs:
self.data = 'Missing api key'
return
if kwargs['apikey'] != mylar.API_KEY:
self.data = 'Incorrect API key'
return
else:
self.apikey = kwargs.pop('apikey')
if 'cmd' not in kwargs:
self.data = 'Missing parameter: cmd'
return
if kwargs['cmd'] not in cmd_list:
self.data = 'Unknown command: %s' % kwargs['cmd']
return
else:
self.cmd = kwargs.pop('cmd')
self.kwargs = kwargs
self.data = 'OK'
def fetchData(self):
if self.data == 'OK':
logger.info('Received API command: ' + self.cmd)
methodToCall = getattr(self, "_" + self.cmd)
result = methodToCall(**self.kwargs)
if 'callback' not in self.kwargs:
if type(self.data) == type(''):
return self.data
else:
return simplejson.dumps(self.data)
else:
self.callback = self.kwargs['callback']
self.data = simplejson.dumps(self.data)
self.data = self.callback + '(' + self.data + ');'
return self.data
else:
return self.data
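The callback branch of fetchData above is plain JSONP: the JSON payload is wrapped in the caller-supplied function name and terminated with a semicolon. In isolation (using the stdlib json module here rather than the bundled simplejson):

```python
import json

def wrap_jsonp(data, callback=None):
    # Serialize to JSON; if a callback name was passed on the query
    # string, wrap the payload JSONP-style as fetchData does.
    payload = json.dumps(data)
    if callback:
        payload = callback + '(' + payload + ');'
    return payload
```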
def _dic_from_query(self,query):
myDB = db.DBConnection()
rows = myDB.select(query)
rows_as_dic = []
for row in rows:
row_as_dic = dict(zip(row.keys(), row))
rows_as_dic.append(row_as_dic)
return rows_as_dic
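The _dic_from_query helper above depends on rows exposing .keys(); with sqlite3.Row as the row factory, the same zip-into-dict conversion can be shown standalone (function and table names here are illustrative):

```python
import sqlite3

def rows_as_dicts(conn, query):
    # sqlite3.Row exposes .keys(), so the zip(row.keys(), row)
    # conversion above yields one plain dict per row.
    conn.row_factory = sqlite3.Row
    return [dict(zip(row.keys(), row)) for row in conn.execute(query)]

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE comics (ComicID TEXT, ComicSortName TEXT)')
conn.execute("INSERT INTO comics VALUES ('2127', 'Walking Dead, The')")
result = rows_as_dicts(conn, 'SELECT * from comics order by ComicSortName COLLATE NOCASE')
```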
def _getIndex(self, **kwargs):
self.data = self._dic_from_query('SELECT * from comics order by ComicSortName COLLATE NOCASE')
return
def _getComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
comic = self._dic_from_query('SELECT * from comics WHERE ComicID="' + self.id + '"')
issues = self._dic_from_query('SELECT * from issues WHERE ComicID="' + self.id + '" order by Int_IssueNumber DESC')
if mylar.ANNUALS_ON:
annuals = self._dic_from_query('SELECT * FROM annuals WHERE ComicID="' + self.id + '"')
else: annuals = None
self.data = { 'comic': comic, 'issues': issues, 'annuals': annuals }
return
def _getHistory(self, **kwargs):
self.data = self._dic_from_query('SELECT * from snatched order by DateAdded DESC')
return
def _getUpcoming(self, **kwargs):
self.data = self._dic_from_query("SELECT * from upcoming WHERE IssueID is NULL order by IssueDate DESC")
return
def _getWanted(self, **kwargs):
self.data = self._dic_from_query("SELECT * from issues WHERE Status='Wanted'")
return
def _getLogs(self, **kwargs):
pass
def _findArtist(self, **kwargs):
if 'name' not in kwargs:
self.data = 'Missing parameter: name'
return
if 'limit' in kwargs:
limit = kwargs['limit']
else:
limit=50
self.data = mb.findArtist(kwargs['name'], limit)
def _findAlbum(self, **kwargs):
if 'name' not in kwargs:
self.data = 'Missing parameter: name'
return
if 'limit' in kwargs:
limit = kwargs['limit']
else:
limit=50
self.data = mb.findRelease(kwargs['name'], limit)
def _addArtist(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
try:
importer.addComictoDB(self.id)
except Exception, e:
self.data = e
return
def _delComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
myDB = db.DBConnection()
myDB.action('DELETE from comics WHERE ComicID="' + self.id + '"')
myDB.action('DELETE from issues WHERE ComicID="' + self.id + '"')
myDB.action('DELETE from upcoming WHERE ComicID="' + self.id + '"')
def _pauseComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
myDB = db.DBConnection()
controlValueDict = {'ComicID': self.id}
newValueDict = {'Status': 'Paused'}
myDB.upsert("comics", newValueDict, controlValueDict)
def _resumeComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
myDB = db.DBConnection()
controlValueDict = {'ComicID': self.id}
newValueDict = {'Status': 'Active'}
myDB.upsert("comics", newValueDict, controlValueDict)
def _refreshComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
try:
importer.addComictoDB(self.id)
except Exception, e:
self.data = e
return
def _addComic(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
try:
importer.addReleaseById(self.id)
except Exception, e:
self.data = e
return
def _queueIssue(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
myDB = db.DBConnection()
controlValueDict = {'IssueID': self.id}
newValueDict = {'Status': 'Wanted'}
myDB.upsert("issues", newValueDict, controlValueDict)
search.searchforissue(self.id)
def _unqueueIssue(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
myDB = db.DBConnection()
controlValueDict = {'IssueID': self.id}
newValueDict = {'Status': 'Skipped'}
myDB.upsert("issues", newValueDict, controlValueDict)
def _forceSearch(self, **kwargs):
search.searchforissue()
def _forceProcess(self, **kwargs):
PostProcessor.forcePostProcess()
def _getVersion(self, **kwargs):
self.data = {
'git_path' : mylar.GIT_PATH,
'install_type' : mylar.INSTALL_TYPE,
'current_version' : mylar.CURRENT_VERSION,
'latest_version' : mylar.LATEST_VERSION,
'commits_behind' : mylar.COMMITS_BEHIND,
}
def _checkGithub(self, **kwargs):
versioncheck.checkGithub()
self._getVersion()
def _shutdown(self, **kwargs):
mylar.SIGNAL = 'shutdown'
def _restart(self, **kwargs):
mylar.SIGNAL = 'restart'
def _update(self, **kwargs):
mylar.SIGNAL = 'update'
def _getArtistArt(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
self.data = cache.getArtwork(ComicID=self.id)
def _getIssueArt(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
self.data = cache.getArtwork(IssueID=self.id)
def _getComicInfo(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
self.data = cache.getInfo(ComicID=self.id)
def _getIssueInfo(self, **kwargs):
if 'id' not in kwargs:
self.data = 'Missing parameter: id'
return
else:
self.id = kwargs['id']
self.data = cache.getInfo(IssueID=self.id)
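Given the request structure described for this API (base URL + HTTP_ROOT + /api?apikey=$apikey&cmd=$command), a client URL can be assembled with the stdlib; the host, port, and key below are placeholders, not real values:

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode  # Python 2, matching the era of this code

def api_url(base, http_root, apikey, cmd, **params):
    # base (e.g. 'http://localhost:8090') and http_root (e.g. '/') are
    # placeholders; apikey must be the generated 32-character key.
    query = dict(params, apikey=apikey, cmd=cmd)
    root = http_root.rstrip('/')
    return base + root + '/api?' + urlencode(sorted(query.items()))
```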

View File

@@ -31,14 +31,15 @@ def pulldetails(comicid,type,issueid=None,offset=1):
comicapi='583939a3df0a25fc4e8b7a29934a13078002dc27'
if type == 'comic':
if not comicid.startswith('4050-'): comicid = '4050-' + comicid
PULLURL= mylar.CVURL + 'volume/' + str(comicid) + '/?api_key=' + str(comicapi) + '&format=xml&field_list=name,count_of_issues,issues,start_year,site_detail_url,image,publisher,description,first_issue,deck'
elif type == 'issue':
if mylar.CV_ONLY:
cv_type = 'issues'
searchset = 'filter=volume:' + str(comicid) + '&field_list=cover_date,description,id,image,issue_number,name,date_last_updated,store_date'
else:
cv_type = 'volume/' + str(comicid)
searchset = 'name,count_of_issues,issues,start_year,site_detail_url,image,publisher,description,store_date'
PULLURL = mylar.CVURL + str(cv_type) + '/?api_key=' + str(comicapi) + '&format=xml&' + str(searchset) + '&offset=' + str(offset)
elif type == 'firstissue':
#this is used ONLY for CV_ONLY
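The new guard in pulldetails normalizes a bare ComicVine volume id by adding the 4050- type prefix exactly once. Sketched standalone (function name is illustrative):

```python
def normalize_volume_id(comicid):
    # ComicVine volume ids carry a '4050-' type prefix in the URL path;
    # add it only when it is not already present, as the diff above does.
    comicid = str(comicid)
    if not comicid.startswith('4050-'):
        comicid = '4050-' + comicid
    return comicid
```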
@@ -131,39 +132,93 @@ def GetComicInfo(comicid,dom):
except:
comic['ComicYear'] = '0000'
comic['ComicURL'] = dom.getElementsByTagName('site_detail_url')[trackcnt].firstChild.wholeText
desdeck = 0
#the description field actually holds the Volume# - so let's grab it
try:
descchunk = dom.getElementsByTagName('description')[0].firstChild.wholeText
comic_desc = drophtml(descchunk)
desdeck +=1
except:
comic_desc = 'None'
#sometimes the deck has volume labels
try:
deckchunk = dom.getElementsByTagName('deck')[0].firstChild.wholeText
comic_deck = deckchunk
desdeck +=1
except:
comic_deck = 'None'
comic['ComicVersion'] = 'noversion'
#logger.info('comic_desc:' + comic_desc)
#logger.info('comic_deck:' + comic_deck)
#logger.info('desdeck: ' + str(desdeck))
while (desdeck > 0):
if desdeck == 1:
if comic_desc == 'None':
comicDes = comic_deck[:30]
else:
#extract the first 60 characters
comicDes = comic_desc[:60].replace('New 52', '')
elif desdeck == 2:
#extract the characters from the deck
comicDes = comic_deck[:30].replace('New 52', '')
else:
break
i = 0
while (i < 2):
if 'volume' in comicDes.lower():
#found volume - let's grab it.
v_find = comicDes.lower().find('volume')
#arbitrarily grab the next 10 chars (6 for volume + 1 for space + 3 for the actual vol #)
#increased to 10 to allow for text numbering (+5 max)
#sometimes it's volume 5 and occasionally it's fifth volume.
if i == 0:
vfind = comicDes[v_find:v_find+15] #if it's volume 5 format
basenums = {'zero':'0','one':'1','two':'2','three':'3','four':'4','five':'5','six':'6','seven':'7','eight':'8','nine':'9','ten':'10'}
#logger.fdebug(str(i) + ': ' + str(vfind))
else:
vfind = comicDes[:v_find] # if it's fifth volume format
basenums = {'zero':'0','first':'1','second':'2','third':'3','fourth':'4','fifth':'5','sixth':'6','seventh':'7','eighth':'8','nineth':'9','tenth':'10'}
#logger.fdebug(str(i) + ': ' + str(vfind))
volconv = ''
for nums in basenums:
if nums in vfind.lower():
sconv = basenums[nums]
vfind = re.sub(nums, sconv, vfind.lower())
break
#logger.fdebug('volconv: ' + str(volconv))
#now we attempt to find the character position after the word 'volume'
if i == 0:
volthis = vfind.lower().find('volume')
volthis = volthis + 6 # add on the actual word to the position so that we can grab the subsequent digit
vfind = vfind[volthis:volthis+4] #grab the next 4 characters ;)
elif i == 1:
volthis = vfind.lower().find('volume')
vfind = vfind[volthis-4:volthis] #grab the 4 characters preceding the word
if '(' in vfind:
#bracket detected in versioning'
vfindit = re.findall('[^()]+', vfind)
vfind = vfindit[0]
vf = re.findall('[^<>]+', vfind)
ledigit = re.sub("[^0-9]", "", vf[0])
if ledigit != '':
comic['ComicVersion'] = ledigit
logger.info("Volume information found! Adding to series record : volume " + comic['ComicVersion'])
break
i+=1
else:
i+=1
if comic['ComicVersion'] == 'noversion':
logger.info('comic[ComicVersion]:' + str(comic['ComicVersion']))
desdeck -=1
else:
break
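The two-pass loop above handles both the 'Volume 5' and 'Fifth Volume' phrasings by substituting number words before stripping non-digits. A compressed sketch of the first pass only (names and the truncated word list are illustrative):

```python
import re

# Subset of the basenums mapping used above, for illustration.
BASENUMS = {'one': '1', 'two': '2', 'three': '3', 'four': '4', 'five': '5'}

def volume_from_desc(desc):
    # Mirror pass i == 0: locate 'volume', map any number word to a digit,
    # then keep only digits from the few characters after the word.
    low = desc.lower()
    v = low.find('volume')
    if v == -1:
        return 'noversion'
    vfind = low[v:v + 15]
    for word, digit in BASENUMS.items():
        if word in vfind:
            vfind = vfind.replace(word, digit)
            break
    digits = re.sub('[^0-9]', '', vfind[6:10])
    return digits or 'noversion'
```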
if vari == "yes":
comic['ComicIssues'] = str(cntit)
@@ -235,11 +290,17 @@ def GetIssuesInfo(comicid,dom):
tempissue['CoverDate'] = subtrack.getElementsByTagName('cover_date')[0].firstChild.wholeText
except:
tempissue['CoverDate'] = '0000-00-00'
try:
tempissue['StoreDate'] = subtrack.getElementsByTagName('store_date')[0].firstChild.wholeText
except:
tempissue['StoreDate'] = '0000-00-00'
tempissue['Issue_Number'] = subtrack.getElementsByTagName('issue_number')[0].firstChild.wholeText
issuech.append({
'Comic_ID': comicid,
'Issue_ID': tempissue['Issue_ID'],
'Issue_Number': tempissue['Issue_Number'],
'Issue_Date': tempissue['CoverDate'],
'Store_Date': tempissue['StoreDate'],
'Issue_Name': tempissue['Issue_Name']
})

View File

@@ -27,7 +27,7 @@ def file2comicmatch(watchmatch):
#print ("match: " + str(watchmatch))
pass
def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=None):
# use AlternateSearch to check for filenames that follow that naming pattern
# ie. Star Trek TNG Doctor Who Assimilation won't get hits as the
@@ -35,9 +35,9 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
# we need to convert to ascii, as watchcomic is utf-8 and special chars f'it up
u_watchcomic = watchcomic.encode('ascii', 'ignore').strip()
logger.fdebug('[FILECHECKER] comic: ' + watchcomic)
basedir = dir
logger.fdebug('[FILECHECKER] Looking in: ' + dir)
watchmatch = {}
comiclist = []
comiccnt = 0
@@ -64,8 +64,14 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
'B',
'C']
extensions = ('.cbr', '.cbz')
for item in os.listdir(basedir):
if item == 'cover.jpg' or item == 'cvinfo': continue
if not item.endswith(extensions):
logger.fdebug('[FILECHECKER] filename not a valid cbr/cbz - ignoring: ' + item)
continue
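The cbr/cbz gate added above is a plain str.endswith check against a tuple, after skipping the cover.jpg/cvinfo special cases. The same filter over a directory listing (function name is illustrative):

```python
def comic_archives(filenames):
    # Keep only cbr/cbz archives, skipping the metadata files that the
    # loop above special-cases before the extension check.
    extensions = ('.cbr', '.cbz')
    return [f for f in filenames
            if f not in ('cover.jpg', 'cvinfo') and f.endswith(extensions)]
```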
#print item
#subname = os.path.join(basedir, item)
subname = item
@@ -86,7 +92,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
elif subit.lower()[:3] == 'vol':
#if in format vol.2013 etc
#because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
logger.fdebug('[FILECHECKER] volume indicator detected as version #:' + str(subit))
subname = re.sub(subit, '', subname)
volrem = subit
@@ -104,42 +110,73 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
find19 = i.find('19')
if find19:
stf = i[find19:4].strip()
logger.fdebug('[FILECHECKER] stf is : ' + str(stf))
if stf.isdigit():
numberinseries = 'True'
logger.fdebug('[FILECHECKER] numberinseries: ' + numberinseries)
#remove the brackets..
subnm = re.findall('[^()]+', subname)
logger.fdebug('[FILECHECKER] subnm len : ' + str(len(subnm)))
if len(subnm) == 1:
logger.fdebug('[FILECHECKER] ' + str(len(subnm)) + ': detected invalid filename - attempting to detect year to continue')
#if the series has digits this f's it up.
if numberinseries == 'True':
#we need to remove the series from the subname and then search the remainder.
watchname = re.sub('[-\:\;\!\'\/\?\+\=\_\%\.]', '', watchcomic) #remove spec chars for watchcomic match.
logger.fdebug('[FILECHECKER] watch-cleaned: ' + str(watchname))
subthis = re.sub('.cbr', '', subname)
subthis = re.sub('.cbz', '', subthis)
subthis = re.sub('[-\:\;\!\'\/\?\+\=\_\%\.]', '', subthis)
logger.fdebug('[FILECHECKER] sub-cleaned: ' + str(subthis))
subthis = subthis[len(watchname):] #remove watchcomic
#we need to now check the remainder of the string for digits assuming it's a possible year
logger.fdebug('[FILECHECKER] new subname: ' + str(subthis))
subname = re.sub('(.*)\s+(19\d{2}|20\d{2})(.*)', '\\1 (\\2) \\3', subthis)
subname = watchcomic + subname
subnm = re.findall('[^()]+', subname)
else:
subit = re.sub('(.*)\s+(19\d{2}|20\d{2})(.*)', '\\1 (\\2) \\3', subname)
subthis2 = re.sub('.cbr', '', subit)
subthis1 = re.sub('.cbz', '', subthis2)
subname = re.sub('[-\:\;\!\'\/\?\+\=\_\%\.]', '', subthis1)
subnm = re.findall('[^()]+', subname) subnm = re.findall('[^()]+', subname)
if Publisher.lower() in subname.lower():
#if the Publisher is given within the title or filename even (for some reason, some people
#have this to distinguish different titles), let's remove it entirely.
lenm = len(subnm)
cnt = 0
pub_removed = None
while (cnt < lenm):
if subnm[cnt] is None: break
if subnm[cnt] == ' ':
pass
else:
logger.fdebug(str(cnt) + ". Bracket Word: " + str(subnm[cnt]))
if Publisher.lower() in subnm[cnt].lower() and cnt >= 1:
logger.fdebug('Publisher detected within title : ' + str(subnm[cnt]))
logger.fdebug('cnt is : ' + str(cnt) + ' --- Publisher is: ' + Publisher)
pub_removed = subnm[cnt]
#-strip publisher if exists here-
logger.fdebug('removing publisher from title')
subname_pubremoved = re.sub(pub_removed, '', subname)
logger.fdebug('pubremoved : ' + str(subname_pubremoved))
subname_pubremoved = re.sub('\(\)', '', subname_pubremoved) #remove empty brackets
subname_pubremoved = re.sub('\s+', ' ', subname_pubremoved) #remove spaces > 1
logger.fdebug('blank brackets removed: ' + str(subname_pubremoved))
subnm = re.findall('[^()]+', subname_pubremoved)
break
cnt+=1
subname = subnm[0] subname = subnm[0]
if len(subnm): if len(subnm):
# if it still has no year (brackets), check setting and either assume no year needed. # if it still has no year (brackets), check setting and either assume no year needed.
subname = subname subname = subname
logger.fdebug('subname no brackets: ' + str(subname)) logger.fdebug('[FILECHECKER] subname no brackets: ' + str(subname))
subname = re.sub('\_', ' ', subname) subname = re.sub('\_', ' ', subname)
nonocount = 0 nonocount = 0
charpos = 0 charpos = 0
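The publisher-stripping pass added in the hunk above can be sketched in isolation. This is a simplified, standalone approximation (the function name and sample filename are illustrative, not from the Mylar codebase): split the filename into bracketed segments, and if a non-leading segment contains the publisher, remove it and tidy the leftover brackets and spaces.

```python
import re

def strip_publisher(subname, publisher):
    # Split into bracketed/non-bracketed segments, as the filechecker
    # does with re.findall('[^()]+', ...).
    segments = re.findall('[^()]+', subname)
    for idx, seg in enumerate(segments):
        # Only treat a segment as the publisher when it is not the leading
        # title segment (idx >= 1) and the publisher name appears in it.
        if idx >= 1 and publisher.lower() in seg.lower():
            cleaned = subname.replace(seg, '')
            cleaned = re.sub(r'\(\)', '', cleaned)   # drop the emptied brackets
            cleaned = re.sub(r'\s+', ' ', cleaned)   # collapse runs of spaces
            return cleaned.strip()
    return subname

print(strip_publisher('Batman (DC Comics) 001 (2013)', 'DC Comics'))
# Batman 001 (2013)
```

The `cnt >= 1` guard in the real code serves the same purpose as `idx >= 1` here: a series whose title happens to contain the publisher's name should not have its leading title segment stripped.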
@@ -162,22 +199,22 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 else:
 sublimit = subname[j+1:j+2]
 if sublimit.isdigit():
-logger.fdebug('possible negative issue detected.')
+logger.fdebug('[FILECHECKER] possible negative issue detected.')
 nonocount = nonocount + subcnt - 1
 detneg = "yes"
 elif '-' in watchcomic and i < len(watchcomic):
-logger.fdebug('- appears in series title.')
-logger.fdebug('up to - :' + subname[:j+1].replace('-', ' '))
-logger.fdebug('after - :' + subname[j+1:])
+logger.fdebug('[FILECHECKER] - appears in series title.')
+logger.fdebug('[FILECHECKER] up to - :' + subname[:j+1].replace('-', ' '))
+logger.fdebug('[FILECHECKER] after - :' + subname[j+1:])
 subname = subname[:j+1].replace('-', ' ') + subname[j+1:]
-logger.fdebug('new subname is : ' + str(subname))
+logger.fdebug('[FILECHECKER] new subname is : ' + str(subname))
 should_restart = True
 leavehyphen = True
 i+=1
 if detneg == "no" or leavehyphen == False:
 subname = re.sub(str(nono), ' ', subname)
 nonocount = nonocount + subcnt
-#logger.fdebug(str(nono) + " detected " + str(subcnt) + " times.")
+#logger.fdebug('[FILECHECKER] ' + str(nono) + " detected " + str(subcnt) + " times.")
 # segment '.' having a . by itself will denote the entire string which we don't want
 elif nono == '.':
 x = 0
@@ -186,7 +223,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 while x < subcnt:
 fndit = subname.find(nono, fndit)
 if subname[fndit-1:fndit].isdigit() and subname[fndit+1:fndit+2].isdigit():
-logger.fdebug('decimal issue detected.')
+logger.fdebug('[FILECHECKER] decimal issue detected.')
 dcspace+=1
 x+=1
 if dcspace == 1:
@@ -204,7 +241,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 #print ("space before check: " + str(subname[fndit-1:fndit]))
 #print ("space after check: " + str(subname[fndit+1:fndit+2]))
 if subname[fndit-1:fndit] == ' ' and subname[fndit+1:fndit+2] == ' ':
-logger.fdebug('blankspace detected before and after ' + str(nono))
+logger.fdebug('[FILECHECKER] blankspace detected before and after ' + str(nono))
 blspc+=1
 x+=1
 subname = re.sub(str(nono), ' ', subname)
@@ -213,7 +250,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 modwatchcomic = re.sub('[\_\#\,\/\:\;\.\!\$\%\'\?\@\-]', ' ', u_watchcomic)
 #if leavehyphen == False:
-# logger.fdebug('removing hyphen for comparisons')
+# logger.fdebug('[FILECHECKER] removing hyphen for comparisons')
 # modwatchcomic = re.sub('-', ' ', modwatchcomic)
 # subname = re.sub('-', ' ', subname)
 detectand = False
@@ -221,7 +258,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 modwatchcomic = re.sub('\&', ' and ', modwatchcomic)
 if ' the ' in modwatchcomic.lower():
 modwatchcomic = re.sub("\\bthe\\b", "", modwatchcomic.lower())
-logger.fdebug('new modwatchcomic: ' + str(modwatchcomic))
+logger.fdebug('[FILECHECKER] new modwatchcomic: ' + str(modwatchcomic))
 detectthe = True
 modwatchcomic = re.sub('\s+', ' ', str(modwatchcomic)).strip()
 if '&' in subname:
@@ -251,17 +288,17 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 AS_Alt.append(altsearchcomic)
 #if '_' in subname:
 # subname = subname.replace('_', ' ')
-logger.fdebug('watchcomic:' + str(modwatchcomic) + ' ..comparing to found file: ' + str(subname))
+logger.fdebug('[FILECHECKER] watchcomic:' + str(modwatchcomic) + ' ..comparing to found file: ' + str(subname))
 if modwatchcomic.lower() in subname.lower() or any(x.lower() in subname.lower() for x in AS_Alt): #altsearchcomic.lower() in subname.lower():
 comicpath = os.path.join(basedir, item)
-logger.fdebug( modwatchcomic + ' - watchlist match on : ' + comicpath)
+logger.fdebug('[FILECHECKER] ' + modwatchcomic + ' - watchlist match on : ' + comicpath)
 comicsize = os.path.getsize(comicpath)
 #print ("Comicsize:" + str(comicsize))
 comiccnt+=1
 stann = 0
 if 'annual' in subname.lower():
-logger.fdebug('Annual detected - proceeding')
+logger.fdebug('[FILECHECKER] Annual detected - proceeding')
 jtd_len = subname.lower().find('annual')
 cchk = modwatchcomic
 else:
@@ -272,11 +309,11 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 cchk = cchk_ls[0]
 #print "something: " + str(cchk)
-logger.fdebug('we should remove ' + str(nonocount) + ' characters')
+logger.fdebug('[FILECHECKER] we should remove ' + str(nonocount) + ' characters')
 findtitlepos = subname.find('-')
 if charpos != 0:
-logger.fdebug('detected ' + str(len(charpos)) + ' special characters')
+logger.fdebug('[FILECHECKER] detected ' + str(len(charpos)) + ' special characters')
 i=0
 while (i < len(charpos)):
 for i,j in enumerate(charpos):
@@ -284,22 +321,22 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 #print subname
 #print "digitchk: " + str(subname[j:])
 if j >= len(subname):
-logger.fdebug('end reached. ignoring remainder.')
+logger.fdebug('[FILECHECKER] end reached. ignoring remainder.')
 break
 elif subname[j:] == '-':
 if i <= len(subname) and subname[i+1].isdigit():
-logger.fdebug('negative issue detected.')
+logger.fdebug('[FILECHECKER] negative issue detected.')
 #detneg = "yes"
 elif j > findtitlepos:
 if subname[j:] == '#':
 if subname[i+1].isdigit():
-logger.fdebug('# detected denoting issue#, ignoring.')
+logger.fdebug('[FILECHECKER] # detected denoting issue#, ignoring.')
 else:
 nonocount-=1
 elif '-' in watchcomic and i < len(watchcomic):
-logger.fdebug('- appears in series title, ignoring.')
+logger.fdebug('[FILECHECKER] - appears in series title, ignoring.')
 else:
-logger.fdebug('special character appears outside of title - ignoring @ position: ' + str(charpos[i]))
+logger.fdebug('[FILECHECKER] special character appears outside of title - ignoring @ position: ' + str(charpos[i]))
 nonocount-=1
 i+=1
@@ -313,9 +350,9 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 removest = subname.find(' ') # the - gets removed above so we test for the first blank space...
 if subname[:removest].isdigit():
 jtd_len += removest + 1 # +1 to account for space in place of -
-logger.fdebug('adjusted jtd_len to : ' + str(removest) + ' because of story-arc reading order tags')
-logger.fdebug('nonocount [' + str(nonocount) + '] cchk [' + cchk + '] length [' + str(len(cchk)) + ']')
+logger.fdebug('[FILECHECKER] adjusted jtd_len to : ' + str(removest) + ' because of story-arc reading order tags')
+logger.fdebug('[FILECHECKER] nonocount [' + str(nonocount) + '] cchk [' + cchk + '] length [' + str(len(cchk)) + ']')
 #if detectand:
 # jtd_len = jtd_len - 2 # char substitution diff between & and 'and' = 2 chars
@@ -324,54 +361,54 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 #justthedigits = item[jtd_len:]
-logger.fdebug('final jtd_len to prune [' + str(jtd_len) + ']')
-logger.fdebug('before title removed from FILENAME [' + str(item) + ']')
-logger.fdebug('after title removed from FILENAME [' + str(item[jtd_len:]) + ']')
-logger.fdebug('creating just the digits using SUBNAME, pruning first [' + str(jtd_len) + '] chars from [' + subname + ']')
+logger.fdebug('[FILECHECKER] final jtd_len to prune [' + str(jtd_len) + ']')
+logger.fdebug('[FILECHECKER] before title removed from FILENAME [' + str(item) + ']')
+logger.fdebug('[FILECHECKER] after title removed from FILENAME [' + str(item[jtd_len:]) + ']')
+logger.fdebug('[FILECHECKER] creating just the digits using SUBNAME, pruning first [' + str(jtd_len) + '] chars from [' + subname + ']')
 justthedigits_1 = subname[jtd_len:].strip()
-logger.fdebug('after title removed from SUBNAME [' + justthedigits_1 + ']')
+logger.fdebug('[FILECHECKER] after title removed from SUBNAME [' + justthedigits_1 + ']')
 #remove the title if it appears
 #findtitle = justthedigits.find('-')
 #if findtitle > 0 and detneg == "no":
 # justthedigits = justthedigits[:findtitle]
-# logger.fdebug("removed title from name - is now : " + str(justthedigits))
+# logger.fdebug('[FILECHECKER] removed title from name - is now : ' + str(justthedigits))
 justthedigits = justthedigits_1.split(' ', 1)[0]
 digitsvalid = "false"
 for jdc in list(justthedigits):
-#logger.fdebug('jdc:' + str(jdc))
+#logger.fdebug('[FILECHECKER] jdc:' + str(jdc))
 if not jdc.isdigit():
-#logger.fdebug('alpha')
+#logger.fdebug('[FILECHECKER] alpha')
 jdc_start = justthedigits.find(jdc)
 alpha_isschk = justthedigits[jdc_start:]
-#logger.fdebug('alpha_isschk:' + str(alpha_isschk))
+#logger.fdebug('[FILECHECKER] alpha_isschk:' + str(alpha_isschk))
 for issexcept in issue_exceptions:
 if issexcept.lower() in alpha_isschk.lower() and len(alpha_isschk) <= len(issexcept):
-logger.fdebug('ALPHANUMERIC EXCEPTION : [' + justthedigits + ']')
+logger.fdebug('[FILECHECKER] ALPHANUMERIC EXCEPTION : [' + justthedigits + ']')
 digitsvalid = "true"
 break
 if digitsvalid == "true": break
 try:
 tmpthedigits = justthedigits_1.split(' ', 1)[1]
-logger.fdebug('If the series has a decimal, this should be a number [' + tmpthedigits + ']')
+logger.fdebug('[FILECHECKER] If the series has a decimal, this should be a number [' + tmpthedigits + ']')
 if 'cbr' in tmpthedigits.lower() or 'cbz' in tmpthedigits.lower():
 tmpthedigits = tmpthedigits[:-3].strip()
-logger.fdebug('Removed extension - now we should just have a number [' + tmpthedigits + ']')
+logger.fdebug('[FILECHECKER] Removed extension - now we should just have a number [' + tmpthedigits + ']')
 poss_alpha = tmpthedigits
 if poss_alpha.isdigit():
 digitsvalid = "true"
 if justthedigits.lower() == 'annual':
-logger.fdebug('ANNUAL DETECTED [' + poss_alpha + ']')
+logger.fdebug('[FILECHECKER] ANNUAL DETECTED [' + poss_alpha + ']')
 justthedigits += ' ' + poss_alpha
 else:
 justthedigits += '.' + poss_alpha
-logger.fdebug('DECIMAL ISSUE DETECTED [' + justthedigits + ']')
+logger.fdebug('[FILECHECKER] DECIMAL ISSUE DETECTED [' + justthedigits + ']')
 else:
 for issexcept in issue_exceptions:
 decimalexcept = False
@@ -382,7 +419,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 if decimalexcept:
 issexcept = '.' + issexcept
 justthedigits += issexcept #poss_alpha
-logger.fdebug('ALPHANUMERIC EXCEPTION. COMBINING : [' + justthedigits + ']')
+logger.fdebug('[FILECHECKER] ALPHANUMERIC EXCEPTION. COMBINING : [' + justthedigits + ']')
 digitsvalid = "true"
 break
 except:
@@ -391,7 +428,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 # justthedigits = justthedigits.split(' ', 1)[0]
 #if the issue has an alphanumeric (issue_exceptions, join it and push it through)
-logger.fdebug('JUSTTHEDIGITS [' + justthedigits + ']')
+logger.fdebug('[FILECHECKER] JUSTTHEDIGITS [' + justthedigits + ']')
 if digitsvalid == "true":
 pass
 else:
@@ -403,7 +440,7 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 b4dec = justthedigits[:tmpdec]
 a4dec = justthedigits[tmpdec+1:]
 if a4dec.isdigit() and b4dec.isdigit():
-logger.fdebug('DECIMAL ISSUE DETECTED')
+logger.fdebug('[FILECHECKER] DECIMAL ISSUE DETECTED')
 digitsvalid = "true"
 else:
 try:
@@ -418,11 +455,11 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 # else:
-# logger.fdebug('NO DECIMALS DETECTED')
+# logger.fdebug('[FILECHECKER] NO DECIMALS DETECTED')
 # digitsvalid = "false"
 # if justthedigits.lower() == 'annual':
-# logger.fdebug('ANNUAL [' + tmpthedigits.split(' ', 1)[1] + ']')
+# logger.fdebug('[FILECHECKER] ANNUAL [' + tmpthedigits.split(' ', 1)[1] + ']')
 # justthedigits += ' ' + tmpthedigits.split(' ', 1)[1]
 # digitsvalid = "true"
 # else:
@@ -432,19 +469,19 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 # if poss_alpha.isdigit():
 # digitsvalid = "true"
 # justthedigits += '.' + poss_alpha
-# logger.fdebug('DECIMAL ISSUE DETECTED [' + justthedigits + ']')
+# logger.fdebug('[FILECHECKER] DECIMAL ISSUE DETECTED [' + justthedigits + ']')
 # for issexcept in issue_exceptions:
 # if issexcept.lower() in poss_alpha.lower() and len(poss_alpha) <= len(issexcept):
 # justthedigits += poss_alpha
-# logger.fdebug('ALPHANUMERIC EXCEPTION. COMBINING : [' + justthedigits + ']')
+# logger.fdebug('[FILECHECKER] ALPHANUMERIC EXCEPTION. COMBINING : [' + justthedigits + ']')
 # digitsvalid = "true"
 # break
 # except:
 # pass
-logger.fdebug('final justthedigits [' + justthedigits + ']')
+logger.fdebug('[FILECHECKER] final justthedigits [' + justthedigits + ']')
 if digitsvalid == "false":
-logger.fdebug('Issue number not properly detected...ignoring.')
+logger.fdebug('[FILECHECKER] Issue number not properly detected...ignoring.')
 comiccnt -=1 # remove the entry from the list count as it was incorrectly tallied.
 continue
@@ -455,21 +492,35 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 # in case it matches on an Alternate Search pattern, set modwatchcomic to the cchk value
 modwatchcomic = cchk
-logger.fdebug('cchk = ' + cchk.lower())
-logger.fdebug('modwatchcomic = ' + modwatchcomic.lower())
-logger.fdebug('subname = ' + subname.lower())
+logger.fdebug('[FILECHECKER] cchk = ' + cchk.lower())
+logger.fdebug('[FILECHECKER] modwatchcomic = ' + modwatchcomic.lower())
+logger.fdebug('[FILECHECKER] subname = ' + subname.lower())
 comyear = manual['SeriesYear']
 issuetotal = manual['Total']
 comicvolume = manual['ComicVersion']
-logger.fdebug('SeriesYear: ' + str(comyear))
-logger.fdebug('IssueTotal: ' + str(issuetotal))
-logger.fdebug('Comic Volume: ' + str(comicvolume))
-logger.fdebug('volume detected: ' + str(volrem))
+logger.fdebug('[FILECHECKER] SeriesYear: ' + str(comyear))
+logger.fdebug('[FILECHECKER] IssueTotal: ' + str(issuetotal))
+logger.fdebug('[FILECHECKER] Comic Volume: ' + str(comicvolume))
+logger.fdebug('[FILECHECKER] volume detected: ' + str(volrem))
+if comicvolume:
+ComVersChk = re.sub("[^0-9]", "", comicvolume)
+if ComVersChk == '' or ComVersChk == '1':
+ComVersChk = 0
+else:
+ComVersChk = 0
+# even if it's a V1, we need to pull the date for the given issue ID and get the publication year
+# for the issue. Because even if it's a V1, if there are additional Volumes then it's possible that
+# it will take the incorrect series. (ie. Detective Comics (1937) & Detective Comics (2011).
+# If issue #28 (2013) is found, it exists in both series, and because DC 1937 is a V1, it will bypass
+# the year check which will result in the incorrect series being picked (1937)
 #set the issue/year threshold here.
 # 2013 - (24issues/12) = 2011.
 #minyear = int(comyear) - (int(issuetotal) / 12)
 maxyear = manual['LatestDate'][:4] # yyyy-mm-dd
 #subnm defined at beginning of module.
@@ -477,28 +528,28 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 #print ("there are " + str(lenm) + " words.")
 cnt = 0
-yearmatch = "false"
+yearmatch = "none"
 vers4year = "no"
 vers4vol = "no"
 for ct in subsplit:
 if ct.lower().startswith('v') and ct[1:].isdigit():
-logger.fdebug("possible versioning..checking")
+logger.fdebug('[FILECHECKER] possible versioning..checking')
 #we hit a versioning # - account for it
 if ct[1:].isdigit():
 if len(ct[1:]) == 4: #v2013
-logger.fdebug("Version detected as " + str(ct))
+logger.fdebug('[FILECHECKER] Version detected as ' + str(ct))
 vers4year = "yes" #re.sub("[^0-9]", " ", str(ct)) #remove the v
 break
 else:
 if len(ct) < 4:
-logger.fdebug("Version detected as " + str(ct))
+logger.fdebug('[FILECHECKER] Version detected as ' + str(ct))
 vers4vol = str(ct)
 break
-logger.fdebug("false version detection..ignoring.")
+logger.fdebug('[FILECHECKER] false version detection..ignoring.')
+versionmatch = "false"
 if vers4year is not "no" or vers4vol is not "no":
-yearmatch = "false"
 if comicvolume: #is not "None" and comicvolume is not None:
 D_ComicVersion = re.sub("[^0-9]", "", comicvolume)
@@ -509,42 +560,62 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
 F_ComicVersion = re.sub("[^0-9]", "", volrem)
 S_ComicVersion = str(comyear)
-logger.fdebug("FCVersion: " + str(F_ComicVersion))
-logger.fdebug("DCVersion: " + str(D_ComicVersion))
-logger.fdebug("SCVersion: " + str(S_ComicVersion))
+logger.fdebug('[FILECHECKER] FCVersion: ' + str(F_ComicVersion))
+logger.fdebug('[FILECHECKER] DCVersion: ' + str(D_ComicVersion))
+logger.fdebug('[FILECHECKER] SCVersion: ' + str(S_ComicVersion))
 #if annualize == "true" and int(ComicYear) == int(F_ComicVersion):
-# logger.fdebug("We matched on versions for annuals " + str(volrem))
+# logger.fdebug('[FILECHECKER] We matched on versions for annuals ' + str(volrem))
 if int(F_ComicVersion) == int(D_ComicVersion) or int(F_ComicVersion) == int(S_ComicVersion):
-logger.fdebug("We matched on versions..." + str(volrem))
-yearmatch = "true"
-else:
-logger.fdebug("Versions wrong. Ignoring possible match.")
-else:
-while (cnt < len_sm):
-if subnm[cnt] is None: break
-if subnm[cnt] == ' ':
-pass
-else:
-logger.fdebug(str(cnt) + ". Bracket Word: " + str(subnm[cnt]))
-if subnm[cnt][:-2] == '19' or subnm[cnt][:-2] == '20':
-logger.fdebug("year detected: " + str(subnm[cnt]))
-result_comyear = subnm[cnt]
-if int(result_comyear) <= int(maxyear):
-logger.fdebug(str(result_comyear) + ' is within the series range of ' + str(comyear) + '-' + str(maxyear))
-#still possible for incorrect match if multiple reboots of series end/start in same year
-yearmatch = "true"
-break
-else:
-logger.fdebug(str(result_comyear) + ' - not right - year not within series range of ' + str(comyear) + '-' + str(maxyear))
-yearmatch = "false"
-break
-cnt+=1
-if yearmatch == "false": continue
+logger.fdebug('[FILECHECKER] We matched on versions...' + str(volrem))
+versionmatch = "true"
+else:
+logger.fdebug('[FILECHECKER] Versions wrong. Ignoring possible match.')
+#else:
+while (cnt < len_sm):
+if subnm[cnt] is None: break
+if subnm[cnt] == ' ':
+pass
+else:
+logger.fdebug('[FILECHECKER] ' + str(cnt) + ' Bracket Word: ' + str(subnm[cnt]))
+#if ComVersChk == 0:
+# logger.fdebug('[FILECHECKER] Series version detected as V1 (only series in existence with that title). Bypassing year check')
+# yearmatch = "true"
+# break
+if subnm[cnt][:-2] == '19' or subnm[cnt][:-2] == '20':
+logger.fdebug('[FILECHECKER] year detected: ' + str(subnm[cnt]))
+result_comyear = subnm[cnt]
+if int(result_comyear) <= int(maxyear):
+logger.fdebug('[FILECHECKER] ' + str(result_comyear) + ' is within the series range of ' + str(comyear) + '-' + str(maxyear))
+#still possible for incorrect match if multiple reboots of series end/start in same year
+yearmatch = "true"
+break
+else:
+logger.fdebug('[FILECHECKER] ' + str(result_comyear) + ' - not right - year not within series range of ' + str(comyear) + '-' + str(maxyear))
+yearmatch = "false"
+break
+cnt+=1
+if versionmatch == "false":
+if yearmatch == "false":
+logger.fdebug('[FILECHECKER] Failed to match on both version and issue year.')
+continue
+else:
+logger.fdebug('[FILECHECKER] Matched on versions, not on year - continuing.')
+else:
+if yearmatch == "false":
+logger.fdebug('[FILECHECKER] Matched on version, but not on year - continuing.')
+else:
+logger.fdebug('[FILECHECKER] Matched on both version, and issue year - continuing.')
+if yearmatch == "none":
+if ComVersChk == 0:
+logger.fdebug('[FILECHECKER] Series version detected as V1 (only series in existence with that title). Bypassing year check.')
+yearmatch = "true"
+else:
+continue
 if 'annual' in subname.lower():
 subname = re.sub('annual', '', subname.lower())
@ -554,66 +625,81 @@ def listFiles(dir,watchcomic,AlternateSearch=None,manual=None,sarc=None):
# if it's an alphanumeric with a space, rejoin, so we can remove it cleanly just below this. # if it's an alphanumeric with a space, rejoin, so we can remove it cleanly just below this.
substring_removal = None substring_removal = None
poss_alpha = subname.split(' ')[-1:] poss_alpha = subname.split(' ')[-1:]
logger.fdebug('poss_alpha: ' + str(poss_alpha)) logger.fdebug('[FILECHECKER] poss_alpha: ' + str(poss_alpha))
logger.fdebug('lenalpha: ' + str(len(''.join(poss_alpha)))) logger.fdebug('[FILECHECKER] lenalpha: ' + str(len(''.join(poss_alpha))))
for issexcept in issue_exceptions: for issexcept in issue_exceptions:
if issexcept.lower()in str(poss_alpha).lower() and len(''.join(poss_alpha)) <= len(issexcept): if issexcept.lower()in str(poss_alpha).lower() and len(''.join(poss_alpha)) <= len(issexcept):
#get the last 2 words so that we can remove them cleanly #get the last 2 words so that we can remove them cleanly
substring_removal = ' '.join(subname.split(' ')[-2:]) substring_removal = ' '.join(subname.split(' ')[-2:])
substring_join = ''.join(subname.split(' ')[-2:]) substring_join = ''.join(subname.split(' ')[-2:])
logger.fdebug('substring_removal: ' + str(substring_removal)) logger.fdebug('[FILECHECKER] substring_removal: ' + str(substring_removal))
logger.fdebug('substring_join: ' + str(substring_join)) logger.fdebug('[FILECHECKER] substring_join: ' + str(substring_join))
break break
if substring_removal is not None: if substring_removal is not None:
sub_removed = subname.replace('_', ' ').replace(substring_removal, substring_join) sub_removed = subname.replace('_', ' ').replace(substring_removal, substring_join)
else: else:
sub_removed = subname.replace('_', ' ') sub_removed = subname.replace('_', ' ')
logger.fdebug('sub_removed: ' + str(sub_removed)) logger.fdebug('[FILECHECKER] sub_removed: ' + str(sub_removed))
split_sub = sub_removed.rsplit(' ',1)[0].split(' ') #removes last word (assuming it's the issue#) split_sub = sub_removed.rsplit(' ',1)[0].split(' ') #removes last word (assuming it's the issue#)
split_mod = modwatchcomic.replace('_', ' ').split() #batman split_mod = modwatchcomic.replace('_', ' ').split() #batman
-                    logger.fdebug('split_sub: ' + str(split_sub))
-                    logger.fdebug('split_mod: ' + str(split_mod))
+                    logger.fdebug('[FILECHECKER] split_sub: ' + str(split_sub))
+                    logger.fdebug('[FILECHECKER] split_mod: ' + str(split_mod))
                     x = len(split_sub)-1
                     scnt = 0
                     if x > len(split_mod)-1:
-                        logger.fdebug('number of words do not match...aborting.')
+                        logger.fdebug('[FILECHECKER] number of words do not match...aborting.')
                     else:
                         while ( x > -1 ):
                             print str(split_sub[x]) + ' comparing to ' + str(split_mod[x])
                             if str(split_sub[x]).lower() == str(split_mod[x]).lower():
                                 scnt+=1
-                                logger.fdebug('word match exact. ' + str(scnt) + '/' + str(len(split_mod)))
+                                logger.fdebug('[FILECHECKER] word match exact. ' + str(scnt) + '/' + str(len(split_mod)))
                             x-=1
                     wordcnt = int(scnt)
-                    logger.fdebug('scnt:' + str(scnt))
+                    logger.fdebug('[FILECHECKER] scnt:' + str(scnt))
                     totalcnt = int(len(split_mod))
-                    logger.fdebug('split_mod length:' + str(totalcnt))
+                    logger.fdebug('[FILECHECKER] split_mod length:' + str(totalcnt))
                     try:
                         spercent = (wordcnt/totalcnt) * 100
                     except ZeroDivisionError:
                         spercent = 0
-                    logger.fdebug('we got ' + str(spercent) + ' percent.')
+                    logger.fdebug('[FILECHECKER] we got ' + str(spercent) + ' percent.')
                     if int(spercent) >= 80:
-                        logger.fdebug("this should be considered an exact match.")
+                        logger.fdebug('[FILECHECKER] this should be considered an exact match. Justthedigits:' + justthedigits)
                     else:
-                        logger.fdebug('failure - not an exact match.')
+                        logger.fdebug('[FILECHECKER] failure - not an exact match.')
                         continue
-                comiclist.append({
-                     'ComicFilename': item,
-                     'ComicLocation': comicpath,
-                     'ComicSize': comicsize,
-                     'JusttheDigits': justthedigits
-                     })
+                if manual:
+                    print item
+                    print comicpath
+                    print comicsize
+                    print result_comyear
+                    print justthedigits
+                    comiclist.append({
+                         'ComicFilename': item,
+                         'ComicLocation': comicpath,
+                         'ComicSize': comicsize,
+                         'ComicYear': result_comyear,
+                         'JusttheDigits': justthedigits
+                         })
+                    print('appended.')
+                else:
+                    comiclist.append({
+                         'ComicFilename': item,
+                         'ComicLocation': comicpath,
+                         'ComicSize': comicsize,
+                         'JusttheDigits': justthedigits
+                         })
                 watchmatch['comiclist'] = comiclist
             else:
                 pass
                 #print ("directory found - ignoring")
-    logger.fdebug('you have a total of ' + str(comiccnt) + ' ' + watchcomic + ' comics')
+    logger.fdebug('[FILECHECKER] you have a total of ' + str(comiccnt) + ' ' + watchcomic + ' comics')
     watchmatch['comiccount'] = comiccnt
     return watchmatch
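The matching logic above counts exact word-for-word matches between the candidate filename and the watched series name, and treats an overlap of 80% or more as an exact match. A minimal Python 3 sketch of that scoring (function names here are illustrative, not Mylar's; note that under Python 2's integer division the original `(wordcnt/totalcnt) * 100` can only yield 0 or 100, so true division is used in this sketch):

```python
def word_match_percent(split_sub, split_mod):
    """Compare two tokenized names word-by-word from the end, case-insensitively.

    Returns the percentage of words in split_mod matched exactly, or 0 when
    the candidate has more words than the target (mirroring the
    'number of words do not match...aborting' branch above).
    """
    if len(split_sub) > len(split_mod):
        return 0
    scnt = 0
    x = len(split_sub) - 1
    while x > -1:
        if split_sub[x].lower() == split_mod[x].lower():
            scnt += 1
        x -= 1
    try:
        # true division, so partial matches score between 0 and 100
        return (scnt / len(split_mod)) * 100
    except ZeroDivisionError:
        return 0

def is_exact_match(sub_name, mod_name, threshold=80):
    """Apply the >= 80% rule from the block above."""
    return word_match_percent(sub_name.split(), mod_name.split()) >= threshold
```

With Python 2 integer division, any partial match floors to 0, so the 80% threshold effectively demanded a full match; true division makes the threshold meaningful.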


@@ -59,7 +59,8 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
         totNum = len(feed.entries)
         tallycount += len(feed.entries)

-        keyPair = {}
+        #keyPair = {}
+        keyPair = []
         regList = []
         countUp = 0

@@ -68,7 +69,11 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
         while countUp < totNum:
             urlParse = feed.entries[countUp].enclosures[0]
             #keyPair[feed.entries[countUp].title] = feed.entries[countUp].link
-            keyPair[feed.entries[countUp].title] = urlParse["href"]
+            #keyPair[feed.entries[countUp].title] = urlParse["href"]
+            keyPair.append({"title": feed.entries[countUp].title,
+                            "link": urlParse["href"],
+                            "length": urlParse["length"],
+                            "pubdate": feed.entries[countUp].updated})
             countUp=countUp+1

@@ -90,13 +95,14 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
     except_list=['releases', 'gold line', 'distribution', '0-day', '0 day']

-    for title, link in keyPair.items():
+    for entry in keyPair:
+        title = entry['title']
         #logger.fdebug("titlesplit: " + str(title.split("\"")))
         splitTitle = title.split("\"")
         noYear = 'False'

         for subs in splitTitle:
-            logger.fdebug(subs)
+            #logger.fdebug('sub:' + subs)
             regExCount = 0
             if len(subs) > 10 and not any(d in subs.lower() for d in except_list):
                 #Looping through dictionary to run each regEx - length + regex is determined by regexList up top.

@@ -128,8 +134,10 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
                 if noYear == 'False':
                     entries.append({
                         'title': subs,
-                        'link': str(link)
+                        'link': entry['link'],
+                        'pubdate': entry['pubdate'],
+                        'length': entry['length']
                         })
                     break  # break out so we don't write more shit.
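The switch from a dict keyed by title to a list of dicts matters here: a dict silently collapses feed entries that share a title and can only carry one value per key, while the list keeps every item along with its link, length, and pubdate. A sketch of the resulting collection step (the entry structure is illustrative of feedparser-style entries, not a guaranteed schema):

```python
def collect_entries(feed_entries):
    """Build one record per feed entry, preserving duplicates.

    The old dict-by-title approach would have dropped the second entry
    whenever two releases shared a title.
    """
    keyPair = []
    for entry in feed_entries:
        url_parse = entry["enclosures"][0]
        keyPair.append({
            "title": entry["title"],
            "link": url_parse["href"],
            "length": url_parse["length"],
            "pubdate": entry["updated"],
        })
    return keyPair
```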


@@ -382,6 +382,8 @@ def rename_param(comicid, comicname, issue, ofilename, comicyear=None, issueid=N
     logger.fdebug('Pretty Comic Issue is : ' + str(prettycomiss))
     issueyear = issuenzb['IssueDate'][:4]
+    month = issuenzb['IssueDate'][5:7].replace('-','').strip()
+    month_name = fullmonth(month)
     logger.fdebug('Issue Year : ' + str(issueyear))
     comicnzb= myDB.action("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
     publisher = comicnzb['ComicPublisher']

@@ -428,6 +430,8 @@ def rename_param(comicid, comicname, issue, ofilename, comicyear=None, issueid=N
             '$publisher': publisher.lower(),
             '$VolumeY': 'V' + str(seriesyear),
             '$VolumeN': comversion,
+            '$monthname': month_name,
+            '$month': month,
             '$Annual': 'Annual'
             }
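The new `$month`/`$monthname` tokens are derived from the `IssueDate` string (YYYY-MM-DD). A sketch of how those fields could be extracted and substituted into a rename pattern (`calendar.month_name` stands in for Mylar's `fullmonth` helper, and the token-replacement loop is illustrative, not Mylar's actual substitution code):

```python
import calendar

def month_tokens(issue_date):
    """Extract zero-padded month and full month name from 'YYYY-MM-DD'."""
    month = issue_date[5:7].replace('-', '').strip()
    month_name = calendar.month_name[int(month)] if month.isdigit() else ''
    return month, month_name

def apply_tokens(pattern, values):
    """Substitute $tokens, longest first so '$monthname' isn't clobbered by '$month'."""
    for token in sorted(values, key=len, reverse=True):
        pattern = pattern.replace(token, values[token])
    return pattern

month, month_name = month_tokens('2014-02-26')
renamed = apply_tokens('$month - $monthname', {'$month': month, '$monthname': month_name})
```

Replacing the longest token first is the design point: a naive pass that substitutes `$month` before `$monthname` would corrupt the longer token.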
@@ -886,3 +890,34 @@ def checkFolder():
         result = PostProcess.Process()
     logger.info('Finished checking for newly snatched downloads')
+
+def LoadAlternateSearchNames(seriesname_alt, comicid):
+    import logger
+    #seriesname_alt = db.comics['AlternateSearch']
+    AS_Alt = []
+    Alternate_Names = {}
+    alt_count = 0
+
+    logger.fdebug('seriesname_alt:' + str(seriesname_alt))
+    if seriesname_alt is None or seriesname_alt == 'None':
+        logger.fdebug('no Alternate name given. Aborting search.')
+        return "no results"
+    else:
+        chkthealt = seriesname_alt.split('##')
+        if chkthealt == 0:
+            AS_Alternate = seriesname_alt
+            AS_Alt.append(seriesname_alt)
+        for calt in chkthealt:
+            AS_Alter = re.sub('##','',calt)
+            u_altsearchcomic = AS_Alter.encode('ascii', 'ignore').strip()
+            AS_formatrem_seriesname = re.sub('\s+', ' ', u_altsearchcomic)
+            if AS_formatrem_seriesname[:1] == ' ': AS_formatrem_seriesname = AS_formatrem_seriesname[1:]
+            AS_Alt.append({"AlternateName": AS_formatrem_seriesname})
+            alt_count+=1
+
+    Alternate_Names['AlternateName'] = AS_Alt
+    Alternate_Names['ComicID'] = comicid
+    Alternate_Names['Count'] = alt_count
+    logger.info('AlternateNames returned:' + str(Alternate_Names))
+
+    return Alternate_Names
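`LoadAlternateSearchNames` parses a `##`-delimited string of alternate series names into a structured dict. A simplified Python 3 sketch of that parsing (the ascii-encode step is Python 2 specific and omitted; names here are lowercase illustrative equivalents):

```python
import re

def load_alternate_names(seriesname_alt, comicid):
    """Split '##'-delimited alternate names into the dict shape Mylar returns."""
    if seriesname_alt is None or seriesname_alt == 'None':
        return "no results"
    as_alt = []
    for calt in seriesname_alt.split('##'):
        # collapse runs of whitespace and trim, as the original does
        name = re.sub(r'\s+', ' ', calt).strip()
        if name:
            as_alt.append({"AlternateName": name})
    return {"AlternateName": as_alt, "ComicID": comicid, "Count": len(as_alt)}
```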


@@ -43,7 +43,7 @@ def is_exists(comicid):
     return False

-def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,calledfrom=None):
+def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,calledfrom=None,annload=None):
     # Putting this here to get around the circular import. Will try to use this to update images at later date.
     # from mylar import cache

@@ -143,6 +143,19 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
     #let's do the Annual check here.
     if mylar.ANNUALS_ON:
+        #we need to check first to see if there are pre-existing annuals that have been manually added, or else they'll get
+        #wiped out.
+        annualids = []   #to be used to make sure an ID isn't double-loaded
+
+        if annload is None:
+            pass
+        else:
+            for manchk in annload:
+                if manchk['ReleaseComicID'] is not None:   #if it exists, then it's a pre-existing add.
+                    #print str(manchk['ReleaseComicID']), comic['ComicName'], str(SeriesYear), str(comicid)
+                    manualAnnual(manchk['ReleaseComicID'], comic['ComicName'], SeriesYear, comicid)
+                    annualids.append(manchk['ReleaseComicID'])
         annualcomicname = re.sub('[\,\:]', '', comic['ComicName'])

         #----- CBDB (outdated)
@@ -177,7 +190,6 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
                 #print "annualyear: " + str(annualval['AnnualYear'])
                 logger.fdebug('annualyear:' + str(annualyear))
                 sresults = mb.findComic(annComicName, mode, issue=None)
-                logger.fdebug('sresults : ' + str(sresults))
                 type='comic'

@@ -188,7 +200,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
                 num_res = 0
                 while (num_res < len(sresults)):
                     sr = sresults[num_res]
-                    #logger.fdebug("description:" + sr['description'])
+                    logger.fdebug("description:" + sr['description'])
                     if 'paperback' in sr['description'] or 'collecting' in sr['description'] or 'reprints' in sr['description'] or 'collected' in sr['description']:
                         logger.fdebug('tradeback/collected edition detected - skipping ' + str(sr['comicid']))
                     else:

@@ -196,6 +208,10 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
                             logger.fdebug(str(comicid) + ' found. Assuming it is part of the greater collection.')
                             issueid = sr['comicid']
                             logger.fdebug(str(issueid) + ' added to series list as an Annual')
+                            if issueid in annualids:
+                                logger.fdebug(str(issueid) + ' already exists & was refreshed.')
+                                num_res+=1  # need to manually increment since not a for-next loop
+                                continue
                             issued = cv.getComic(issueid,'issue')
                             if len(issued) is None or len(issued) == 0:
                                 logger.fdebug('Could not find any annual information...')

@@ -213,14 +229,18 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
                                     issnum = str(firstval['Issue_Number'])
                                     issname = cleanname
                                     issdate = str(firstval['Issue_Date'])
+                                    stdate = str(firstval['Store_Date'])
                                     newCtrl = {"IssueID": issid}
                                     newVals = {"Issue_Number": issnum,
                                                "Int_IssueNumber": helpers.issuedigits(issnum),
                                                "IssueDate": issdate,
+                                               "ReleaseDate": stdate,
                                                "IssueName": issname,
                                                "ComicID": comicid,
                                                "ComicName": comic['ComicName'],
+                                               "ReleaseComicID": re.sub('4050-','',firstval['Comic_ID']).strip(),
+                                               "ReleaseComicName": sr['name'],
                                                "Status": "Skipped"}
                                     myDB.upsert("annuals", newVals, newCtrl)
                                     n+=1
                             num_res+=1
@@ -613,6 +633,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
             #print ("issnum: " + str(issnum))
             issname = cleanname
             issdate = str(firstval['Issue_Date'])
+            storedate = str(firstval['Store_Date'])
             if issnum.isdigit():
                 int_issnum = int( issnum ) * 1000
             else:

@@ -741,6 +762,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
                               "IssueName": issname,
                               "Issue_Number": issnum,
                               "IssueDate": issdate,
+                              "ReleaseDate": storedate,
                               "Int_IssueNumber": int_issnum})
             #logger.info('issuedata: ' + str(issuedata))

@@ -836,10 +858,10 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
         #check for existing files...
         statbefore = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
-        logger.fdebug('issue: ' + str(latestiss) + ' status before chk :' + statbefore['Status'])
+        logger.fdebug('issue: ' + str(latestiss) + ' status before chk :' + str(statbefore['Status']))
         updater.forceRescan(comicid)
         statafter = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
-        logger.fdebug('issue: ' + str(latestiss) + ' status after chk :' + statafter['Status'])
+        logger.fdebug('issue: ' + str(latestiss) + ' status after chk :' + str(statafter['Status']))

     if pullupd is None:
         # let's check the pullist for anything at this time as well since we're here.
@@ -848,7 +870,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
             logger.fdebug('latestissue: #' + str(latestiss))
             chkstats = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
             logger.fdebug('latestissue status: ' + chkstats['Status'])
-            if chkstats['Status'] == 'Skipped' or chkstats['Status'] == 'Wanted':  # or chkstats['Status'] == 'Snatched':
+            if chkstats['Status'] == 'Skipped' or chkstats['Status'] == 'Wanted' or chkstats['Status'] == 'Snatched':
                 logger.info('Checking this week pullist for new issues of ' + comic['ComicName'])
                 updater.newpullcheck(comic['ComicName'], comicid)

@@ -869,7 +891,9 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
         logger.info('Already have the latest issue : #' + str(latestiss))

     if calledfrom == 'addbyid':
-        return comic['ComicName'], SeriesYear
+        logger.info('Successfully added ' + comic['ComicName'] + ' (' + str(SeriesYear) + ') by directly using the ComicVine ID')
+        return

 def GCDimport(gcomicid, pullupd=None,imported=None,ogcname=None):
     # this is for importing via GCD only and not using CV.
@@ -1224,6 +1248,7 @@ def issue_collection(issuedata,nostatus):
                                 "IssueName": issue['IssueName'],
                                 "Issue_Number": issue['Issue_Number'],
                                 "IssueDate": issue['IssueDate'],
+                                "ReleaseDate": issue['ReleaseDate'],
                                 "Int_IssueNumber": issue['Int_IssueNumber']
                                 }

@@ -1235,7 +1260,7 @@ def issue_collection(issuedata,nostatus):
                 # Only change the status & add DateAdded if the issue is already in the database
                 if iss_exists is None:
                     newValueDict['DateAdded'] = helpers.today()
-                    #print "issue doesn't exist in db."
+                    print 'issue #' + str(issue['Issue_Number']) + ' does not exist in db.'
                     if mylar.AUTOWANT_ALL:
                         newValueDict['Status'] = "Wanted"
                     elif issue['IssueDate'] > helpers.today() and mylar.AUTOWANT_UPCOMING:

@@ -1244,12 +1269,12 @@ def issue_collection(issuedata,nostatus):
                         newValueDict['Status'] = "Skipped"
                 else:
-                    #print ("Existing status : " + str(iss_exists['Status']))
+                    #logger.info('Existing status for issue #' + str(issue['Issue_Number']) + ' : ' + str(iss_exists['Status']))
                     newValueDict['Status'] = iss_exists['Status']
             else:
-                #print ("Not changing the status at this time - reverting to previous module after to re-append existing status")
-                newValueDict['Status'] = "Skipped"
+                #logger.info("Not changing the status at this time - reverting to previous module after to re-append existing status")
+                pass  #newValueDict['Status'] = "Skipped"

             try:
                 myDB.upsert("issues", newValueDict, controlValueDict)
@@ -1259,3 +1284,53 @@ def issue_collection(issuedata,nostatus):
                 myDB.action("DELETE FROM comics WHERE ComicID=?", [issue['ComicID']])
                 return
+
+def manualAnnual(manual_comicid, comicname, comicyear, comicid):
+    #called when importing/refreshing an annual that was manually added.
+    myDB = db.DBConnection()
+    issueid = manual_comicid
+    logger.fdebug(str(issueid) + ' added to series list as an Annual')
+    sr = cv.getComic(manual_comicid, 'comic')
+    logger.info('Attempting to integrate ' + sr['ComicName'] + ' (' + str(issueid) + ') to the existing series of ' + comicname + ' (' + str(comicyear) + ')')
+    if len(sr) is None or len(sr) == 0:
+        logger.fdebug('Could not find any information on the series indicated : ' + str(manual_comicid))
+        pass
+    else:
+        n = 0
+        noissues = sr['ComicIssues']
+        logger.fdebug('there are ' + str(noissues) + ' annuals within this series.')
+        issued = cv.getComic(re.sub('4050-','',manual_comicid).strip(),'issue')
+        while (n < int(noissues)):
+            try:
+                firstval = issued['issuechoice'][n]
+            except IndexError:
+                break
+            cleanname = helpers.cleanName(firstval['Issue_Name'])
+            issid = str(firstval['Issue_ID'])
+            issnum = str(firstval['Issue_Number'])
+            issname = cleanname
+            issdate = str(firstval['Issue_Date'])
+            stdate = str(firstval['Store_Date'])
+            logger.fdebug('comicname:' + str(comicname))
+            logger.fdebug('comicid:' + str(comicid))
+            logger.fdebug('issid:' + str(issid))
+            logger.fdebug('cleanname:' + str(cleanname))
+            logger.fdebug('issnum:' + str(issnum))
+            logger.fdebug('issdate:' + str(issdate))
+            logger.fdebug('stdate:' + str(stdate))
+            newCtrl = {"IssueID": issid}
+            newVals = {"Issue_Number": issnum,
+                       "Int_IssueNumber": helpers.issuedigits(issnum),
+                       "IssueDate": issdate,
+                       "ReleaseDate": stdate,
+                       "IssueName": issname,
+                       "ComicID": comicid,   #this is the series ID
+                       "ReleaseComicID": re.sub('4050-','',manual_comicid).strip(),   #this is the series ID for the annual(s)
+                       "ComicName": comicname,   #series ComicName
+                       "ReleaseComicName": sr['ComicName'],   #series ComicName for the manual_comicid
+                       "Status": "Skipped"}
+                       #need to add in the values for the new series to be added.
+                       #"M_ComicName": sr['ComicName'],
+                       #"M_ComicID": manual_comicid}
+            myDB.upsert("annuals", newVals, newCtrl)
+            n+=1
+    return
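`manualAnnual` writes each issue of the annual series into the annuals table with an upsert keyed on IssueID, which is what lets a manually-added annual survive refreshes without duplicating rows. A toy sketch of that keyed-upsert behavior with a plain dict standing in for the table (`myDB.upsert` is Mylar's own database wrapper; this is not its implementation, just the semantics the code above relies on):

```python
def upsert(table, new_vals, new_ctrl):
    """Insert or update a row keyed by the control dict (here: IssueID)."""
    key = new_ctrl["IssueID"]
    row = table.setdefault(key, {})  # create the row on first sight of the key
    row.update(new_ctrl)
    row.update(new_vals)             # later writes overwrite earlier values

annuals = {}
upsert(annuals, {"Issue_Number": "1", "Status": "Skipped"}, {"IssueID": "1001"})
# a refresh re-upserts the same IssueID: the row is updated, not duplicated
upsert(annuals, {"Issue_Number": "1", "Status": "Downloaded"}, {"IssueID": "1001"})
```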


@@ -128,6 +128,16 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
     watchfound = 0

+    datelist = ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec']
+#    datemonth = {'one':1,'two':2,'three':3,'four':4,'five':5,'six':6,'seven':7,'eight':8,'nine':9,'ten':10,'eleven':$
+#    #search for number as text, and change to numeric
+#    for numbs in basnumbs:
+#        #print ("numbs:" + str(numbs))
+#        if numbs in ComicName.lower():
+#            numconv = basnumbs[numbs]
+#            #print ("numconv: " + str(numconv))
+
     for i in comic_list:
         print i['ComicFilename']
@@ -157,15 +167,16 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
                     #because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
                     logger.fdebug('volume indicator detected as version #:' + str(subit))
                     cfilename = re.sub(subit, '', cfilename)
-                    volyr = re.sub("[^0-9]", " ", subit)
+                    cfilename = " ".join(cfilename.split())
+                    volyr = re.sub("[^0-9]", " ", subit).strip()
+                    logger.fdebug('volume year set as : ' + str(volyr))
             cm_cn = 0

             #we need to track the counter to make sure we are comparing the right array parts
             #this takes care of the brackets :)
             m = re.findall('[^()]+', cfilename)
             lenm = len(m)
-            print ("there are " + str(lenm) + " words.")
+            logger.fdebug("there are " + str(lenm) + " words.")
             cnt = 0
             yearmatch = "false"
             foundonwatch = "False"

@@ -187,7 +198,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
                     extensions = ('cbr', 'cbz')
                     if comic_andiss.lower().endswith(extensions):
                         comic_andiss = comic_andiss[:-4]
-                        print ("removed extension from filename.")
+                        logger.fdebug("removed extension from filename.")
                     #now we have to break up the string regardless of formatting.
                     #let's force the spaces.
                     comic_andiss = re.sub('_', ' ', comic_andiss)

@@ -199,42 +210,41 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
                     decimaldetect = 'no'
                     for i in reversed(xrange(len(cs))):
                         #start at the end.
-                        print ("word: " + str(cs[i]))
+                        logger.fdebug("word: " + str(cs[i]))
                         #assume once we find issue - everything prior is the actual title
                         #idetected = no will ignore everything so it will assume all title
                         if cs[i][:-2] == '19' or cs[i][:-2] == '20' and idetected == 'no':
-                            print ("year detected: " + str(cs[i]))
+                            logger.fdebug("year detected: " + str(cs[i]))
                             ydetected = 'yes'
                             result_comyear = cs[i]
                         elif cs[i].isdigit() and idetected == 'no' or '.' in cs[i]:
                             issue = cs[i]
-                            print ("issue detected : " + str(issue))
+                            logger.fdebug("issue detected : " + str(issue))
                             idetected = 'yes'
                             if '.' in cs[i]:
                                 #make sure it's a number on either side of decimal and assume decimal issue.
                                 decst = cs[i].find('.')
                                 dec_st = cs[i][:decst]
                                 dec_en = cs[i][decst+1:]
-                                print ("st: " + str(dec_st))
-                                print ("en: " + str(dec_en))
+                                logger.fdebug("st: " + str(dec_st))
+                                logger.fdebug("en: " + str(dec_en))
                                 if dec_st.isdigit() and dec_en.isdigit():
-                                    print ("decimal issue detected...adjusting.")
+                                    logger.fdebug("decimal issue detected...adjusting.")
                                     issue = dec_st + "." + dec_en
-                                    print ("issue detected: " + str(issue))
+                                    logger.fdebug("issue detected: " + str(issue))
                                     idetected = 'yes'
                                 else:
-                                    print ("false decimal represent. Chunking to extra word.")
+                                    logger.fdebug("false decimal represent. Chunking to extra word.")
                                     cn = cn + cs[i] + " "
                                     break
                         elif '\#' in cs[i] or decimaldetect == 'yes':
-                            print ("issue detected: " + str(cs[i]))
+                            logger.fdebug("issue detected: " + str(cs[i]))
                             idetected = 'yes'
                         else: cn = cn + cs[i] + " "
                     if ydetected == 'no':
                         #assume no year given in filename...
                         result_comyear = "0000"
-                    print ("cm?: " + str(cn))
+                    logger.fdebug("cm?: " + str(cn))
                     if issue is not '999999':
                         comiss = issue
                     else:
@@ -252,11 +262,20 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
                         print ("com_NAME : " + com_NAME)
                         yearmatch = "True"
                     else:
+                        logger.fdebug('checking ' + m[cnt])
                         # we're assuming that the year is in brackets (and it should be damnit)
                         if m[cnt][:-2] == '19' or m[cnt][:-2] == '20':
                             print ("year detected: " + str(m[cnt]))
                             ydetected = 'yes'
                             result_comyear = m[cnt]
+                        elif m[cnt][:3].lower() in datelist:
+                            logger.fdebug('possible issue date format given - verifying')
+                            #if the date of the issue is given as (Jan 2010) or (January 2010) let's adjust.
+                            #keeping in mind that ',' and '.' are already stripped from the string
+                            if m[cnt][-4:].isdigit():
+                                ydetected = 'yes'
+                                result_comyear = m[cnt][-4:]
+                                logger.fdebug('Valid Issue year of ' + str(result_comyear) + ' detected in format of ' + str(m[cnt]))
                     cnt+=1

             splitit = []

@@ -447,6 +466,13 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
             else:
                 if result_comyear is None:
                     result_comyear = volyr
+            if volno is None:
+                if volyr is None:
+                    vol_label = None
+                else:
+                    vol_label = volyr
+            else:
+                vol_label = volno

             print ("adding " + com_NAME + " to the import-queue!")
             impid = com_NAME + "-" + str(result_comyear) + "-" + str(comiss)

@@ -456,6 +482,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
                 "watchmatch": watchmatch,
                 "comicname" : com_NAME,
                 "comicyear" : result_comyear,
+                "volume" : vol_label,
                 "comfilename" : comfilename,
                 "comlocation" : comlocation.decode(mylar.SYS_ENCODING)
                 })
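The new branch above recognizes bracketed issue dates like "(Jan 2010)" or "(January 2010)" by checking the first three letters against a month list and the last four characters for digits. A self-contained sketch of that check:

```python
datelist = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
            'jul', 'aug', 'sep', 'oct', 'nov', 'dec']

def detect_year(chunk):
    """Return a 4-digit year from 'Jan 2010' / 'January 2010' style chunks, else None.

    Mirrors the diff's logic: the chunk is assumed to already have ',' and '.'
    stripped, so both abbreviated and full month names pass the 3-letter test.
    """
    if chunk[:3].lower() in datelist and chunk[-4:].isdigit():
        return chunk[-4:]
    return None
```

Because only the first three letters are tested, "January 2010" and "Jan 2010" both match the same `datelist` entry, which is the point of the abbreviation list.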


@@ -29,7 +29,14 @@ mb_lock = threading.Lock()

 def pullsearch(comicapi,comicquery,offset):
     u_comicquery = urllib.quote(comicquery.encode('utf-8').strip())
-    PULLURL = mylar.CVURL + 'search?api_key=' + str(comicapi) + '&resources=volume&query=' + u_comicquery + '&field_list=id,name,start_year,site_detail_url,count_of_issues,image,publisher,description&format=xml&page=' + str(offset)
+    u_comicquery = u_comicquery.replace(" ", "%20")
+
+    # as of 02/15/2014 the search endpoint is buggered up.
+    #PULLURL = mylar.CVURL + 'search?api_key=' + str(comicapi) + '&resources=volume&query=' + u_comicquery + '&field_list=id,name,start_year,site_detail_url,count_of_issues,image,publisher,description&format=xml&page=' + str(offset)
+
+    # 02/22/2014 use the volume filter label to get the right results.
+    PULLURL = mylar.CVURL + 'volumes?api_key=' + str(comicapi) + '&filter=name:' + u_comicquery + '&field_list=id,name,start_year,site_detail_url,count_of_issues,image,publisher,description&format=xml&page=' + str(offset)  # 2014/02/22 - CV API flipped back to offset instead of page

     #all these imports are standard on most modern python implementations
     #download the file:
     try:

@@ -59,8 +66,9 @@ def findComic(name, mode, issue, limityear=None):
     #print ("limityear: " + str(limityear))
     if limityear is None: limityear = 'None'

+    comicquery = name
     #comicquery=name.replace(" ", "%20")
-    comicquery=name.replace(" ", " AND ")
+    #comicquery=name.replace(" ", " AND ")

     comicapi='583939a3df0a25fc4e8b7a29934a13078002dc27'
     offset = 1

@@ -68,15 +76,19 @@ def findComic(name, mode, issue, limityear=None):
     searched = pullsearch(comicapi,comicquery,1)
     if searched is None: return False
     totalResults = searched.getElementsByTagName('number_of_total_results')[0].firstChild.wholeText
-    #print ("there are " + str(totalResults) + " search results...")
+    logger.fdebug("there are " + str(totalResults) + " search results...")
     if not totalResults:
         return False
     countResults = 0
     while (countResults < int(totalResults)):
-        #print ("querying " + str(countResults))
+        #logger.fdebug("querying " + str(countResults))
         if countResults > 0:
             #new api - have to change to page # instead of offset count
             offsetcount = (countResults/100) + 1
+            # 2014/02/22 - CV API flipped back to offset usage instead of page :(
+            #if countResults == 1: offsetcount = 0
+            #else: offsetcount = countResults
             searched = pullsearch(comicapi,comicquery,offsetcount)
         comicResults = searched.getElementsByTagName('volume')
         body = ''
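`pullsearch` now queries the volumes endpoint with a `filter=name:` term instead of the broken search endpoint. A Python 3 sketch of assembling that URL (the base URL constant here is an assumption; Mylar reads it from `mylar.CVURL`, and the parameter set is copied from the diff rather than from ComicVine documentation):

```python
import urllib.parse

# assumed base; Mylar stores this in mylar.CVURL
CVURL = 'http://api.comicvine.com/'

def build_volumes_url(comicapi, comicquery, offset):
    """Build the volumes?filter=name: request the new pullsearch issues."""
    u_comicquery = urllib.parse.quote(comicquery.strip())  # spaces become %20
    return (CVURL + 'volumes?api_key=' + comicapi
            + '&filter=name:' + u_comicquery
            + '&field_list=id,name,start_year,site_detail_url,count_of_issues,'
              'image,publisher,description&format=xml&page=' + str(offset))
```

`urllib.parse.quote` already percent-encodes spaces, so the explicit `replace(" ", "%20")` in the original (applied after `urllib.quote`) is a belt-and-braces step rather than a required one.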


@@ -36,7 +36,7 @@ def tehMain(forcerss=None):
     #function for looping through nzbs/torrent feeds

     if mylar.ENABLE_TORRENTS:
-        logger.fdebug("[RSS] Initiating Torrent RSS Check.")
+        logger.fdebug('[RSS] Initiating Torrent RSS Check.')
         if mylar.ENABLE_KAT:
             logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on KAT.')
             torrents(pickfeed='3')

@@ -44,7 +44,7 @@ def tehMain(forcerss=None):
             logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on CBT.')
             torrents(pickfeed='1')
             torrents(pickfeed='4')
-    logger.fdebug('RSS] Initiating RSS Feed Check for NZB Providers.')
+    logger.fdebug('[RSS] Initiating RSS Feed Check for NZB Providers.')
     nzbs()
     logger.fdebug('[RSS] RSS Feed Check/Update Complete')
     logger.fdebug('[RSS] Watchlist Check for new Releases')
@@ -285,32 +285,37 @@ def nzbs(provider=None):
    if nonexp == "yes":
        #print str(ft) + " sites checked. There are " + str(totNum) + " entries to be updated."
        #print feedme
        for ft in feedthis:
            sitei = 0
            site = ft['site']
            logger.fdebug(str(site) + " now being updated...")
            logger.fdebug('feedthis:' + str(ft))
            for entry in ft['feed'].entries:
                if site == 'dognzb':
                    #because the rss of dog doesn't carry the enclosure item, we'll use the newznab size value
                    tmpsz = 0
                    #for attr in entry['newznab:attrib']:
                    #    if attr('@name') == 'size':
                    #        tmpsz = attr['@value']
                    #        logger.fdebug('size retrieved as ' + str(tmpsz))
                    #        break
                    feeddata.append({
                        'Site': site,
                        'Title': entry.title,
                        'Link': entry.link,
                        'Pubdate': entry.updated,
                        'Size': tmpsz
                    })
                else:
                    #this should work for all newznabs (nzb.su included)
                    #only difference is the size of the file between this and above (which is probably the same)
                    tmpsz = entry.enclosures[0]
                    feeddata.append({
                        'Site': site,
                        'Title': entry.title,
                        'Link': entry.link,
                        'Pubdate': entry.updated,
                        'Size': tmpsz['length']
                    })
@@ -319,9 +324,10 @@ def nzbs(provider=None):
                #logger.fdebug("Link: " + str(feeddata[i]['Link']))
                #logger.fdebug("pubdate: " + str(feeddata[i]['Pubdate']))
                #logger.fdebug("size: " + str(feeddata[i]['Size']))
                sitei+=1
            logger.info(str(site) + ' : ' + str(sitei) + ' entries indexed.')
            i+=sitei
        logger.info('[RSS] ' + str(i) + ' entries have been indexed and are now going to be stored for caching.')
    rssdbupdate(feeddata,i,'usenet')
    return
@@ -446,7 +452,7 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
            titletemp = re.sub('cbr', '', str(titletemp))
            titletemp = re.sub('cbz', '', str(titletemp))
            titletemp = re.sub('none', '', str(titletemp))
            if i == 0:
                rebuiltline = str(titletemp)
            else:
@@ -465,13 +471,13 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
    seriesname_mod = re.sub('[\&]', ' ', seriesname_mod)
    foundname_mod = re.sub('[\&]', ' ', foundname_mod)
    formatrem_seriesname = re.sub('[\'\!\@\#\$\%\:\;\=\?\.\-\/]', '', seriesname_mod)
    #formatrem_seriesname = re.sub('[\/]', '-', formatrem_seriesname) #not necessary since seriesname in a torrent file won't have /
    formatrem_seriesname = re.sub('\s+', ' ', formatrem_seriesname)
    if formatrem_seriesname[:1] == ' ': formatrem_seriesname = formatrem_seriesname[1:]
    formatrem_torsplit = re.sub('[\'\!\@\#\$\%\:\;\\=\?\.\-\/]', '', foundname_mod)
    #formatrem_torsplit = re.sub('[\/]', '-', formatrem_torsplit) #not necessary since if it has a /, it is removed by the line above
    formatrem_torsplit = re.sub('\s+', ' ', formatrem_torsplit)
    logger.fdebug(str(len(formatrem_torsplit)) + ' - formatrem_torsplit : ' + formatrem_torsplit.lower())
    logger.fdebug(str(len(formatrem_seriesname)) + ' - formatrem_seriesname :' + formatrem_seriesname.lower())
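The normalization chain above (drop '&', strip punctuation including the newly added '/', collapse whitespace) can be factored into a small helper. This is a sketch of the same regex sequence, not a function that exists in the repo, written in Python 3 syntax:

```python
import re

def normalize_name(name):
    # mirror of the diff's cleanup: '&' becomes a space, then punctuation
    # (now including '/') is stripped, then runs of whitespace collapse
    mod = re.sub(r'[&]', ' ', name)
    mod = re.sub(r"['!@#$%:;=?.\-/]", '', mod)
    mod = re.sub(r'\s+', ' ', mod)
    return mod.strip()
```

Both the watchlist series name and the torrent title go through the same chain, so the two sides compare on equal footing.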


@@ -33,12 +33,14 @@ import time
import urlparse
from xml.dom.minidom import parseString
import urllib2
import email.utils
import datetime

def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, IssueID, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=None, IssueArcID=None, mode=None, rsscheck=None, ComicID=None):
    if ComicYear == None: ComicYear = '2014'
    else: ComicYear = str(ComicYear)[:4]
    if Publisher == 'IDW Publishing': Publisher = 'IDW'
    logger.info('Publisher is : ' + str(Publisher))
    if mode == 'want_ann':
        logger.info("Annual issue search detected. Appending to issue #")
        #anything for mode other than None indicates an annual.
@@ -179,7 +181,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                torprov = 'KAT'
            if searchmode == 'rss':
                findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torpr, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
                if findit == 'yes':
                    logger.fdebug("findit = found!")
                    break
@@ -192,12 +194,12 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torp, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
                        if findit == 'yes':
                            break
            else:
                findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torpr, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                if findit == 'yes':
                    logger.fdebug("findit = found!")
                    break
@@ -210,7 +212,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torp, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                        if findit == 'yes':
                            break
@@ -260,7 +262,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
            #if it's rss - search both seriesname/alternates via rss then return.
            if searchmode == 'rss':
                if mylar.ENABLE_RSS:
                    findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
                    if findit == 'yes':
                        logger.fdebug("Found via RSS.")
                        break
@@ -273,7 +275,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
                        if findit == 'yes':
                            break
                    if findit == 'yes':
@@ -284,7 +286,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                        break
            else:
                #normal api-search here.
                findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                if findit == 'yes':
                    logger.fdebug("Found via API.")
                    break
@@ -296,7 +298,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                        if findit == 'yes':
                            break
                    if findit == 'yes':
@@ -312,7 +314,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
            nzbprov = nzbprovider[nzbpr]
            if searchmode == 'rss':
                if mylar.ENABLE_RSS:
                    findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS='yes', ComicID=ComicID)
                    if findit == 'yes':
                        logger.fdebug("Found via RSS on " + nzbprov)
                        break
@@ -324,7 +326,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
                        if findit == 'yes':
                            logger.fdebug("Found via RSS Alternate Naming on " + nzbprov)
                            break
@@ -333,7 +335,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                        break
            else:
                #normal api-search here.
                findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                if findit == 'yes':
                    logger.fdebug("Found via API on " + nzbprov)
                    break
@@ -344,7 +346,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
                    for calt in chkthealt:
                        AS_Alternate = re.sub('##','',calt)
                        logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate))
                        findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
                        if findit == 'yes':
                            break
                    if findit == 'yes':
@@ -362,7 +364,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, IssueDate, IssueI
    return findit, nzbprov

def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host=None, ComicVersion=None, SARC=None, IssueArcID=None, RSS=None, ComicID=None):
    if nzbprov == 'nzb.su':
        apikey = mylar.NZBSU_APIKEY
@@ -586,9 +588,9 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
        #if bb is not None: logger.fdebug("results: " + str(bb))
    elif nzbprov != 'experimental':
        if nzbprov == 'dognzb':
            findurl = "https://dognzb.cr/api?t=search&q=" + str(comsearch) + "&o=xml&cat=7030"
        elif nzbprov == 'nzb.su':
            findurl = "https://nzb.su/api?t=search&q=" + str(comsearch) + "&o=xml&cat=7030"
        elif nzbprov == 'newznab':
            #let's make sure the host has a '/' at the end, if not add it.
            if host_newznab[len(host_newznab)-1:len(host_newznab)] != '/':
@@ -601,13 +603,14 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
        # helper function to replace apikey here so we avoid logging it ;)
        findurl = findurl + "&apikey=" + str(apikey)
        logsearch = helpers.apiremove(str(findurl),'nzb')
        logger.fdebug("search-url: " + str(logsearch))

        ### IF USENET_RETENTION is set, honour it
        ### For newznab sites, that means appending "&maxage=<whatever>" on the URL
        if mylar.USENET_RETENTION != None:
            findurl = findurl + "&maxage=" + str(mylar.USENET_RETENTION)

        # Add a user-agent
        #print ("user-agent:" + str(mylar.USER_AGENT))
        request = urllib2.Request(findurl)
@@ -646,12 +649,19 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
        except Exception, e:
            logger.warn('Error fetching data from %s: %s' % (nzbprov, e))
            data = False

        #logger.info('data: ' + data)
        if data:
            bb = feedparser.parse(data)
        else:
            bb = "no results"
        #logger.info('Search results:' + str(bb))

        try:
            if bb['feed']['error']:
                logger.error('[ERROR CODE: ' + str(bb['feed']['error']['code']) + '] ' + str(bb['feed']['error']['description']))
                bb = "no results"
        except:
            #logger.info('no errors on data retrieval...proceeding')
            pass
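The try/except around `bb['feed']['error']` above turns a provider error feed (newznab-style sites return an `<error code=".." description=".."/>` document instead of results when, say, the API key is bad) into the same "no results" path as an empty search. A minimal sketch of that check, with the dict standing in for feedparser's parsed result (an assumption about its shape):

```python
def feed_error(bb):
    """Return a formatted error string if the parsed feed carries a
    newznab-style error element, else None. `bb` mimics feedparser's
    result mapping; the ['feed']['error'] shape is an assumption."""
    try:
        err = bb['feed']['error']
        return '[ERROR CODE: ' + str(err['code']) + '] ' + str(err['description'])
    except (KeyError, TypeError):
        # no error element present - a normal results feed
        return None
```

Catching the broad lookup failure (rather than probing keys one by one) mirrors the diff's bare `except: pass`.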
    elif nzbprov == 'experimental':
        #bb = parseit.MysterBinScrape(comsearch[findloop], comyear)
        bb = findcomicfeed.Startit(u_ComicName, isssearch, comyear, ComicVersion, IssDateFix)
@@ -667,7 +677,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
        else:
            for entry in bb['entries']:
                logger.fdebug("checking search result: " + entry['title'])
                if nzbprov != "experimental" and nzbprov != "CBT" and nzbprov != "dognzb":
                    if RSS == "yes":
                        comsize_b = entry['length']
                    else:
@@ -698,11 +708,67 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                        logger.fdebug("Failure to meet the Maximium size threshold - skipping")
                        continue
                #---- date constraints.
                # if the posting date is prior to the publication date, dump it and save the time.
                #logger.info('entry' + str(entry))
                if nzbprov == 'experimental' or nzbprov == 'CBT':
                    pubdate = entry['pubdate']
                else:
                    try:
                        pubdate = entry['updated']
                    except:
                        try:
                            pubdate = entry['pubdate']
                        except:
                            logger.fdebug('invalid date found. Unable to continue - skipping result.')
                            continue

                #use the store date instead of the publication date for comparisons, since the publication date is usually +2 months
                if StoreDate is None or StoreDate == '0000-00-00':
                    stdate = IssueDate
                else:
                    stdate = StoreDate
                #logger.fdebug('Posting date of : ' + str(pubdate))
                # convert it to a tuple
                dateconv = email.utils.parsedate_tz(pubdate)
                # convert it to a numeric time, then subtract the timezone difference (+/- GMT)
                postdate_int = time.mktime(dateconv[:len(dateconv)-1]) - dateconv[-1]
                #logger.fdebug('Issue date of : ' + str(stdate))
                #convert it to a Thu, 06 Feb 2014 00:00:00 format
                issue_convert = datetime.datetime.strptime(stdate.rstrip(), '%Y-%m-%d')
                #logger.fdebug('issue_convert:' + str(issue_convert))
                issconv = issue_convert.strftime('%a, %d %b %Y %H:%M:%S')
                #logger.fdebug('issue date is :' + str(issconv))
                #convert it to a tuple
                econv = email.utils.parsedate_tz(issconv)
                #logger.fdebug('econv:' + str(econv))
                #convert it to a numeric
                issuedate_int = time.mktime(econv[:len(econv)-1])
                if postdate_int < issuedate_int:
                    logger.fdebug(str(pubdate) + ' is before the store date of ' + str(stdate) + '. Ignoring search result as this is not the right issue.')
                    continue
                else:
                    logger.fdebug(str(pubdate) + ' is after the store date of ' + str(stdate))
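The date gate above can be condensed into one testable function: parse the RFC 2822 posting date, parse the store date, and reject any result posted before the issue reached shelves. A minimal sketch, in Python 3 syntax (the diff is Python 2); the `or 0` guards against `parsedate_tz` returning `None` for a missing timezone, which the diff does not handle:

```python
import email.utils
import time
import datetime

def posted_before_store_date(pubdate, store_date):
    """pubdate: RFC 2822 string, e.g. 'Wed, 05 Feb 2014 00:00:00 +0000'
    store_date: 'YYYY-MM-DD'. True means the posting predates the issue."""
    # posting date -> 10-tuple -> epoch seconds, minus the tz offset
    dateconv = email.utils.parsedate_tz(pubdate)
    postdate_int = time.mktime(dateconv[:len(dateconv)-1]) - (dateconv[-1] or 0)
    # store date -> RFC-2822-style string -> tuple -> epoch seconds
    issue_convert = datetime.datetime.strptime(store_date.strip(), '%Y-%m-%d')
    econv = email.utils.parsedate_tz(issue_convert.strftime('%a, %d %b %Y %H:%M:%S'))
    issuedate_int = time.mktime(econv[:len(econv)-1])
    return postdate_int < issuedate_int
```

The round-trip through `strftime`/`parsedate_tz` mirrors the diff exactly; a more direct route would be `time.mktime(issue_convert.timetuple())`.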
                # -- end size constraints.

                thisentry = entry['title']
                logger.fdebug("Entry: " + thisentry)
                cleantitle = thisentry

                #remove the extension.
                extensions = ('.cbr', '.cbz')
                if cleantitle.lower().endswith(extensions):
                    fd, ext = os.path.splitext(cleantitle)
                    logger.fdebug("Removed extension from filename: " + ext)
                    #name = re.sub(str(ext), '', str(subname))
                    cleantitle = fd

                if 'mixed format' in cleantitle.lower():
                    cleantitle = re.sub('mixed format', '', cleantitle).strip()
                    logger.fdebug('removed extra information after issue # that is not necessary: ' + str(cleantitle))

                cleantitle = re.sub('[\_\.]', ' ', cleantitle)
                cleantitle = helpers.cleanName(cleantitle)
                # this is new - if the title contains a '&' it will assume the filename has ended at that point,
                # which causes false positives (ie. wolverine & the x-men becomes the x-men, which matches on x-men).
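The extension-strip and "mixed format" cleanup above, plus the underscore/dot substitution, reduce to a short helper. A sketch under two assumptions: the function name is invented, and the `(?i)` flag is added so that `Mixed Format` matches case-insensitively (the diff's `re.sub('mixed format', ...)` only matches lowercase, even though the preceding check lowercases):

```python
import os
import re

def clean_result_title(title):
    cleantitle = title
    # drop a trailing .cbr/.cbz so the extension doesn't pollute matching
    if cleantitle.lower().endswith(('.cbr', '.cbz')):
        cleantitle, ext = os.path.splitext(cleantitle)
    # 'Mixed Format' trailers after the issue # carry no matching value
    if 'mixed format' in cleantitle.lower():
        cleantitle = re.sub('(?i)mixed format', '', cleantitle).strip()
    # underscores and dots stand in for spaces in many posted filenames
    return re.sub(r'[_.]', ' ', cleantitle)
```

In the diff this feeds straight into `helpers.cleanName` before the bracket-word walk below.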
@@ -733,29 +799,30 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                        ComVersChk = 0
                else:
                    ComVersChk = 0

                ctchk = cleantitle.split()
                for ct in ctchk:
                    if ct.lower().startswith('v') and ct[1:].isdigit():
                        logger.fdebug("possible versioning..checking")
                        #we hit a versioning # - account for it
                        if ct[1:].isdigit():
                            if len(ct[1:]) == 4:  #v2013
                                logger.fdebug("Version detected as " + str(ct))
                                vers4year = "yes"  #re.sub("[^0-9]", " ", str(ct)) #remove the v
                                #cleantitle = re.sub(ct, "(" + str(vers4year) + ")", cleantitle)
                                #logger.fdebug("volumized cleantitle : " + cleantitle)
                                break
                            else:
                                if len(ct) < 4:
                                    logger.fdebug("Version detected as " + str(ct))
                                    vers4vol = str(ct)
                                    break
                            logger.fdebug("false version detection..ignoring.")

                if len(re.findall('[^()]+', cleantitle)) == 1 or 'cover only' in cleantitle.lower():
                    #some sites don't have (2013) or whatever..just v2 / v2013. Let's adjust:
                    #this handles when there is NO YEAR present in the title, otherwise versioning is way below.
                    if vers4year == "no" and vers4vol == "no":
                        # if the series is a v1, let's remove the requirements for year and volume label
                        # even if it's a v1, the nzbname might not contain a valid year format (20xx) or v3,
@@ -767,7 +834,6 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                        if len(re.findall('[^()]+', cleantitle)):
                            logger.fdebug("detected invalid nzb filename - attempting to detect year to continue")
                            cleantitle = re.sub('(.*)\s+(19\d{2}|20\d{2})(.*)', '\\1 (\\2) \\3', cleantitle)
                        else:
                            logger.fdebug("invalid nzb and/or cover only - skipping.")
                            cleantitle = "abcdefghijk 0 (1901).cbz"
@@ -793,6 +859,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                #print ("there are " + str(lenm) + " words.")
                cnt = 0
                yearmatch = "false"
                pub_removed = None

                while (cnt < lenm):
                    if m[cnt] is None: break
@@ -802,6 +869,9 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                        logger.fdebug(str(cnt) + ". Bracket Word: " + str(m[cnt]))
                    if cnt == 0:
                        comic_andiss = m[cnt]
                        if 'mixed format' in comic_andiss.lower():
                            comic_andiss = re.sub('mixed format', '', comic_andiss).strip()
                            logger.fdebug('removed extra information after issue # that is not necessary: ' + str(comic_andiss))
                        logger.fdebug("Comic: " + str(comic_andiss))
                        logger.fdebug("UseFuzzy is : " + str(UseFuzzy))
                        logger.fdebug('ComVersChk : ' + str(ComVersChk))
@@ -844,6 +914,25 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
                    elif UseFuzzy == "1": yearmatch = "true"

                    if Publisher.lower() in m[cnt].lower() and cnt >= 1:
                        #if the Publisher is given within the title or filename (for some reason, some people
                        #use this to distinguish different titles), let's remove it entirely.
                        logger.fdebug('Publisher detected within title : ' + str(m[cnt]))
                        logger.fdebug('cnt is : ' + str(cnt) + ' --- Publisher is: ' + Publisher)
                        pub_removed = m[cnt]
                        #-strip the publisher if it exists here-
                        logger.fdebug('removing publisher from title')
                        cleantitle_pubremoved = re.sub(pub_removed, '', cleantitle)
                        logger.fdebug('pubremoved : ' + str(cleantitle_pubremoved))
                        cleantitle_pubremoved = re.sub('\(\)', '', cleantitle_pubremoved)  #remove empty brackets
                        cleantitle_pubremoved = re.sub('\s+', ' ', cleantitle_pubremoved)  #remove spaces > 1
                        logger.fdebug('blank brackets removed: ' + str(cleantitle_pubremoved))
                        #reset the values to their initial state, without the publisher in the title
                        m = re.findall('[^()]+', cleantitle_pubremoved)
                        lenm = len(m)
                        cnt = 0
                        yearmatch = "false"
                        continue
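The publisher-stripping step above (remove the publisher token, then the empty brackets it leaves behind, then doubled spaces, and restart the word walk) can be shown on its own. The function name is invented for illustration; `re.escape` is added because the diff feeds the raw bracket text straight into `re.sub`, which would misbehave for a publisher name containing regex metacharacters:

```python
import re

def strip_publisher(cleantitle, publisher_token):
    """Remove a publisher token (e.g. a bracketed '(Image)' word) from a
    cleaned title, then tidy the leftovers, as in the diff."""
    out = re.sub(re.escape(publisher_token), '', cleantitle)
    out = re.sub(r'\(\)', '', out)      # remove the now-empty brackets
    out = re.sub(r'\s+', ' ', out)      # collapse runs of spaces
    return out.strip()
```

After this, the diff rebuilds `m = re.findall('[^()]+', ...)` from the stripped title and resets `cnt`, so the year/issue matching re-runs without the publisher word skewing it.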
                    if 'digital' in m[cnt] and len(m[cnt]) == 7:
                        logger.fdebug("digital edition detected")
                        pass
@@ -1161,7 +1250,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, nzbprov, nzbpr, Is
        #blackhole functionality---
        #let's download the file to a temporary cache.
        sent_to = None
        if mylar.USE_BLACKHOLE and nzbprov != 'CBT' and nzbprov != 'KAT':
            logger.fdebug("using blackhole directory at : " + str(mylar.BLACKHOLE_DIR))
            if os.path.exists(mylar.BLACKHOLE_DIR):
                #pretty this biatch up.
@@ -1378,6 +1467,7 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
                    'IssueID': iss['IssueID'],
                    'Issue_Number': iss['Issue_Number'],
                    'IssueDate': iss['IssueDate'],
                    'StoreDate': iss['ReleaseDate'],
                    'mode': 'want'
                })
        elif stloop == 2:
@@ -1387,6 +1477,7 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
                    'IssueID': iss['IssueID'],
                    'Issue_Number': iss['Issue_Number'],
                    'IssueDate': iss['IssueDate'],
                    'StoreDate': iss['ReleaseDate'],  #need to replace with Store date
                    'mode': 'want_ann'
                })
        stloop-=1
@@ -1397,8 +1488,10 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
        comic = myDB.action("SELECT * from comics WHERE ComicID=? AND ComicName != 'None'", [result['ComicID']]).fetchone()
        foundNZB = "none"
        SeriesYear = comic['ComicYear']
        Publisher = comic['ComicPublisher']
        AlternateSearch = comic['AlternateSearch']
        IssueDate = result['IssueDate']
        StoreDate = result['StoreDate']
        UseFuzzy = comic['UseFuzzy']
        ComicVersion = comic['ComicVersion']
        if result['IssueDate'] == None:
@@ -1406,8 +1499,8 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
        else:
            ComicYear = str(result['IssueDate'])[:4]
        mode = result['mode']
        if (mylar.NZBSU or mylar.DOGNZB or mylar.EXPERIMENTAL or mylar.NEWZNAB or mylar.NZBX or mylar.ENABLE_KAT or mylar.ENABLE_CBT) and (mylar.USE_SABNZBD or mylar.USE_NZBGET or mylar.ENABLE_TORRENTS or mylar.USE_BLACKHOLE):
            foundNZB, prov = search_init(comic['ComicName'], result['Issue_Number'], str(ComicYear), comic['ComicYear'], Publisher, IssueDate, StoreDate, result['IssueID'], AlternateSearch, UseFuzzy, ComicVersion, SARC=None, IssueArcID=None, mode=mode, rsscheck=rsscheck, ComicID=result['ComicID'])
            if foundNZB == "yes":
                #print ("found!")
                updater.foundsearch(result['ComicID'], result['IssueID'], mode=mode, provider=prov)
@@ -1426,8 +1519,10 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
        ComicID = result['ComicID']
        comic = myDB.action('SELECT * FROM comics where ComicID=?', [ComicID]).fetchone()
        SeriesYear = comic['ComicYear']
        Publisher = comic['ComicPublisher']
        AlternateSearch = comic['AlternateSearch']
        IssueDate = result['IssueDate']
        StoreDate = result['ReleaseDate']
        UseFuzzy = comic['UseFuzzy']
        ComicVersion = comic['ComicVersion']
        if result['IssueDate'] == None:
@ -1436,10 +1531,10 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
IssueYear = str(result['IssueDate'])[:4] IssueYear = str(result['IssueDate'])[:4]
foundNZB = "none" foundNZB = "none"
if (mylar.NZBSU or mylar.DOGNZB or mylar.EXPERIMENTAL or mylar.NEWZNAB or mylar.NZBX) and (mylar.USE_SABNZBD or mylar.USE_NZBGET): if (mylar.NZBSU or mylar.DOGNZB or mylar.EXPERIMENTAL or mylar.NEWZNAB or mylar.NZBX or mylar.ENABLE_KAT or mylar.ENABLE_CBT) and (mylar.USE_SABNZBD or mylar.USE_NZBGET or mylar.ENABLE_TORRENTS or mylar.USE_BLACKHOLE):
foundNZB, prov = search_init(result['ComicName'], result['Issue_Number'], str(IssueYear), comic['ComicYear'], IssueDate, result['IssueID'], AlternateSearch, UseFuzzy, ComicVersion, mode=mode, ComicID=ComicID) foundNZB, prov = search_init(comic['ComicName'], result['Issue_Number'], str(IssueYear), comic['ComicYear'], Publisher, IssueDate, StoreDate, result['IssueID'], AlternateSearch, UseFuzzy, ComicVersion, SARC=None, IssueArcID=None, mode=mode, rsscheck=rsscheck, ComicID=result['ComicID'])
if foundNZB == "yes": if foundNZB == "yes":
logger.fdebug("I found " + result['ComicName'] + ' #:' + str(result['Issue_Number'])) logger.fdebug("I found " + comic['ComicName'] + ' #:' + str(result['Issue_Number']))
updater.foundsearch(ComicID=result['ComicID'], IssueID=result['IssueID'], mode=mode, provider=prov) updater.foundsearch(ComicID=result['ComicID'], IssueID=result['IssueID'], mode=mode, provider=prov)
else: else:
pass pass
@ -1462,14 +1557,15 @@ def searchIssueIDList(issuelist):
foundNZB = "none" foundNZB = "none"
SeriesYear = comic['ComicYear'] SeriesYear = comic['ComicYear']
AlternateSearch = comic['AlternateSearch'] AlternateSearch = comic['AlternateSearch']
Publisher = comic['ComicPublisher']
UseFuzzy = comic['UseFuzzy'] UseFuzzy = comic['UseFuzzy']
ComicVersion = comic['ComicVersion'] ComicVersion = comic['ComicVersion']
if issue['IssueDate'] == None: if issue['IssueDate'] == None:
IssueYear = comic['ComicYear'] IssueYear = comic['ComicYear']
else: else:
IssueYear = str(issue['IssueDate'])[:4] IssueYear = str(issue['IssueDate'])[:4]
if (mylar.NZBSU or mylar.DOGNZB or mylar.EXPERIMENTAL or mylar.NEWZNAB or mylar.NZBX or mylar.ENABLE_CBT or mylar.ENABLE_KAT) and (mylar.USE_SABNZBD or mylar.USE_NZBGET or mylar.ENABLE_TORRENTS): if (mylar.NZBSU or mylar.DOGNZB or mylar.EXPERIMENTAL or mylar.NEWZNAB or mylar.NZBX or mylar.ENABLE_CBT or mylar.ENABLE_KAT) and (mylar.USE_SABNZBD or mylar.USE_NZBGET or mylar.ENABLE_TORRENTS or mylar.USE_BLACKHOLE):
foundNZB, prov = search_init(comic['ComicName'], issue['Issue_Number'], str(IssueYear), comic['ComicYear'], issue['IssueDate'], issue['IssueID'], AlternateSearch, UseFuzzy, ComicVersion, SARC=None, IssueArcID=None, mode=mode, ComicID=issue['ComicID']) foundNZB, prov = search_init(comic['ComicName'], issue['Issue_Number'], str(IssueYear), comic['ComicYear'], Publisher, issue['IssueDate'], issue['ReleaseDate'], issue['IssueID'], AlternateSearch, UseFuzzy, ComicVersion, SARC=None, IssueArcID=None, mode=mode, ComicID=issue['ComicID'])
if foundNZB == "yes": if foundNZB == "yes":
#print ("found!") #print ("found!")
updater.foundsearch(ComicID=issue['ComicID'], IssueID=issue['IssueID'], mode=mode, provider=prov) updater.foundsearch(ComicID=issue['ComicID'], IssueID=issue['IssueID'], mode=mode, provider=prov)


@@ -117,6 +117,11 @@ def solicit(month, year):
 connection = sqlite3.connect(str(mylardb))
 cursor = connection.cursor()
+# we should extract the issues that are being watched, but no data is available yet ('Watch For' status)
+# once we get the data, store it, wipe the existing table, retrieve the new data, populate the data into
+# the table, recheck the series against the current watchlist and then restore the Watch For data.
 cursor.executescript('drop table if exists future;')
 cursor.execute("CREATE TABLE IF NOT EXISTS future (SHIPDATE, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, FutureID text, ComicID text);")
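The drop-and-recreate refresh of the `future` table can be sketched in isolation. The schema below mirrors the diff; the in-memory database and the sample row are assumptions for demonstration only:

```python
import sqlite3

# Use an in-memory database so the sketch is self-contained.
connection = sqlite3.connect(':memory:')
cursor = connection.cursor()

# Wipe and rebuild the 'future' table, as the refresh above does.
cursor.executescript('drop table if exists future;')
cursor.execute("CREATE TABLE IF NOT EXISTS future (SHIPDATE, PUBLISHER text, ISSUE text, "
               "COMIC VARCHAR(150), EXTRA text, STATUS text, FutureID text, ComicID text);")

# Populate one invented row and read it back to confirm the schema round-trips.
cursor.execute("INSERT INTO future VALUES (?,?,?,?,?,?,?,?)",
               ('20140226', 'Marvel', '1', 'Example Comic', '', 'Wanted', 'f1', 'c1'))
connection.commit()
row = cursor.execute("SELECT PUBLISHER, STATUS FROM future").fetchone()
```

Dropping and recreating is simpler than reconciling rows in place, at the cost of losing any per-row state (hence the 'Watch For' caveat in the comment above).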


@@ -236,7 +236,7 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
 elif issuechk['Status'] == "Wanted":
     logger.fdebug('...Status already Wanted .. not changing.')
 else:
-    logger.fdebug('...Already have issue - keeping existing status of : ' + issuechk['Status'])
+    logger.fdebug('...Already have issue - keeping existing status of : ' + str(issuechk['Status']))
 if issuechk is None:
     myDB.upsert("upcoming", newValue, controlValue)
@@ -271,6 +271,8 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
 def weekly_update(ComicName,IssueNumber,CStatus,CID,futurepull=None):
+    logger.fdebug('weekly_update of table : ' + str(ComicName) + ' #:' + str(IssueNumber))
+    logger.fdebug('weekly_update of table : ' + str(CStatus))
     # here we update status of weekly table...
     # added Issue to stop false hits on series' that have multiple releases in a week
     # added CStatus to update status flags on Pullist screen
@@ -459,9 +461,9 @@ def forceRescan(ComicID,archive=None):
 rescan = myDB.action('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
 logger.info('Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in ' + rescan['ComicLocation'] )
 if archive is None:
-    fc = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], AlternateSearch=rescan['AlternateSearch'])
+    fc = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
 else:
-    fc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], AlternateSearch=rescan['AlternateSearch'])
+    fc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
 iscnt = rescan['Total']
 havefiles = 0
 if mylar.ANNUALS_ON:


@@ -119,10 +119,13 @@ class WebInterface(object):
 for curResult in issues:
     baseissues = {'skipped':1,'wanted':2,'archived':3,'downloaded':4,'ignored':5}
     for seas in baseissues:
-        if seas in curResult['Status'].lower():
-            sconv = baseissues[seas]
-            isCounts[sconv]+=1
-            continue
+        if curResult['Status'] is None:
+            continue
+        else:
+            if seas in curResult['Status'].lower():
+                sconv = baseissues[seas]
+                isCounts[sconv]+=1
+                continue
 isCounts = {
     "Skipped" : str(isCounts[1]),
     "Wanted" : str(isCounts[2]),
@@ -145,13 +148,27 @@ class WebInterface(object):
 }
 if mylar.ANNUALS_ON:
     annuals = myDB.select("SELECT * FROM annuals WHERE ComicID=?", [ComicID])
+    #we need to load in the annual['ReleaseComicName'] and annual['ReleaseComicID']
+    #then group by ReleaseComicID, in an attempt to create seperate tables for each different annual series.
+    #this should allow for annuals, specials, one-shots, etc all to be included if desired.
+    acnt = 0
+    aName = []
+    annualinfo = {}
+    for ann in annuals:
+        if not any(d.get('annualComicID', None) == str(ann['ReleaseComicID']) for d in aName):
+            aName.append({"annualComicName": ann['ReleaseComicName'],
+                          "annualComicID" : ann['ReleaseComicID']})
+            #logger.info('added : ' + str(ann['ReleaseComicID']))
+        acnt+=1
+    annualinfo = aName
+    #annualinfo['count'] = acnt
 else: annuals = None
-return serve_template(templatename="comicdetails.html", title=comic['ComicName'], comic=comic, issues=issues, comicConfig=comicConfig, isCounts=isCounts, series=series, annuals=annuals)
+return serve_template(templatename="comicdetails.html", title=comic['ComicName'], comic=comic, issues=issues, comicConfig=comicConfig, isCounts=isCounts, series=series, annuals=annuals, annualinfo=aName)
 comicDetails.exposed = True
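The annual-grouping loop above keeps one entry per ReleaseComicID via an `any()` membership test over a list of dicts. A minimal standalone sketch of that pattern (the sample rows are invented for illustration):

```python
# Collapse rows to one entry per ReleaseComicID, preserving first-seen order,
# mirroring the any()-based membership test used in the diff.
annuals = [
    {'ReleaseComicName': 'Example Annual',  'ReleaseComicID': '101'},
    {'ReleaseComicName': 'Example Annual',  'ReleaseComicID': '101'},
    {'ReleaseComicName': 'Example Special', 'ReleaseComicID': '202'},
]

aName = []
for ann in annuals:
    # d.get(...) with a default avoids KeyError on malformed entries.
    if not any(d.get('annualComicID', None) == str(ann['ReleaseComicID']) for d in aName):
        aName.append({"annualComicName": ann['ReleaseComicName'],
                      "annualComicID": ann['ReleaseComicID']})
```

This is O(n²) in the number of annuals, which is fine at comic-series scale; a set of seen IDs would make it linear if it ever mattered.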
 def searchit(self, name, issue=None, mode=None, type=None):
     if type is None: type = 'comic'  # let's default this to comic search only for the time being (will add story arc, characters, etc later)
-    else: print (str(type) + " mode enabled.")
+    else: logger.fdebug(str(type) + " mode enabled.")
     #mode dictates type of search:
     # --series ... search for comicname displaying all results
     # --pullseries ... search for comicname displaying a limited # of results based on issue
@@ -162,6 +179,12 @@ class WebInterface(object):
 if type == 'comic' and mode == 'pullseries':
     searchresults = mb.findComic(name, mode, issue=issue)
 elif type == 'comic' and mode == 'series':
+    if name.startswith('4050-'):
+        mismatch = "no"
+        comicid = re.sub('4050-','', name)
+        logger.info('Attempting to add directly by ComicVineID: ' + str(comicid) + '. I sure hope you know what you are doing.')
+        threading.Thread(target=importer.addComictoDB, args=[comicid,mismatch,None]).start()
+        raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
     searchresults = mb.findComic(name, mode, issue=None)
 elif type == 'comic' and mode == 'want':
     searchresults = mb.findComic(name, mode, issue)
@@ -279,12 +302,8 @@ class WebInterface(object):
 mismatch = "no"
 logger.info('Attempting to add directly by ComicVineID: ' + str(comicid))
 if comicid.startswith('4050-'): comicid = re.sub('4050-','', comicid)
-comicname, year = importer.addComictoDB(comicid,mismatch)
-if comicname is None:
-    logger.error('There was an error during the add, check the mylar.log file for futher details.')
-else:
-    logger.info('Sucessfully added ' + comicname + ' (' + str(year) + ') to your watchlist')
-raise cherrypy.HTTPRedirect("home")
+threading.Thread(target=importer.addComictoDB, args=[comicid,mismatch,None]).start()
+raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
 addbyid.exposed = True
 def wanted_Export(self):
@@ -456,14 +475,33 @@ class WebInterface(object):
 #in order to update to JUST CV_ONLY, we need to delete the issues for a given series so it's a clea$
 logger.fdebug("Gathering the status of all issues for the series.")
 issues = myDB.select('SELECT * FROM issues WHERE ComicID=?', [ComicID])
+annload = []  #initiate the list here so we don't error out below.
 if mylar.ANNUALS_ON:
-    issues += myDB.select('SELECT * FROM annuals WHERE ComicID=?', [ComicID])
+    #now we load the annuals into memory to pass through to importer when refreshing so that it can
+    #refresh even the manually added annuals.
+    annual_load = myDB.select('SELECT * FROM annuals WHERE ComicID=?', [ComicID])
+    logger.fdebug('checking annual db')
+    for annthis in annual_load:
+        if not any(d['ReleaseComicID'] == annthis['ReleaseComicID'] for d in annload):
+            #print 'matched on annual'
+            annload.append({
+                'ReleaseComicID':   annthis['ReleaseComicID'],
+                'ReleaseComicName': annthis['ReleaseComicName'],
+                'ComicID':          annthis['ComicID'],
+                'ComicName':        annthis['ComicName']
+            })
+            #print 'added annual'
+    issues += annual_load  #myDB.select('SELECT * FROM annuals WHERE ComicID=?', [ComicID])
 #store the issues' status for a given comicid, after deleting and readding, flip the status back to$
 logger.fdebug("Deleting all issue data.")
 myDB.select('DELETE FROM issues WHERE ComicID=?', [ComicID])
 myDB.select('DELETE FROM annuals WHERE ComicID=?', [ComicID])
 logger.fdebug("Refreshing the series and pulling in new data using only CV.")
-mylar.importer.addComictoDB(ComicID,mismatch,calledfrom='dbupdate')
+mylar.importer.addComictoDB(ComicID,mismatch,calledfrom='dbupdate',annload=annload)
+#reload the annuals here.
 issues_new = myDB.select('SELECT * FROM issues WHERE ComicID=?', [ComicID])
 annuals = []
 ann_list = []
@@ -476,17 +514,37 @@ class WebInterface(object):
 icount = 0
 for issue in issues:
     for issuenew in issues_new:
+        #logger.info('issuenew:' + str(issuenew['IssueID']) + ' : ' + str(issuenew['Status']))
+        #logger.info('issuenew:' + str(issue['IssueID']) + ' : ' + str(issue['Status']))
         if issuenew['IssueID'] == issue['IssueID'] and issuenew['Status'] != issue['Status']:
-            #if the status is now Downloaded/Snatched, keep status.
-            if issuenew['Status'] == 'Downloaded' or issue['Status'] == 'Snatched':
-                break
-            #change the status to the previous status
-            ctrlVAL = {'IssueID': issue['IssueID']}
-            newVAL = {'Status': issue['Status']}
+            ctrlVAL = {"IssueID": issue['IssueID']}
+            #if the status is None and the original status is either Downloaded / Archived, keep status & stats
+            if issuenew['Status'] == None and (issue['Status'] == 'Downloaded' or issue['Status'] == 'Archived'):
+                newVAL = {"Location":  issue['Location'],
+                          "ComicSize": issue['ComicSize'],
+                          "Status":    issue['Status']}
+            #if the status is now Downloaded/Snatched, keep status & stats (downloaded only)
+            elif issuenew['Status'] == 'Downloaded' or issue['Status'] == 'Snatched':
+                newVAL = {"Location":  issue['Location'],
+                          "ComicSize": issue['ComicSize']}
+                if issuenew['Status'] == 'Downloaded':
+                    newVAL['Status'] = issuenew['Status']
+                else:
+                    newVAL['Status'] = issue['Status']
+            elif issue['Status'] == 'Archived':
+                newVAL = {"Status":    issue['Status'],
+                          "Location":  issue['Location'],
+                          "ComicSize": issue['ComicSize']}
+            else:
+                #change the status to the previous status
+                newVAL = {"Status": issue['Status']}
             if any(d['IssueID'] == str(issue['IssueID']) for d in ann_list):
                 logger.fdebug("annual detected for " + str(issue['IssueID']) + " #: " + str(issue['Issue_Number']))
                 myDB.upsert("Annuals", newVAL, ctrlVAL)
             else:
+                #logger.info('writing issuedata: ' + str(newVAL))
                 myDB.upsert("Issues", newVAL, ctrlVAL)
             icount+=1
             break
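The status-preservation rules introduced in the hunk above can be restated as a pure function mapping (old row, freshly pulled row) to the values to upsert. Field and status names mirror the diff; the function name and sample rows are invented for illustration:

```python
# Sketch of the refresh-time status rules: decide which columns from the old
# DB row survive a fresh pull from ComicVine.
def preserve_status(old, new):
    # Fresh pull lost the status entirely: keep old status and file stats.
    if new['Status'] is None and old['Status'] in ('Downloaded', 'Archived'):
        return {'Location': old['Location'], 'ComicSize': old['ComicSize'],
                'Status': old['Status']}
    # Downloaded/Snatched: keep file stats; prefer a fresh Downloaded status.
    if new['Status'] == 'Downloaded' or old['Status'] == 'Snatched':
        val = {'Location': old['Location'], 'ComicSize': old['ComicSize']}
        val['Status'] = new['Status'] if new['Status'] == 'Downloaded' else old['Status']
        return val
    # Archived issues keep everything.
    if old['Status'] == 'Archived':
        return {'Status': old['Status'], 'Location': old['Location'],
                'ComicSize': old['ComicSize']}
    # Otherwise just restore the previous status.
    return {'Status': old['Status']}

old = {'Status': 'Snatched', 'Location': '/comics/x.cbz', 'ComicSize': 1024}
new = {'Status': 'Skipped'}
result = preserve_status(old, new)
```

The earlier code simply `break`-ed on Downloaded/Snatched, which dropped Location and ComicSize; carrying them through in `newVAL` is what keeps the 'have' stats intact across a refresh.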
@@ -613,7 +671,7 @@ class WebInterface(object):
 controlValueDict = {"IssueArcID": IssueArcID}
 newStatus = {"Status": "Wanted"}
 myDB.upsert("readinglist", newStatus, controlValueDict)
-foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, IssueDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID)
+foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, Publisher=None, IssueDate=None, StoreDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID)
 if foundcom == "yes":
     logger.info(u"Downloaded " + ComicName + " #" + ComicIssue + " (" + str(ComicYear) + ")")
 #raise cherrypy.HTTPRedirect("readlist")
@@ -627,7 +685,7 @@ class WebInterface(object):
 ComicYear = str(cyear['SHIPDATE'])[:4]
 if ComicYear == '': ComicYear = now.year
 logger.info(u"Marking " + ComicName + " " + ComicIssue + " as wanted...")
-foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, IssueDate=cyear['SHIPDATE'], IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None)
+foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, Publisher=None, IssueDate=cyear['SHIPDATE'], StoreDate=cyear['SHIPDATE'], IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None)
 if foundcom == "yes":
     logger.info(u"Downloaded " + ComicName + " " + ComicIssue )
 raise cherrypy.HTTPRedirect("pullist")
@@ -640,6 +698,7 @@ class WebInterface(object):
 if mode == 'want':
     logger.info(u"Marking " + ComicName + " issue: " + ComicIssue + " as wanted...")
     myDB.upsert("issues", newStatus, controlValueDict)
+    logger.info('Written to db.')
 else:
     logger.info(u"Marking " + ComicName + " Annual: " + ComicIssue + " as wanted...")
     myDB.upsert("annuals", newStatus, controlValueDict)
@@ -650,18 +709,31 @@ class WebInterface(object):
 #    newStatus = {"Status": "Wanted"}
 #    myDB.upsert("issues", newStatus, controlValueDict)
 #for future reference, the year should default to current year (.datetime)
+print 'before db'
 if mode == 'want':
-    issues = myDB.action("SELECT IssueDate FROM issues WHERE IssueID=?", [IssueID]).fetchone()
+    issues = myDB.action("SELECT IssueDate, ReleaseDate FROM issues WHERE IssueID=?", [IssueID]).fetchone()
 elif mode == 'want_ann':
-    issues = myDB.action("SELECT IssueDate FROM annuals WHERE IssueID=?", [IssueID]).fetchone()
+    issues = myDB.action("SELECT IssueDate, ReleaseDate FROM annuals WHERE IssueID=?", [IssueID]).fetchone()
+print 'after db'
 if ComicYear == None:
     ComicYear = str(issues['IssueDate'])[:4]
+print 'after year'
+if issues['ReleaseDate'] is None:
+    logger.info('No Store Date found for given issue. This is probably due to not Refreshing the Series beforehand.')
+    logger.info('I Will assume IssueDate as Store Date, but you should probably Refresh the Series and try again if required.')
+    storedate = issues['IssueDate']
+else:
+    storedate = issues['ReleaseDate']
+print 'there'
 miy = myDB.action("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
+print 'miy'
 SeriesYear = miy['ComicYear']
 AlternateSearch = miy['AlternateSearch']
+Publisher = miy['ComicPublisher']
 UseAFuzzy = miy['UseFuzzy']
 ComicVersion = miy['ComicVersion']
-foundcom, prov = search.search_init(ComicName, ComicIssue, ComicYear, SeriesYear, issues['IssueDate'], IssueID, AlternateSearch, UseAFuzzy, ComicVersion, mode=mode, ComicID=ComicID)
+print 'here'
+foundcom, prov = search.search_init(ComicName, ComicIssue, ComicYear, SeriesYear, Publisher, issues['IssueDate'], storedate, IssueID, AlternateSearch, UseAFuzzy, ComicVersion, mode=mode, ComicID=ComicID)
 if foundcom == "yes":
     # file check to see if issue exists and update 'have' count
     if IssueID is not None:
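The store-date fallback introduced above reduces to a small helper; a sketch under the same column names (the helper name and sample rows are invented for illustration):

```python
# Prefer the store (release) date, falling back to the cover/issue date when
# the series has not been refreshed and ReleaseDate is still unset.
def pick_store_date(issue_row):
    if issue_row['ReleaseDate'] is None:
        return issue_row['IssueDate']
    return issue_row['ReleaseDate']

stale = {'IssueDate': '2014-02-01', 'ReleaseDate': None}   # pre-refresh row
fresh = {'IssueDate': '2014-02-01', 'ReleaseDate': '2014-02-26'}
```

Searching by store date matters because cover (issue) dates often lag or lead the actual shelf date, which previously caused year mismatches against provider results.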
@@ -925,7 +997,7 @@ class WebInterface(object):
 timenow = datetime.datetime.now().strftime('%Y%m%d')  #convert to yyyymmdd
 tmpdate = re.sub("[^0-9]", "", upc['IssueDate'])  #convert date to numerics only (should be in yyyymmdd)
-logger.fdebug('comparing pubdate of: ' + str(tmpdate) + ' to now date of: ' + str(timenow))
+#logger.fdebug('comparing pubdate of: ' + str(tmpdate) + ' to now date of: ' + str(timenow))
 if int(tmpdate) >= int(timenow):
     if upc['Status'] == 'Wanted':
@@ -1009,10 +1081,14 @@ class WebInterface(object):
     raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % [comicid])
 skipped2wanted.exposed = True
-def annualDelete(self, comicid):
+def annualDelete(self, comicid, ReleaseComicID=None):
     myDB = db.DBConnection()
-    myDB.action("DELETE FROM annuals WHERE ComicID=?", [comicid])
-    logger.fdebug("Deleted all annuals from DB for ComicID of " + str(comicid))
+    if ReleaseComicID is None:
+        myDB.action("DELETE FROM annuals WHERE ComicID=?", [comicid])
+        logger.fdebug("Deleted all annuals from DB for ComicID of " + str(comicid))
+    else:
+        myDB.action("DELETE FROM annuals WHERE ReleaseComicID=?", [ReleaseComicID])
+        logger.fdebug("Deleted selected annual from DB with a ComicID of " + str(ReleaseComicID))
     raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % [comicid])
 annualDelete.exposed = True
@@ -1392,7 +1468,7 @@ class WebInterface(object):
 dstloc = os.path.join(mylar.DESTINATION_DIR, 'StoryArcs', arc['storyarc'])
 logger.fdebug('destination location set to : ' + dstloc)
-filechk = filechecker.listFiles(dstloc, arc['ComicName'], sarc='true')
+filechk = filechecker.listFiles(dstloc, arc['ComicName'], Publisher=None, sarc='true')
 fn = 0
 fccnt = filechk['comiccount']
 while (fn < fccnt):
@@ -1505,14 +1581,14 @@ class WebInterface(object):
 logger.fdebug(want['ComicName'] + " -- #" + str(want['IssueNumber']))
 logger.info(u"Story Arc : " + str(SARC) + " queueing selected issue...")
 logger.info(u"IssueArcID : " + str(IssueArcID))
-foundcom, prov = search.search_init(ComicName=want['ComicName'], IssueNumber=want['IssueNumber'], ComicYear=want['IssueYear'], SeriesYear=want['SeriesYear'], IssueDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
+foundcom, prov = search.search_init(ComicName=want['ComicName'], IssueNumber=want['IssueNumber'], ComicYear=want['IssueYear'], SeriesYear=want['SeriesYear'], Publisher=None, IssueDate=None, StoreDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
 else:
     # it's a watched series
     s_comicid = issuechk['ComicID']
     s_issueid = issuechk['IssueID']
     logger.fdebug("-- watched series queue.")
     logger.fdebug(issuechk['ComicName'] + " -- #" + str(issuechk['Issue_Number']))
-    foundcom, prov = search.search_init(ComicName=issuechk['ComicName'], IssueNumber=issuechk['Issue_Number'], ComicYear=issuechk['IssueYear'], SeriesYear=issuechk['SeriesYear'], IssueDate=None, IssueID=issuechk['IssueID'], AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID)
+    foundcom, prov = search.search_init(ComicName=issuechk['ComicName'], IssueNumber=issuechk['Issue_Number'], ComicYear=issuechk['IssueYear'], SeriesYear=issuechk['SeriesYear'], Publisher=None, IssueDate=None, StoreDate=issuechk['ReleaseDate'], IssueID=issuechk['IssueID'], AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID)
 if foundcom == "yes":
     print "sucessfully found."
@@ -1539,14 +1615,14 @@ class WebInterface(object):
 logger.fdebug(watchchk['ComicName'] + " -- #" + str(watchchk['IssueNumber']))
 logger.info(u"Story Arc : " + str(SARC) + " queueing selected issue...")
 logger.info(u"IssueArcID : " + str(IssueArcID))
-foundcom, prov = search.search_init(ComicName=watchchk['ComicName'], IssueNumber=watchchk['IssueNumber'], ComicYear=watchchk['IssueYEAR'], SeriesYear=watchchk['SeriesYear'], IssueDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
+foundcom, prov = search.search_init(ComicName=watchchk['ComicName'], IssueNumber=watchchk['IssueNumber'], ComicYear=watchchk['IssueYEAR'], SeriesYear=watchchk['SeriesYear'], Publisher=watchchk['ComicPublisher'], IssueDate=None, StoreDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
 else:
     # it's a watched series
     s_comicid = issuechk['ComicID']
     s_issueid = issuechk['IssueID']
     logger.fdebug("-- watched series queue.")
     logger.fdebug(issuechk['ComicName'] + " -- #" + str(issuechk['Issue_Number']))
-    foundcom,prov = search.search_init(ComicName=issuechk['ComicName'], IssueNumber=issuechk['Issue_Number'], ComicYear=issuechk['IssueYear'], SeriesYear=issuechk['SeriesYear'], IssueDate=None, IssueID=issuechk['IssueID'], AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
+    foundcom,prov = search.search_init(ComicName=issuechk['ComicName'], IssueNumber=issuechk['Issue_Number'], ComicYear=issuechk['IssueYear'], SeriesYear=issuechk['SeriesYear'], Publisher=None, IssueDate=None, StoreDate=issuechk['ReleaseDate'], IssueID=issuechk['IssueID'], AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
 if foundcom == "yes":
     print "sucessfully found."
     updater.foundsearch(s_comicid, s_issueid, mode=mode, provider=prov, SARC=SARC, IssueArcID=IssueArcID)
@@ -2070,6 +2146,8 @@ class WebInterface(object):
 "http_user" : mylar.HTTP_USERNAME,
 "http_port" : mylar.HTTP_PORT,
 "http_pass" : mylar.HTTP_PASSWORD,
+"api_enabled" : helpers.checked(mylar.API_ENABLED),
+"api_key" : mylar.API_KEY,
 "launch_browser" : helpers.checked(mylar.LAUNCH_BROWSER),
 "logverbose" : helpers.checked(mylar.LOGVERBOSE),
 "download_scan_interval" : mylar.DOWNLOAD_SCAN_INTERVAL,
@@ -2077,7 +2155,9 @@ class WebInterface(object):
 "nzb_startup_search" : helpers.checked(mylar.NZB_STARTUP_SEARCH),
 "libraryscan_interval" : mylar.LIBRARYSCAN_INTERVAL,
 "search_delay" : mylar.SEARCH_DELAY,
-"use_sabnzbd" : helpers.checked(mylar.USE_SABNZBD),
+"nzb_downloader_sabnzbd" : helpers.radio(int(mylar.NZB_DOWNLOADER), 0),
+"nzb_downloader_nzbget" : helpers.radio(int(mylar.NZB_DOWNLOADER), 1),
+"nzb_downloader_blackhole" : helpers.radio(int(mylar.NZB_DOWNLOADER), 2),
 "sab_host" : mylar.SAB_HOST,
 "sab_user" : mylar.SAB_USERNAME,
 "sab_api" : mylar.SAB_APIKEY,
@@ -2085,14 +2165,13 @@ class WebInterface(object):
 "sab_cat" : mylar.SAB_CATEGORY,
 "sab_priority" : mylar.SAB_PRIORITY,
 "sab_directory" : mylar.SAB_DIRECTORY,
-"use_nzbget" : helpers.checked(mylar.USE_NZBGET),
 "nzbget_host" : mylar.NZBGET_HOST,
 "nzbget_port" : mylar.NZBGET_PORT,
 "nzbget_user" : mylar.NZBGET_USERNAME,
 "nzbget_pass" : mylar.NZBGET_PASSWORD,
 "nzbget_cat" : mylar.NZBGET_CATEGORY,
 "nzbget_priority" : mylar.NZBGET_PRIORITY,
-"use_blackhole" : helpers.checked(mylar.BLACKHOLE),
+"nzbget_directory" : mylar.NZBGET_DIRECTORY,
 "blackhole_dir" : mylar.BLACKHOLE_DIR,
 "usenet_retention" : mylar.USENET_RETENTION,
 "use_nzbsu" : helpers.checked(mylar.NZBSU),
@ -2217,6 +2296,22 @@ class WebInterface(object):
error_change.exposed = True error_change.exposed = True
def manual_annual_add(self, manual_comicid, comicname, comicyear, comicid, x=None, y=None):
import urllib
b = urllib.unquote_plus(comicname)
cname = b.encode('utf-8')
print ('comicid to be attached : ' + str(manual_comicid))
print ('comicname : ' + str(cname))
print ('comicyear : ' + str(comicyear))
print ('comicid : ' + str(comicid))
issueid = manual_comicid
logger.fdebug(str(issueid) + ' added to series list as an Annual')
threading.Thread(target=importer.manualAnnual, args=[manual_comicid, comicname, comicyear, comicid]).start()
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
manual_annual_add.exposed = True
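For context, the handler above URL-decodes the comic name submitted from the comicdetails screen before threading off to importer.manualAnnual. A minimal sketch of that decode step, using the Python 3 urllib.parse equivalent of the Python 2 urllib.unquote_plus call shown above (the series name is just an example):

```python
from urllib.parse import quote_plus, unquote_plus

# A browser form submission quotes spaces and punctuation in the series name;
# manual_annual_add reverses that before handing the name to the importer.
submitted = quote_plus("Wolverine and the X-Men (2011)")
cname = unquote_plus(submitted)
print(cname)  # Wolverine and the X-Men (2011)
```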
def comic_config(self, com_location, ComicID, alt_search=None, fuzzy_year=None, comic_version=None, force_continuing=None):
myDB = db.DBConnection()
#--- this is for multipe search terms............
@@ -2334,9 +2429,9 @@ class WebInterface(object):
readOptions.exposed = True
-def configUpdate(self, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, launch_browser=0, logverbose=0, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
-use_sabnzbd=0, sab_host=None, sab_username=None, sab_apikey=None, sab_password=None, sab_category=None, sab_priority=None, sab_directory=None, log_dir=None, log_level=0, blackhole=0, blackhole_dir=None,
-use_nzbget=0, nzbget_host=None, nzbget_port=None, nzbget_username=None, nzbget_password=None, nzbget_category=None, nzbget_priority=None,
def configUpdate(self, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, api_enabled=0, api_key=None, launch_browser=0, logverbose=0, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
nzb_downloader=0, sab_host=None, sab_username=None, sab_apikey=None, sab_password=None, sab_category=None, sab_priority=None, sab_directory=None, log_dir=None, log_level=0, blackhole_dir=None,
nzbget_host=None, nzbget_port=None, nzbget_username=None, nzbget_password=None, nzbget_category=None, nzbget_priority=None, nzbget_directory=None,
usenet_retention=None, nzbsu=0, nzbsu_uid=None, nzbsu_apikey=None, dognzb=0, dognzb_uid=None, dognzb_apikey=None, nzbx=0, newznab=0, newznab_host=None, newznab_name=None, newznab_apikey=None, newznab_uid=None, newznab_enabled=0,
raw=0, raw_provider=None, raw_username=None, raw_password=None, raw_groups=None, experimental=0,
enable_meta=0, cmtagger_path=None, enable_rss=0, rss_checkinterval=None, enable_torrent_search=0, enable_kat=0, enable_cbt=0, cbt_passkey=None,
@@ -2348,6 +2443,8 @@ class WebInterface(object):
mylar.HTTP_PORT = http_port
mylar.HTTP_USERNAME = http_username
mylar.HTTP_PASSWORD = http_password
mylar.API_ENABLED = api_enabled
mylar.API_KEY = api_key
mylar.LAUNCH_BROWSER = launch_browser
mylar.LOGVERBOSE = logverbose
mylar.DOWNLOAD_SCAN_INTERVAL = download_scan_interval
@@ -2355,7 +2452,7 @@ class WebInterface(object):
mylar.NZB_STARTUP_SEARCH = nzb_startup_search
mylar.LIBRARYSCAN_INTERVAL = libraryscan_interval
mylar.SEARCH_DELAY = search_delay
-mylar.USE_SABNZBD = use_sabnzbd
mylar.NZB_DOWNLOADER = int(nzb_downloader)
mylar.SAB_HOST = sab_host
mylar.SAB_USERNAME = sab_username
mylar.SAB_PASSWORD = sab_password
@@ -2363,14 +2460,13 @@ class WebInterface(object):
mylar.SAB_CATEGORY = sab_category
mylar.SAB_PRIORITY = sab_priority
mylar.SAB_DIRECTORY = sab_directory
-mylar.USE_NZBGET = use_nzbget
mylar.NZBGET_HOST = nzbget_host
mylar.NZBGET_USERNAME = nzbget_username
mylar.NZBGET_PASSWORD = nzbget_password
mylar.NZBGET_PORT = nzbget_port
mylar.NZBGET_CATEGORY = nzbget_category
mylar.NZBGET_PRIORITY = nzbget_priority
-mylar.BLACKHOLE = blackhole
mylar.NZBGET_DIRECTORY = nzbget_directory
mylar.BLACKHOLE_DIR = blackhole_dir
mylar.USENET_RETENTION = usenet_retention
mylar.NZBSU = nzbsu
@@ -2510,7 +2606,14 @@ class WebInterface(object):
mylar.CMTAGGER_PATH = re.sub(os.path.basename(mylar.CMTAGGER_PATH), '', mylar.CMTAGGER_PATH)
logger.fdebug("Removed application name from ComicTagger path")
logger.info('nzb_downloader')
#legacy support of older config - reload into old values for consistency.
if mylar.NZB_DOWNLOADER == 0: mylar.USE_SABNZBD = True
elif mylar.NZB_DOWNLOADER == 1: mylar.USE_NZBGET = True
elif mylar.NZB_DOWNLOADER == 2: mylar.USE_BLACKHOLE = True
# Write the config
logger.info('sending to config..')
mylar.config_write()
raise cherrypy.HTTPRedirect("config")
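The legacy-support shim above maps the new integer NZB_DOWNLOADER setting (0=SABnzbd, 1=NZBGet, 2=blackhole) back onto the old boolean config options. The same mapping expressed as a small standalone function — an illustrative sketch, not code from the commit:

```python
def legacy_downloader_flags(nzb_downloader):
    """Map the new integer NZB_DOWNLOADER value back onto the old
    boolean config options, mirroring the shim in configUpdate."""
    return {
        'USE_SABNZBD': nzb_downloader == 0,
        'USE_NZBGET': nzb_downloader == 1,
        'USE_BLACKHOLE': nzb_downloader == 2,
    }

flags = legacy_downloader_flags(2)
print(flags['USE_BLACKHOLE'])  # True
```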
@@ -2600,4 +2703,29 @@ class WebInterface(object):
return cache.getArtwork(ComicID, imageURL)
getComicArtwork.exposed = True
def generateAPI(self):
import hashlib, random
apikey = hashlib.sha224( str(random.getrandbits(256)) ).hexdigest()[0:32]
logger.info("New API generated")
mylar.API_KEY = apikey
return apikey
generateAPI.exposed = True
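generateAPI derives a 32-character key by SHA-224 hashing 256 random bits and truncating the hex digest. The same recipe in Python 3 (the original is Python 2, so the string needs an explicit encode here):

```python
import hashlib
import random

# SHA-224 of 256 random bits, truncated to the first 32 hex characters
apikey = hashlib.sha224(str(random.getrandbits(256)).encode('utf-8')).hexdigest()[0:32]
print(len(apikey))  # 32
```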
def api(self, *args, **kwargs):
from mylar.api import Api
a = Api()
a.checkParams(*args, **kwargs)
data = a.fetchData()
return data
api.exposed = True
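The api handler delegates to mylar.api.Api, which follows the general request structure of the Headphones-derived API: http://localhost:8090 + HTTP_ROOT + /api?apikey=$apikey&cmd=$command, with data returned as JSON. A sketch of building such a request URL (the host, port, key, and id values here are placeholders):

```python
from urllib.parse import urlencode

base = "http://localhost:8090/api"  # placeholder host/port; prepend HTTP_ROOT if set
params = urlencode({'apikey': 'yourapikey', 'cmd': 'getComic', 'id': '2045'})
url = base + '?' + params
print(url)
```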
@@ -405,11 +405,13 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
ccname = []
pubdate = []
w = 0
wc = 0
tot = 0
chkout = []
watchfnd = []
watchfndiss = []
watchfndextra = []
alternate = []
#print ("----------WATCHLIST--------")
a_list = []
@@ -432,7 +434,7 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
w = 1
else:
#let's read in the comic.watchlist from the db here
-cur.execute("SELECT ComicID, ComicName, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing from comics")
cur.execute("SELECT ComicID, ComicName, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing, AlternateSearch from comics")
while True:
watchd = cur.fetchone()
#print ("watchd: " + str(watchd))
@@ -467,16 +469,48 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
b_list.append(watchd[2])
comicid.append(watchd[0])
pubdate.append(watchd[4])
-#print ( "Comic:" + str(a_list[w]) + " Year: " + str(b_list[w]) )
-#if "WOLVERINE AND THE X-MEN" in str(a_list[w]): a_list[w] = "WOLVERINE AND X-MEN"
lines.append(a_list[w].strip())
unlines.append(a_list[w].strip())
-llen.append(a_list[w].splitlines())
-ccname.append(a_list[w].strip())
-tmpwords = a_list[w].split(None)
-ltmpwords = len(tmpwords)
-ltmp = 1
w+=1 # we need to increment the count here, so we don't count the same comics twice (albeit with alternate names)
#here we load in the alternate search names for a series and assign them the comicid and
#alternate names
Altload = helpers.LoadAlternateSearchNames(watchd[7], watchd[0])
if Altload == 'no results':
pass
else:
wc = 0
alt_cid = Altload['ComicID']
n = 0
iscnt = Altload['Count']
while (n <= iscnt):
try:
altval = Altload['AlternateName'][n]
except IndexError:
break
cleanedname = altval['AlternateName']
a_list.append(altval['AlternateName'])
b_list.append(watchd[2])
comicid.append(alt_cid)
pubdate.append(watchd[4])
lines.append(a_list[w+wc].strip())
unlines.append(a_list[w+wc].strip())
logger.info('loading in Alternate name for ' + str(cleanedname))
n+=1
wc+=1
w+=wc
#-- to be removed -
#print ( "Comic:" + str(a_list[w]) + " Year: " + str(b_list[w]) )
#if "WOLVERINE AND THE X-MEN" in str(a_list[w]): a_list[w] = "WOLVERINE AND X-MEN"
#lines.append(a_list[w].strip())
#unlines.append(a_list[w].strip())
#llen.append(a_list[w].splitlines())
#ccname.append(a_list[w].strip())
#tmpwords = a_list[w].split(None)
#ltmpwords = len(tmpwords)
#ltmp = 1
#-- end to be removed
else:
logger.fdebug("Determined to not be a Continuing series at this time.")
cnt = int(w-1)
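The alternate-name loop above treats Altload as a dict carrying a ComicID, a Count, and a list of AlternateName entries. A hypothetical sketch of that shape and traversal — the structure is inferred from the loop; helpers.LoadAlternateSearchNames itself is not shown in this commit, and the sample values are made up:

```python
# Hypothetical return shape for helpers.LoadAlternateSearchNames
Altload = {
    'ComicID': '2045',
    'Count': 2,
    'AlternateName': [
        {'AlternateName': 'WOLVERINE AND X-MEN'},
        {'AlternateName': 'WOLVERINE & THE X-MEN'},
    ],
}

# Walk the entries the same way pullitcheck does, stopping on IndexError
names = []
n = 0
while n <= Altload['Count']:
    try:
        altval = Altload['AlternateName'][n]
    except IndexError:
        break
    names.append(altval['AlternateName'])
    n += 1
print(names)  # ['WOLVERINE AND X-MEN', 'WOLVERINE & THE X-MEN']
```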
@@ -561,6 +595,7 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
modcomicnm = re.sub(r'\s', '', modcomicnm)
logger.fdebug("watchcomic : " + str(watchcomic) + " / mod :" + str(modwatchcomic))
logger.fdebug("comicnm : " + str(comicnm) + " / mod :" + str(modcomicnm))
if comicnm == watchcomic.upper() or modcomicnm == modwatchcomic.upper():
logger.fdebug("matched on:" + comicnm + "..." + watchcomic.upper())
pass
@@ -569,6 +604,37 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
# print ( row[3] + " matched on ANNUAL")
else:
break
#this all needs to get redone, so the ability to compare issue dates can be done systematically.
#Everything below should be in its own function - at least the callable sections - in doing so, we can
#then do comparisons when two titles of the same name exist and are by definition 'current'. Issue date comparisons
#would identify the difference between two #1 titles within the same series year, but have different publishing dates.
#Wolverine (2013) & Wolverine (2014) are good examples of this situation.
#of course initially, the issue data for the newer series wouldn't have any issue data associated with it so it would be
#a null value, but given that the 2013 series (as an example) would be from 2013-05-01, it obviously wouldn't be a match to
#the current date & year (2014). Throwing out that, we could just assume that the 2014 would match the #1.
#get the issue number of the 'weeklypull' series.
#load in the actual series issue number's store-date (not publishing date)
#---use a function to check db, then return the results in a tuple/list to avoid db locks.
#if the store-date is >= weeklypull-list date then continue processing below.
#if the store-date is <= weeklypull-list date then break.
### week['ISSUE'] #issue # from pullist
### week['SHIPDATE'] #weeklypull-list date
### comicid[cnt] #comicid of matched series
datecheck = loaditup(watchcomic, comicid[cnt], week['ISSUE'])
logger.fdebug('Now checking date comparison using an issue store date of ' + str(datecheck))
if datecheck == 'no results':
pass
elif datecheck >= week['SHIPDATE']:
#logger.info('The issue date of issue #' + str(week['ISSUE']) + ' was on ' + str(datecheck) + ' which is on/ahead of ' + str(week['SHIPDATE']))
logger.fdebug('Store Date falls within acceptable range - series MATCH')
pass
elif datecheck < week['SHIPDATE']:
logger.fdebug('The issue date of issue #' + str(week['ISSUE']) + ' was on ' + str(datecheck) + ' which is prior to ' + str(week['SHIPDATE']))
break
if ("NA" not in week['ISSUE']) and ("HC" not in week['ISSUE']):
if ("COMBO PACK" not in week['EXTRA']) and ("2ND PTG" not in week['EXTRA']) and ("3RD PTG" not in week['EXTRA']):
otot+=1
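The store-date gate above works because both datecheck and week['SHIPDATE'] are ISO YYYY-MM-DD strings, which order lexicographically the same way they order chronologically. A quick illustration with placeholder dates:

```python
# ISO-formatted dates compare correctly as plain strings
shipdate = '2014-02-26'            # weekly pull-list date (example value)
assert '2014-03-05' >= shipdate    # store date on/after the pull date: series MATCH
assert '2013-05-01' < shipdate     # store date before the pull date: break
print('date comparisons hold')
```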
@@ -635,3 +701,19 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
def check(fname, txt):
with open(fname) as dataf:
return any(txt in line for line in dataf)
def loaditup(comicname, comicid, issue):
myDB = db.DBConnection()
issue_number = helpers.issuedigits(issue)
logger.fdebug('[' + comicname + '] trying to locate issue ' + str(issue) + ' to do comparative issue analysis for pull-list')
issueload = myDB.action('SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?', [comicid, issue_number]).fetchone()
if issueload is None:
logger.fdebug('No results matched for Issue number - either this is a NEW series with no data yet, or something is wrong')
return 'no results'
if issueload['ReleaseDate'] is not None and issueload['ReleaseDate'] != 'None':
logger.fdebug('Returning Release Date for issue # ' + str(issue) + ' of ' + str(issueload['ReleaseDate']))
return issueload['ReleaseDate']
else:
logger.fdebug('Returning Publication Date for issue # ' + str(issue) + ' of ' + str(issueload['PublicationDate']))
return issueload['PublicationDate']
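The tail of loaditup prefers the store (release) date and falls back to the publication date; note the database can hold the literal string 'None' as well as a real null. A standalone sketch of that selection (the function name is illustrative, not from the codebase):

```python
def pick_issue_date(row):
    """Prefer ReleaseDate (store date) over PublicationDate,
    treating both None and the literal string 'None' as missing."""
    release = row.get('ReleaseDate')
    if release is not None and release != 'None':
        return release
    return row.get('PublicationDate')

print(pick_issue_date({'ReleaseDate': 'None', 'PublicationDate': '2013-05-01'}))  # 2013-05-01
```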