IMP: Cleaned up interface for Story Arcs / Story Arc Details
IMP: Cleaned up interface for Reading List Management
IMP: Added better reading list management - new statuses (Added, Downloaded, Read)
IMP: Added a sync option for transferring the reading list to another device (i.e., a tablet) - Android only
IMP: Auto-populate new weekly pull releases to the reading list
IMP: 'Watch' option in the weekly pull list is now fully functional - it will watch ComicVine for series that do not have any series data yet (brand-new series) and auto-add them once available
IMP: The auto-watch check is run after every refresh/recreate of the weekly pull list
IMP: Improved the Add a Series option - it now looks for issues that are 'new' or 'wanted' during the add sequence
IMP: Main page interface now has coloured have/total bars to denote series completion
IMP: New scheduler / threading locks in place in an attempt to avoid database locks
FIX: Removed some erroneous locking that occurred while a directory import was running
IMP: Stat counter is now present when post-processing multiple issues in sequence
FIX: Issue-number error when post-processing an issue whose number was non-alphanumeric
FIX: Metatagging - when the original file was a .cbz, it would try to convert it and fail
FIX: Issue numbers that were negative and preceded by a '#' in the filename (filechecker)
FIX: Publishers with a non-alphanumeric character in the name when attempting to determine the publisher
FIX: With annuals enabled, search results containing annuals would incorrectly show as 'already in library'
FIX: (#944) Incorrect nzbname being used when post-processing was performed from an nzb client (experimental, mainly)
IMP: Turned off logging for the ComicVine API counter
FIX: Added retry attempts when connecting to ComicVine, to avoid errors when adding a series
IMP: (#963) Added the ability to include 'Snatched' in the filter when viewing issues on the Wanted tab
FIX: When importing and then selecting a series to import via the select screen, Mylar will now flip back to the import results and add the selected series in the background
IMP: (#952) Main page is now sorted in ascending order by Continuing/Ended status (and sub-sorted by Active/Paused status); custom sorting is still available
FIX: Dupecheck will now automatically assume that existing 0-byte files are to be overwritten when post-processing
FIX: If the publication date for a series contained a '?' (usually with brand-new series), it is forced to 'Present' so that pull-list comparisons can take place
FIX: Mylar will now disallow search results that have 'covers only' or 'variant' in the filename
IMP: Better nzbname generation/retrieval (will check inside the nzb for possible names) for use when post-processing
IMP: DB Update will now update all active comics in descending order by Latest Date (instead of in random order)
FIX: Enforce the 5-hour limit rule when running DB Update (only series that haven't been updated in more than 5 hours will be updated)
FIX: Annuals will now have/retain the proper status when doing a DB Update
FIX: Have totals will now be updated when doing a file recheck (previously they sometimes wouldn't be, depending on various status states)
FIX: (#966) Added a urllib2.URLError exception trap when checking Git for updates
IMP: Removed the individual sqlite calls for the weekly pull and brought them into line with the db module (which minimizes concurrent access, which seemed to be causing db locks)
IMP: Cleaned up some code and shuffled some functions into more appropriate locations
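The ComicVine retry fix above amounts to wrapping the API request in a bounded retry loop. A minimal sketch of that pattern, assuming urllib2 as used elsewhere in this codebase (the function name, attempt count, and delay are illustrative, not Mylar's actual values):

# Sketch only - not Mylar's actual ComicVine module.
import time
import urllib2

def fetch_with_retries(url, attempts=3, delay=5):
    """Fetch a ComicVine URL, retrying on transient network errors."""
    for attempt in range(1, attempts + 1):
        try:
            return urllib2.urlopen(url, timeout=30).read()
        except urllib2.URLError:
            if attempt == attempts:
                raise  # out of retries - let the caller log the failure
            time.sleep(delay)  # back off before the next attempt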

evilhero 2015-03-27 13:27:59 -04:00
parent 052e6ecb0b
commit cdc3e8a7a0
31 changed files with 1866 additions and 1210 deletions

View File

@ -207,25 +207,6 @@ table#annual_table td#areldate { vertical-align: middle; text-align: center; }
table#annual_table td#astatus { vertical-align: middle; text-align: center; font-size: 13px; }
table#annual_table td#aoptions { vertical-align: middle; text-align: center; }
img.albumArt { float: left; padding-right: 5px; }
div#albumheader { padding-top: 48px; height: 200px; }
div#track_wrapper { margin-left: -50px; padding-top: 20px; font-size: 16px; width: 100%; }
table#track_table th#number { text-align: right; min-width: 10px; }
table#track_table th#name { text-align: center; min-width: 350px; }
table#track_table th#duration { width: 175px; text-align: center; min-width: 100px; }
table#track_table th#location { text-align: center; width: 250px; }
table#track_table th#bitrate { text-align: center; min-width: 75px; }
table#track_table th#format { text-align: center; min-width: 75px; }
table#track_table td#number { vertical-align: middle; text-align: right; }
table#track_table td#name { vertical-align: middle; text-align: center; font-size: 15px; }
table#track_table td#duration { vertical-align: middle; text-align: center; }
table#track_table td#location { vertical-align: middle; text-align: center; font-size: 11px; }
table#track_table td#bitrate { vertical-align: middle; text-align: center; font-size: 12px; }
table#track_table td#format { vertical-align: middle; text-align: center; font-size: 12px; }
table#history_table { background-color: white; width: 100%; font-size: 13px; }
table#history_table td#dateadded { vertical-align: middle; text-align: center; min-width: 150px; font-size: 14px; }
@ -276,7 +257,6 @@ table#searchresults_table td#name { vertical-align: middle; text-align: left; mi
table#searchresults_table td#comicyear { vertical-align: middle; text-align: left; min-width: 50px; }
table#searchresults_table td#issues { vertical-align: middle; text-align: center; min-width: 50px; }
div.progress-container { border: 1px solid #ccc; width: 100px; height: 14px; margin: 2px 5px 2px 0; padding: 1px; float: left; background: white; }
.havetracks { font-size: 13px; margin-left: 36px; padding-bottom: 3px; vertical-align: middle; }
footer { margin: 20px auto 20px auto; }

View File

@ -451,9 +451,7 @@
<a href="#" title="Manually meta-tag issue" onclick="doAjaxCall('manual_metatag?dirName=${comic['ComicLocation'] |u}&issueid=${issue['IssueID']}&filename=${linky |u}&comicid=${issue['ComicID']}&comversion=${comic['ComicVersion']}',$(this),'table')" data-success="${issue['Issue_Number']} successfully tagged."><img src="interfaces/default/images/comictagger.png" height="25" width="25" class="highqual" /></a>
%endif
%endif
<!--
<a href="#" title="Add to Reading List" onclick="doAjaxCall('addtoreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="${issue['Issue_Number']} added to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" class="highqual" /></a>
-->
<a href="#" title="Add to Reading List" onclick="doAjaxCall('addtoreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="${comic['ComicName']} #${issue['Issue_Number']} added to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" class="highqual" /></a>
%else:
<a href="#" title="Retry the same download again" onclick="doAjaxCall('queueit?ComicID=${issue['ComicID']}&IssueID=${issue['IssueID']}&ComicIssue=${issue['Issue_Number']}&mode=want', $(this),'table')" data-success="Retrying the same version of '${issue['ComicName']}' '${issue['Issue_Number']}'"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
<a href="#" title="Mark issue as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${issue['IssueID']}&ComicID=${issue['ComicID']}',$(this),'table')" data-success="'${issue['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>

View File

@ -56,7 +56,8 @@
but you can contribute and support the development</br>
by buying me a coffee (or several)</strong></label></br></br>
</div>
<div style="width: 55%; margin: 0px auto;">
<div style="width: 60%; margin: 0px auto;">
<a id="navDonate" href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&amp;hosted_button_id=EWQADB5AMVRFU" rel="noreferrer" onclick="window.open('http://dereferer.org/?' + this.href); return false;"><img src="https://www.paypalobjects.com/en_US/i/btn/btn_donate_SM.gif" alt="[donate]" /></a>
<a href="https://flattr.com/submit/auto?user_id=evilhero&url=https://github.com/evilhero/mylar&title=Mylar%20Donation%description=Supporting&20the%development%20of%20Mylar&language=en_CA&hidden=1&category=software" target="_blank">
<img src="//api.flattr.com/button/flattr-badge-large.png" alt="Flattr this" title="Flattr this" border="0" align="center">
</a>

View File

@ -797,7 +797,18 @@ div#searchbar .mini-icon {
background-color: #ffb0b0;
height: 30px;
}
.Snatched {
background-color: #ebf5ff;
height: 30px;
}
.Failed {
background-color: #ff5858;
height: 30px;
}
.Ignored {
background-color: #fff5cc;
height: 30px;
}
.comictable legend {
font-size: 14px;
font-weight: bold;
@ -906,44 +917,93 @@ div#artistheader h2 a {
font-family: "Trebuchet MS", Helvetica, Arial, sans-serif;
}
#read_detail th#options {
min-width: 150px;
max-width: 150px;
text-align: left;
}
#read_detail th#comicname {
min-width: 300px;
text-align: left;
}
#read_detail th#issue,
#read_detail th#status {
min-width: 30px;
#read_detail th#issue {
max-width: 25px;
text-align: center;
}
#read_detail th#issueyear {
min-width: 40px;
#read_detail th#issueyear,
#read_detail th#status,
#read_detail th#statuschange {
max-width: 50px;
text-align: center;
}
#read_detail th#select {
max-width: 10px;
text-align: left;
vertical-align: middle;
}
#read_detail td#comicname {
min-width: 300px;
text-align: left;
vertical-align: middle;
font-size: 12px;
}
#read_detail td#status,
#read_detail td#issue {
min-width: 30px;
max-width: 25px;
text-align: left;
vertical-align: middle;
}
#read_detail td#issueyear{
min-width: 40px;
text-align: left;
#read_detail td#issueyear,
#read_detail td#status,
#read_detail td#statuschange {
max-width: 50px;
text-align: center;
vertical-align: middle;
}
#read_detail td#options {
min-width: 150px;
max-width: 150px;
text-align: left;
vertical-align: middle;
}
#read_detail td#select {
max-width: 10px;
text-align: left;
vertical-align: middle;
}
#storyarcs th#options {
max-width: 100px;
text-align: left;
}
#storyarcs th#storyarc {
min-width: 300px;
text-align: left;
}
#storyarcs th#years {
max-width: 40px;
text-align: center;
}
#storyarcs th#have {
max-width: 42px;
text-align: center;
}
#storyarcs td#storyarc {
min-width: 300px;
text-align: left;
vertical-align: middle;
font-size: 12px;
}
#storyarcs td#years {
max-width: 40px;
text-align: left;
vertical-align: middle;
}
#storyarcs td#options {
max-width: 100px;
text-align: left;
vertical-align: middle;
}
#storyarcs td#have {
max-width: 42px;
text-align: center;
vertical-align: middle;
}
#weekly_pull th#publisher {
min-width: 150px;
text-align: left;
@ -1141,7 +1201,7 @@ div#artistheader h2 a {
text-align: left;
}
#series_table th#have {
text-align: center;
text-align: left;
}
#series_table td#publisher {
min-width: 100px;
@ -1171,142 +1231,10 @@ div#artistheader h2 a {
text-align: left;
vertical-align: middle;
}
#markalbum {
position: relative;
top: 25px;
display: inline-block;
}
#albumheader {
margin-top: 50px;
min-height: 200px;
}
#albumheader #albumImg {
background: #ffffff url("../images/loader_black.gif") center no-repeat;
border: 5px solid #FFF;
-moz-box-shadow: 1px 1px 2px 0 #555555;
-webkit-box-shadow: 1px 1px 2px 0 #555555;
-o-box-shadow: 1px 1px 2px 0 #555555;
box-shadow: 1px 1px 2px 0 #555555;
float: left;
height: 200px;
margin-bottom: 30px;
margin-right: 25px;
overflow: hidden;
text-indent: -3000px;
width: 200px;
}
#albumheader p {
font-size: 16px;
line-height: 24px;
margin-bottom: 10px;
}
#albumheader h1 a {
display: inline-block;
font-size: 32px;
line-height: 35px;
margin-bottom: 3px;
font-family: "Trebuchet MS", Helvetica, Arial, sans-serif;
}
#albumheader h2 a {
display: inline-block;
font-style: italic;
font-weight: 400;
margin-bottom: 5px;
font-family: "Trebuchet MS", Helvetica, Arial, sans-serif;
}
#albumheader .albuminfo {
margin-left: 210px;
}
#albumheader .albuminfo li {
border-right: 1px dotted #ccc;
float: left;
font-size: 16px;
font-weight: bold;
list-style: none;
margin-right: 10px;
padding-right: 10px;
}
#albumheader .albuminfo li:last-child {
border: none;
}
#album_table {
background-color: #FFF;
}
#album_table th#select {
min-width: 10px;
text-align: left;
vertical-align: middle;
}
#album_table th#select input {
vertical-align: middle;
}
#album_table th#reldate {
min-width: 70px;
text-align: center;
width: 175px;
}
#album_table th#status,
#album_table th#albumart {
min-width: 50px;
text-align: left;
}
#album_table th#status {
min-width: 80px;
text-align: center;
width: 175px;
}
#album_table th#wantlossless {
min-width: 80px;
text-align: center;
width: 80px;
}
#album_table td#albumart img {
background: #FFF;
border: 1px solid #ccc;
padding: 3px;
}
#album_table td#status a#wantlossless {
white-space: nowrap;
}
#manageheader {
margin-top: 45px;
margin-bottom: 0;
}
#track_wrapper {
font-size: 16px;
padding-top: 20px;
width: 100%;
}
#track_table th#number {
min-width: 10px;
text-align: right;
}
#track_table th#name {
min-width: 350px;
text-align: center;
}
#track_table th#location {
text-align: center;
width: 250px;
}
#track_table td {
border-bottom: 1px solid #FFFFFF;
}
#track_table td#number {
text-align: right;
vertical-align: middle;
}
#track_table td#name {
font-size: 15px;
text-align: left;
vertical-align: middle;
}
#track_table td#location {
font-size: 11px;
line-height: normal;
text-align: center;
vertical-align: middle;
}
#history_table {
background-color: #FFF;
font-size: 13px;
@ -1514,32 +1442,111 @@ div#artistheader h2 a {
min-width: 95px;
vertical-align: middle;
}
.progress-container {
background: #FFF;
border: 1px solid #ccc;
float: left;
height: 14px;
margin: 2px 5px 2px 0;
padding: 1px;
width: 100px;
DIV.progress-container
{
position: relative;
width: 100px;
height: 18px;
margin: 2px 5px 2px 0;
float: left;
border:1px solid #ccc;
background-color: #F7F7F7;
background-image: -moz-linear-gradient(top, whiteSmoke, #F9F9F9);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(whiteSmoke), to(#F9F9F9));
background-image: -webkit-linear-gradient(top, whiteSmoke, #F9F9F9);
background-image: -o-linear-gradient(top, whiteSmoke, #F9F9F9);
background-image: linear-gradient(to bottom, whiteSmoke, #F9F9F9);
background-repeat: repeat-x;
-webkit-border-radius: 4px;
-moz-border-radius: 4px;
border-radius: 4px;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff5f5f5', endColorstr='#fff9f9f9', GradientType=0);
-webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1);
-moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1);
box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1);
}
.progress-container > div {
background-image: -moz-linear-gradient(#a3e532, #90cc2a) !important;
background-image: linear-gradient(#a3e532, #90cc2a) !important;
background-image: -webkit-linear-gradient(#a3e532, #90cc2a) !important;
background-image: -o-linear-gradient(#a3e532, #90cc2a) !important;
filter: progid:dximagetransform.microsoft.gradient(startColorstr=#fafafa, endColorstr=#eaeaea) !important;
-ms-filter: progid:dximagetransform.microsoft.gradient(startColorstr=#fafafa, endColorstr=#eaeaea) !important;
height: 14px;
DIV.progress-container > DIV
{
background-color: #0EBEED;
height: 18px;
-webkit-border-radius: 4px;
-moz-border-radius: 4px;
text-align: center;
z-index: 900;
}
.danger > DIV
{
background-color: #DD514C;
background-image: -moz-linear-gradient(top, #EE5F5B, #C43C35);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#EE5F5B), to(#C43C35));
background-image: -webkit-linear-gradient(top, #EE5F5B, #C43C35);
background-image: -o-linear-gradient(top, #EE5F5B, #C43C35);
background-image: linear-gradient(to bottom, #EE5F5B, #C43C35);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffee5f5b', endColorstr='#ffc43c35', GradientType=0);
}
.warning > DIV
{
background-color: #FAA732;
background-image: -moz-linear-gradient(top, #FBB450, #F89406);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#FBB450), to(#F89406));
background-image: -webkit-linear-gradient(top, #FBB450, #F89406);
background-image: -o-linear-gradient(top, #FBB450, #F89406);
background-image: linear-gradient(to bottom, #FBB450, #F89406);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fffbb450', endColorstr='#fff89406', GradientType=0);
}
.complete > DIV
{
background-color: #5EB95E;
background-image: -moz-linear-gradient(top, #62C462, #57A957);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#62C462), to(#57A957));
background-image: -webkit-linear-gradient(top, #62C462, #57A957);
background-image: -o-linear-gradient(top, #62C462, #57A957);
background-image: linear-gradient(to bottom, #62C462, #57A957);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff62c462', endColorstr='#ff57a957', GradientType=0);
}
.missing > DIV
{
background-color: #4BB1CF;
background-image: -moz-linear-gradient(top, #5BC0DE, #339BB9);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#5BC0DE), to(#339BB9));
background-image: -webkit-linear-gradient(top, #5BC0DE, #339BB9);
background-image: -o-linear-gradient(top, #5BC0DE, #339BB9);
background-image: linear-gradient(to bottom, #5BC0DE, #339BB9);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff5bc0de', endColorstr='#ff339bb9', GradientType=0);
}
.havetracks {
font-size: 11px;
font-size: 12px;
line-height: normal;
margin-left: 36px;
padding-bottom: 3px;
padding-bottom: 30px;
vertical-align: middle;
}
.progressbar-back-text
{
font-size: 12px;
vertical-align: middle;
background-color: transparent;
position: absolute;
text-align: center;
width: 100%;
text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);
z-index: 800;
}
.progressbar-front-text
{
font-size: 12px;
vertical-align: text-top;
background-color: transparent;
display: block;
width: 100%;
position: absolute;
color: #000000;
overflow: hidden;
}
#version {
color: #999;
font-size: 10px;
@ -1691,34 +1698,11 @@ div#artistheader h2 a {
.clearfix:after {
clear: both;
}
#album_table th#albumname,
#album_table th#artistname,
#upcoming_table th#comicname,
#wanted_table th#comicname {
min-width: 150px;
text-align: center;
}
#album_table th#type,
#track_table th#duration {
min-width: 100px;
text-align: center;
width: 175px;
}
#album_table th#bitrate,
#album_table th#albumformat {
min-width: 60px;
text-align: center;
}
#album_table td#select,
#album_table td#albumart {
text-align: left;
vertical-align: middle;
}
#album_table td#albumname,
#album_table td#artistname,
#album_table td#reldate,
#album_table td#type,
#track_table td#duration,
#upcoming_table td#select,
#upcoming_table td#status,
#wanted_table td#select,
@ -1726,45 +1710,18 @@ div#artistheader h2 a {
text-align: center;
vertical-align: middle;
}
#album_table td#status,
#album_table td#bitrate,
#album_table td#albumformat,
#album_table td#wantlossless {
font-size: 13px;
text-align: center;
vertical-align: middle;
}
div#albumheader .albuminfo li span,
div#artistheader h3 span {
font-weight: 400;
}
#track_table th#bitrate,
#track_table th#format,
#upcoming_table th#type,
#wanted_table th#type,
#searchresults_table th#score {
min-width: 75px;
text-align: center;
}
#track_table td#bitrate,
#track_table td#format {
font-size: 12px;
text-align: center;
vertical-align: middle;
}
#history_table td#status,
#history_table td#action {
font-size: 14px;
text-align: center;
vertical-align: middle;
}
#upcoming_table td#albumart img,
#wanted_table td#albumart img {
background: #FFF;
border: 1px solid #ccc;
padding: 3px;
}
#upcoming_table th#albumart,
#wanted_table th#albumart {
min-width: 50px;
text-align: center;

Binary file not shown (new image added: 3.1 KiB).

View File

@ -61,9 +61,8 @@
<input type="hidden" value="Go">
</div>
<table class="display" id="impresults_table">
<tr />
<tr/><tr/>
<tr><center><h3>To be Imported</h3></center></tr>
<tr><center><small>(green indicates confirmed on watchlist)</tr>
<thead>
<tr>
<th id="select"></th>
@ -114,10 +113,6 @@
%endif
</td>
</tr>
<%
myDB = db.DBConnection()
files = myDB.action("SELECT * FROM importresults WHERE ComicName=?", [result['ComicName']])
%>
%endfor
%else:
<tr>
@ -129,53 +124,6 @@
</tbody>
</table>
</form>
<table class="display" id="impresults_table">
<tr><br /></tr>
<tr><center><h3>Already on Watchlist</h3></center></tr>
<tr><center>(you need to CONFIRM the match before doing an import!)</tr>
<thead>
<tr>
<th id="select"></th>
<th id="comicname">Comic Name</th>
<th id="comicyear">Year</th>
<th id="status">Status</th>
<th id="importdate">Import Date</th>
<th id="confirmed">Confirmed</th>
<th id="addcomic">Options</th>
</tr>
</thead>
<tbody>
%if watchresults:
%for wresult in watchresults:
<tr>
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${wresult['ComicName']}" class="checkbox" /></td>
<td id="comicname"><a href="http://www.comicvine.com/volume/4050-${wresult['WatchMatch']} title="${wresult['ComicName']}" target="_blank">${wresult['ComicName']}</td>
<td id="comicissues"><title="${wresult['ComicYear']}">${wresult['ComicYear']}</td>
<td id="status">${wresult['Status']}</td>
<td id="importdate">${wresult['ImportDate']}</td>
<td id="confirmed">
<input type="text" name="confirmed" id="confirmed" size="5">
%if wresult['WatchMatch']:
<a href="confirmResult?comicname=${wresult['ComicName']}&comicid=${wresult['WatchMatch']}">Confirm</a>
%else:
No
%endif
</td>
[<a href="deleteimport?ComicName=${wresult['ComicName']}">Remove</a>]
</td>
</tr>
<%
myDB = db.DBConnection()
files = myDB.action("SELECT * FROM importresults WHERE ComicName=?", [wresult['ComicName']])
%>
%endfor
%else:
<tr>
<td colspan="100%"><center><legend>There are no results to display</legend></center></td></tr>
%endif
</tbody>
</table>
</div>
</%def>

View File

@ -53,7 +53,7 @@
<%
calledby = "web-import"
%>
<td class="add"><a href="addbyid?comicid=${result['comicid']}&calledby=${calledby}"><span class="ui-icon ui-icon-plus"></span>Add this Comic</a></td>
<td class="add"><a href="addbyid?comicid=${result['comicid']}&calledby=${calledby}&imported='yes'&ogcname=${result['ogcname']}"><span class="ui-icon ui-icon-plus"></span>Add this Comic</a></td>
%else:
<td class="add"><span class="ui-icon ui-icon-plus"></span>Already in Library</td>
%endif

View File

@ -23,6 +23,13 @@
<tbody>
%for comic in comics:
<%
if comic['percent'] == 101:
css = '<div class=\"progress-container warning\">'
if comic['percent'] == 100:
css = '<div class=\"progress-container complete\">'
if comic['percent'] < 100:
css = '<div class=\"progress-container missing\">'
if comic['Status'] == 'Paused':
grade = 'X'
elif comic['Status'] == 'Loading':
@ -31,6 +38,7 @@
grade = 'X'
else:
grade = 'A'
%>
<tr class="grade${grade}">
<td id="publisher">${comic['ComicPublisher']}</td>
@ -38,7 +46,7 @@
<td id="year"><span title="${comic['ComicYear']}"></span>${comic['ComicYear']}</td>
<td id="issue"><span title="${comic['LatestIssue']}"></span># ${comic['LatestIssue']}</td>
<td id="published">${comic['LatestDate']}</td>
<td id="have"><span title="${comic['percent']}"></span><div class="progress-container"><div style="background-color:#a3e532; height:14px; width:${comic['percent']}%"><div class="havetracks">${comic['haveissues']}/${comic['totalissues']}</div></div></div></td>
<td id="have"><span title="${comic['percent']}"></span>${css}<div style="width:${comic['percent']}%"><span class="progressbar-front-text">${comic['haveissues']}/${comic['totalissues']}</span></div></td>
<td id="status">${comic['recentstatus']}</td>
<td id="active" align="center">
%if comic['Status'] == "Active":
@ -80,7 +88,7 @@
"bStateSave": true,
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": []
"aaSorting": [[6,'asc'],[7,'asc']]
});
resetFilters("comic");
}
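The coloured have/total bars added above hinge on a small percent-to-class mapping embedded in the template. Restated as plain Python for clarity (a sketch; the 101 value appears to be a sentinel for have > total, an interpretation inferred from the template rather than stated in this diff):

# Sketch of the progress-bar class selection used in the template above.
def completion_bar(haveissues, totalissues):
    percent = (haveissues * 100.0) / totalissues if totalissues else 0
    if percent > 100:
        percent = 101  # sentinel: more issues on disk than expected
    if percent == 101:
        css = '<div class="progress-container warning">'   # orange
    elif percent == 100:
        css = '<div class="progress-container complete">'  # green
    else:
        css = '<div class="progress-container missing">'   # blue
    return percent, css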

View File

@ -128,9 +128,10 @@
<a href="#" onclick="doAjaxCall('wanted_Export',$(this))" data-sucess="Exported to Wanted list." data-error="Failed to export. Check logs"><span class="ui-icon ui-icon-refresh"></span>Export Wanted to CSV</a>
</div>
<br/><br/>
<legend>Hidden Options</legend>
<legend>Additional Options</legend>
<div class="links">
<a href="readlist">Reading List Management</a><br/>
<a href="storyarc_main">Story Arc Management</a><br/>
<a href="importResults">Import Results Management</a>
</div>
</fieldset>

View File

@ -9,11 +9,12 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_delete" href="#">Sync</a>
%if mylar.TAB_ENABLE:
<a id="menu_link_delete" href="#" onclick="doAjaxCall('syncfiles',$(this),'table')" data-success="Syncing complete.">Sync</a>
%endif
<a id="menu_link_delete" href="#" onclick="doAjaxCall('removefromreadlist?AllRead=1',$(this),'table')" data-success="All Read Records Removed">Remove Read</a>
<a id="menu_link_delete" href="#">Force New Check</a>
<a id="menu_link_refresh" href="#">Clear File Cache</a>
<a id="menu_link_refresh" href="#">Import Story Arc File</a>
</div>
</div>
</%def>
@ -23,136 +24,152 @@
<h1 class="clearfix"><img src="interfaces/default/images/ReadingList-icon.png" height="26" width="26" alt="Reading List"/>Reading List Management</h1>
</div>
<div id="tabs">
<ul>
<li><a href="#tabs-1">Issue Reading List</a></li>
<li><a href="#tabs-2">Story Arcs</a></li>
</ul>
<div id="tabs-1">
<table class="configtable" id="read_detail">
<fieldset>
<center><legend>Individual Reading Lists</legend>
<strong>(Watchlist)</strong>
<p>Issues from your watchlisted series that you've marked to add
to the Reading List go here.<br/></p></center>
</fieldset>
<ul>
<li><a href="#tabs-1">General</a></li>
<li><a href="#tabs-2">Readlist options</a></li>
</ul>
<div id="tabs-1">
<table class="comictable" summary="Comic Details">
<tr>
<td>
<fieldset>
<center><legend>Individual Reading Lists</legend>
<p>Issues from your watchlisted series that you've marked to add
to the Reading List go here.<br/></p></center>
</fieldset>
</td>
<td>
<fieldset>
<legend>Reading List Statistics</legend>
<div>
<label><strong># of Issues Added: </strong>${counts['added']}</br></label>
<label><strong># of Issues Sent to device: </strong>${counts['sent']}</br></label>
<label><strong># of Issues Read: </strong>${counts['read']}</br></label>
<label><strong> ... total in Readlist Management: </strong>${counts['total']}</br></label>
</div>
<div id="actions">
<small><a href="#" id="helpout"><span class="ui-button-icon-primary ui-icon ui-icon-help"></span>Help.</a></small>
<div id="dialog" title="Help with Reading Lists" style="display:none" class="configtable">
<p>Status definitions:</br>
Added: Issue has been added to your RL.</br>
Downloaded: Issue has been downloaded to your device.</br>
Read: Issue has been marked as Read.</br>
</p>
</div>
</div>
</fieldset>
</td>
</tr>
</table>
</div>
<div id="tabs-2">
<table class="configtable">
<td>
<form action="readlistOptions" id="chkoptions" method="GET">
<fieldset>
<legend>ReadList Options</legend>
<div class="row checkbox left clearfix">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="send2read" id="send2read" value="1" ${checked(mylar.SEND2READ)} /><label>Automatically send new pulls to Readlist (Added)</label></br>
</div>
</fieldset>
</td>
<td width="100%">
<img src="interfaces/default/images/android.png" style="float:right" height="50" width="50" />
<fieldset>
<div>
<legend>Tablet Device</legend>
<small class="heading"><span style="float: left; margin-right: .3em; margin-top: 4px;" class="ui-icon ui-icon-info"></span>Requires SFTP Server running on tablet</small>
</div>
<div class="row checkbox left clearfix">
<input id="tabenable" type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" onclick="initConfigCheckbox($this);" name="tab_enable" id="tab_enable" value="1" ${checked(mylar.TAB_ENABLE)} /><label>Enable Tablet (Android)</label>
</div>
<div class="config">
<div class="row">
<label>IP:PORT</label>
<input type="text" placeholder="IP Address of tablet" name="tab_host" value="${mylar.TAB_HOST}" size="30">
</div>
<div class="row">
<label>Username</label>
<input type="text" name="tab_user" value="${mylar.TAB_USER}" size="20">
</div>
<div class="row">
<label>Password:</label>
<input type="password" name="tab_pass" value="${mylar.TAB_PASS}" size="20">
</div>
<div class="row">
<label>Download Location:</label>
<input type="text" placeholder="Full path (or jailed path)" name="tab_directory" value="${mylar.TAB_DIRECTORY}" size="36" /></br>
</div>
</div>
</fieldset>
</td>
<div>
<input type="submit" value="Update"/>
</div>
</form>
</td>
</table>
</div>
</div>
<form action="markreads" method="get" id="markreads">
<div id="markissue" style="top:0;">
Mark selected issues as
<select name="action" onChange="doAjaxCall('markreads',$(this),'table',true);" data-error="You didn't select any issues" data-success="selected issues marked">
<option disabled="disabled" selected="selected">Choose...</option>
<option value="Added">Added</option>
<option value="Downloaded">Downloaded</option>
<option value="Read">Read</option>
<option value="Remove">Remove</option>
<option value="Send">Send</option>
</select>
<input type="hidden" value="Go">
</div>
<div class="table_wrapper">
<table class="display" id="read_detail">
<thead>
<tr>
<th id="select"><input type="checkbox" onClick="toggle(this)" /></th>
<th id="comicname">ComicName</th>
<th id="issue">Issue</th>
<th id="issueyear">Issue Date</th>
<th id="issueyear">Pub Date</th>
<th id="status">Status</th>
<th id="statuschange">Change</th>
<th id="options">Options</th>
</tr>
</thead>
<tbody>
%for issue in issuelist:
<tr>
<%
if issue['Status'] == 'Read':
grade = 'A'
elif issue['Status'] == 'Added':
grade = 'X'
elif issue['Status'] == 'Downloaded':
grade = 'C'
else:
grade = 'Z'
%>
<tr class="grade${grade}">
<td id="select"><input type="checkbox" name="${issue['IssueID']}" value="${issue['IssueID']}" class="checkbox" /></td>
<td id="comicname"><a href="comicDetails?ComicID=${issue['ComicID']}">${issue['ComicName']} (${issue['SeriesYear']})</td>
<td id="issue">${issue['Issue_Number']}</td>
<td id="issueyear">${issue['IssueDate']}</td>
<td id="status">${issue['Status']}</td>
<td id="statuschange">${issue['StatusChange']}</td>
<td id="options">
%if issue['inCacheDIR']:
<%
try:
with open(os.path.join(mylar.CACHE_DIR,issue['Location'])) as f:
linky = issue['Location']
except IOError as e:
linky = None
%>
%if linky:
<a href="cache/${linky}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the Issue" /></a>
%endif
%else:
<a onclick="doAjaxCall('downloadLocal?IssueID=${issue['IssueID']}', $(this), 'table')" ><img src="interfaces/default/images/copy_icon.png" height="25" width="25" title="Copy issue to local cache (ready for download)" /></a>
%endif
<a onclick="doAjaxCall('removefromreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="Sucessfully removed ${issue['ComicName']} #${issue['Issue_Number']} from Reading List"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" title="Remove from Reading List" /></a>
<a onclick="doAjaxCall('markasRead?IssueID=${issue['IssueID']}', $(this),'table')" data-success="Marked ${issue['ComicName']} ${issue['Issue_Number']} as Read."><img src="interfaces/default/images/wanted_icon.png" height="25" width="25" title="Mark as Read" /></a>
<a onclick="doAjaxCall('removefromreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="Sucessfully removed ${issue['ComicName']} #${issue['Issue_Number']} from Reading List"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" title="Remove from Reading List" /></a>
<a onclick="doAjaxCall('markasRead?IssueID=${issue['IssueID']}', $(this),'table')" data-success="Marked ${issue['ComicName']} ${issue['Issue_Number']} as Read."><img src="interfaces/default/images/wanted_icon.png" height="25" width="25" title="Mark as Read" /></a>
</td>
</tr>
%endfor
</tbody>
</table>
</div>
<div id="tabs-2">
<table class="configtable">
<tr>
<form action="searchit" method="get">
<input type="hidden" name="type" value="story_arc">
<input type="text" value="" placeholder="Search" onfocus="if(this.value==this.defaultValue) this.value='';" name="name" />
<span class="mini-icon"></span>
<input type="submit" value="Search"/>
</form>
<tr>
<form action="importReadlist" method="get">
<div class="row" style="float:right">
<label for="">File to import</label>
<input type="text" runat="server" value="Enter a filename to import" onfocus="if
(this.value==this.defaultValue) this.value='';" name="filename" size="70" />
<input type="submit" value="Import">
</div>
</form>
</tr>
<tr>
<form action="readOptions" id="chkoptions" method="GET">
<fieldset>
<legend>Options</legend>
<div class="row checkbox left clearfix">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" /><label>Arcs in Grabbag Directory?</label><br/>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="storyarcdir" id="storyarcdir" value="1" ${checked(mylar.STORYARCDIR)} /><label>Arcs in StoryArc Directory (off of ComicLocationRoot)?</label><br/>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" /><label>Show Downloaded Story Arc Issues on ReadingList tab</label><br/>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="read2filename" id="read2filename" value="1" ${checked(mylar.READ2FILENAME)} /><label>Append Reading # to filename</label>
</div>
</fieldset>
<div>
<input type="submit" value="Update"/>
</div>
</form>
</tr>
</tr>
</table>
<table class="configtable" id="artist_table">
<thead>
<tr>
<th id="storyarc">Story Arc</th>
<th id="issue">Issues</th>
<th id="have">Status</th>
<th id="action">Options</th>
</tr>
</thead>
<tbody>
%for item in readlist:
<%
myDB = db.DBConnection()
totalcnt = myDB.action("SELECT COUNT(*) as count FROM readinglist WHERE StoryArcID=?", [item['StoryArcID']]).fetchall()
totalarc = totalcnt[0][0]
havecnt = myDB.action("SELECT COUNT(*) as count FROM readinglist WHERE StoryArcID=? AND (Status='Downloaded' or Status='Archived')", [item['StoryArcID']]).fetchall()
havearc = havecnt[0][0]
if not havearc:
havearc = 0
try:
percent = (havearc *100.0)/totalarc
if percent > 100:
percent = 100
except (ZeroDivisionError, TypeError):
percent = 0
totalarc = '?'
%>
<tr>
<td id="storyarc"><a href="detailReadlist?StoryArcID=${item['StoryArcID']}&StoryArcName=${item['StoryArc']}">${item['StoryArc']}</a></td>
<td id="issue">${item['TotalIssues']}</td>
<td id="have"><span title="${percent}"></span><div class="progress-container"><div style="background-color:#a3e532; height:14px; width:${percent}%"><div class="havetracks">${havearc}/${totalarc}</div></div></div></td>
<td id="action">
<a title="Remove from Reading List" onclick="doAjaxCall('removefromreadlist?StoryArcID=${item['StoryArcID']}',$(this),'table')" data-success="Sucessfully removed ${item['StoryArc']} from list."><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
</td>
</tr>
%endfor
</tbody>
</table>
</div>
</div>
</%def>
<%def name="headIncludes()">
@ -160,23 +177,37 @@
</%def>
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script type="text/javascript">
$("#menu_link_scan").click(function() {
$('#chkoptions').submit();
return true;
<script src="js/libs/jquery.dataTables.min.js"></script>
<script>
function openHelp() {
$("#dialog").dialog();
};
function initThisPage(){
$(function() {
$( "#tabs" ).tabs();
});
$("#helpout").click(openHelp);
initActions();
$('#read_detail').dataTable({
"bDestroy": true,
"oLanguage": {
"sLengthMenu":"Show _MENU_ items per page",
"sEmptyTable": "<em>No History to Display</em>",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ items",
"sInfoEmpty":"Showing 0 to 0 of 0 items",
"sInfoFiltered":"(filtered from _MAX_ total items)"},
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": []
});
resetFilters("issuelist");
}
$(document).ready(function() {
initThisPage();
initActions();
initConfigCheckbox("#tabenable");
});
</script>
<script>
function initThisPage() {
jQuery( "#tabs" ).tabs();
}
$(document).ready(function() {
initThisPage();
initActions();
});
$(window).load(function(){
initFancybox();
});
</script>
</%def>

View File

@ -0,0 +1,149 @@
<%inherit file="base.html"/>
<%!
import os
import mylar
from mylar import db
from mylar.helpers import checked
%>
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_refresh" href="readlist">Reading List Management</a>
</div>
</div>
</%def>
<%def name="body()">
<div id="paddingheader">
<h1 class="clearfix"><img src="interfaces/default/images/ReadingList-icon.png" height="26" width="26" alt="Reading List"/>Story Arc Management</h1>
</div>
<div id="tabs">
<ul>
<li><a href="#tabs-1">Options</a></li>
</ul>
<div id="tabs-1">
<table class="configtable">
<tr>
<td>
<form action="searchit" method="get">
<fieldset>
<legend>Add StoryArc</legend>
<input type="hidden" name="type" value="story_arc">
<input type="text" value="" placeholder="Search for Story Arc" onfocus="if(this.value==this.defaultValue) this.value='';" name="name" />
<span class="mini-icon"></span>
<input type="submit" value="Search"/>
</fieldset>
</form>
<form action="importReadlist" method="get">
<fieldset>
<input type="text" value="" runat="server" placeholder="Enter full path to .cbl file to import" onfocus="if
(this.value==this.defaultValue) this.value='';" name="filename" size="40" />
<input type="submit" value="Import">
</fieldset>
</form>
</td>
<td>
<form action="readOptions" id="chkoptions" method="GET">
<fieldset>
<legend>Options</legend>
<div class="row checkbox left clearfix">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" /><label>Arcs in Grabbag Directory?</label>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="storyarcdir" id="storyarcdir" value="1" ${checked(mylar.STORYARCDIR)} /><label>Arcs in StoryArc Directory (off of ComicLocationRoot)?</label>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" /><label>Show Downloaded Story Arc Issues on ReadingList tab</label>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="read2filename" id="read2filename" value="1" ${checked(mylar.READ2FILENAME)} /><label>Append Reading # to filename</label>
</div>
</fieldset>
<div>
<input type="submit" value="Update"/>
</div>
</form>
</td>
</tr>
</table>
</div>
</div>
<div class="table_wrapper">
<table class="display" id="storyarcs">
<thead>
<tr>
<th id="storyarc">Story Arc</th>
<th id="years">Span Years</th>
<th id="have">Have</th>
<th id="options">Options</th>
</tr>
</thead>
<tbody>
%for item in arclist:
<%
if item['percent'] == 101:
css = '<div class=\"progress-container warning\">'
if item['percent'] == 100:
css = '<div class=\"progress-container complete\">'
if item['percent'] < 100:
css = '<div class=\"progress-container missing\">'
grade = 'A'
%>
<tr class="grade${grade}">
<td id="storyarc"><a href="detailStoryArc?StoryArcID=${item['StoryArcID']}&StoryArcName=${item['StoryArc']}">${item['StoryArc']}</a></td>
<td id="years">${item['SpanYears']}</td>
<td id="have"><span title="${item['percent']}"></span>${css}<div style="width:${item['percent']}%"><span class="progressbar-front-text">${item['Have']}/${item['Total']}</span></div></td>
<td id="options">
<a title="Remove from Story Arc Watchlist" onclick="doAjaxCall('removefromreadlist?StoryArcID=${item['StoryArcID']}',$(this),'table')" data-success="Sucessfully removed ${item['StoryArc']} from list."><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
</td>
</tr>
%endfor
</tbody>
</table>
</div>
</%def>
<%def name="headIncludes()">
<link rel="stylesheet" href="interfaces/default/css/data_table.css">
</%def>
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script type="text/javascript">
$("#menu_link_scan").click(function() {
$('#chkoptions').submit();
return true;
});
</script>
<script>
function initThisPage() {
$(function() {
$( "#tabs" ).tabs();
});
initActions();
$('#storyarcs').dataTable({
"bDestroy": true,
"oLanguage": {
"sLengthMenu":"Show _MENU_ items per page",
"sEmptyTable": "<em>No History to Display</em>",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ items",
"sInfoEmpty":"Showing 0 to 0 of 0 items",
"sInfoFiltered":"(filtered from _MAX_ total items)"},
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": []
});
resetFilters("arclist");
}
$(document).ready(function() {
initThisPage();
initActions();
});
</script>
</%def>

View File

@ -8,7 +8,9 @@
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_delete" href="#">Sync</a>
%if mylar.TAB_ENABLE:
<a id="menu_link_delete" onclick="doAjaxCall('syncfiles',$(this),'table')" data-success="Successfully sent issues to your device">Sync</a>
%endif
<a id="menu_link_delete" href="#">Remove Read</a>
<a id="menu_link_delete" href="#">Clear File Cache</a>
<a id="menu_link_refresh" onclick="doAjaxCall('ReadGetWanted?StoryArcID=${storyarcid}',$(this),'table')" data-success="Searching for Missing StoryArc Issues">Search for Missing</a>
@ -19,14 +21,20 @@
<%def name="body()">
<div id="paddingheader">
<h1 class="clearfix"><a href="readlist"><img src="interfaces/default/images/ReadingList-icon.png" height="26" width="26" alt="Reading List"/>Reading List Management</a></h1>
<h1 class="clearfix"><a href="storyarc_main"><img src="interfaces/default/images/ReadingList-icon.png" height="26" width="26" alt="Story Arc Management"/>Story Arc Management</a></h1>
</div>
<center><h1>${storyarcname}</h1></center>
<table class="configtable">
<div id="tabs">
<ul>
<li><a href="#tabs-1">Options</a></li>
</ul>
<div id="tabs-1">
<table class="configtable">
<tr>
<form action="readOptions" id="chkoptions" method="GET">
<fieldset>
<legend>Options</legend>
<div class="row checkbox left clearfix">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="storyarcdir" id="storyarcdir" value="1" ${checked(mylar.STORYARCDIR)} /><label>Should I create a Story-Arc Directory?</label><br/>
<small>Arcs in StoryArc Directory: <% sdir = os.path.join(mylar.DESTINATION_DIR, "StoryArcs") %>${sdir}</small><br/>
@ -44,9 +52,9 @@
</div>
</form>
</tr>
</table>
</table>
</div>
</div>
<table class="display" id="read_detail">
<thead>
<tr>
@ -80,7 +88,7 @@
elif item['Status'] == 'Not Watched':
grade = 'X'
else:
grade = 'A'
grade = 'Z'
%>
<tr id="${item['ReadingOrder']}" class="grade${grade}">
@ -107,7 +115,7 @@
<td id="status">${item['Status']}</td>
<td id="action">
%if item['Status'] is None or item['Status'] == None:
<a href="queueissue?ComicName=${item['ComicName'] | u}&ComicIssue=${item['IssueNumber']}&ComicYear=${issueyear}&mode=readlist&SARC=${item['StoryArc']}&IssueArcID=${item['IssueArcID']}&SeriesYear=${item['SeriesYear']}"><span class="ui-icon ui-icon-plus"></span>Grab it</a>
<a href="#" onclick="doAjaxCall('queueit?ComicName=${item['ComicName'] | u}&ComicIssue=${item['IssueNumber']}&ComicYear=${issueyear}&mode=readlist&SARC=${item['StoryArc']}&IssueArcID=${item['IssueArcID']}&SeriesYear=${item['SeriesYear']}',$(this),'table')" data-success="Now searching for ${item['ComicName']} #${item['IssueNumber']}"><span class="ui-icon ui-icon-plus"></span>Grab it</a>
%elif item['Status'] == 'Snatched':
<a href="#" onclick="doAjaxCall('queueissue?ComicName=${item['ComicName'] | u}&ComicIssue=${item['IssueNumber']}&ComicYear=${issueyear}&mode=readlist&SARC=${item['StoryArc']}&IssueArcID=${item['IssueArcID']}&SeriesYear=${item['SeriesYear']}',$(this),'table')" data-success="Trying to Retry"><span class="ui-icon ui-icon-plus"></span>Retry</a>
%endif
@ -124,9 +132,7 @@
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<!--
<script src="js/libs/jquery.dataTables.rowReordering.js"></script>
-->
<script type="text/javascript">
$("#menu_link_scan").click(function() {
$('#chkoptions').submit();
@ -134,8 +140,11 @@
});
</script>
<script>
function initThisPage() {
$(function() {
$( "#tabs" ).tabs();
});
initActions();
$('#read_detail').dataTable(
{
"bDestroy": true,
@ -149,14 +158,6 @@
"sPaginationType": "full_numbers",
"aaSorting": []
});
//
// }).rowReordering({
// sAjax: "reOrder",
// fnAlert: function(text){
// alert("Order cannot be changed.\n" + text);
// }
// });
//
resetFilters("item");
}

View File

@ -1,4 +1,7 @@
<%inherit file="base.html" />
<%!
import mylar
%>
<%def name="headerIncludes()">
<div id="subhead_container">
@ -14,19 +17,33 @@
<div class="title">
<h1 class="clearfix"><img src="interfaces/default/images/icon_wanted.png" alt="Wanted Issues"/>Wanted Issues (${wantedcount})</h1>
</div>
<form action="markissues" method="get" id="markissues">
<div id="markissue" style="top:0;">
Mark selected issues as
<select name="action" onChange="doAjaxCall('markissues',$(this),'table',true);" data-error="You didn't select any issues" data-success="selected issues marked">
<option disabled="disabled" selected="selected">Choose...</option>
<option value="Skipped">Skipped</option>
<option value="Downloaded">Downloaded</option>
<div id="checkboxControls" style="float: right; vertical-align: middle; margin: 3px; margin-top: -1px;">
<div style="padding-bottom: 5px;">
<label for="Wanted" class="checkbox inline Wanted"><input type="checkbox" id="Wanted" checked="checked" /> Wanted: <b>${isCounts['Wanted']}</b></label>
%if int(isCounts['Snatched']) > 0:
<label for="Snatched" class="checkbox inline Snatched"><input type="checkbox" id="Snatched" checked="checked" /> Snatched: <b>${isCounts['Snatched']}</b></label>
%endif
%if int(isCounts['Failed']) > 0 and mylar.FAILED_DOWNLOAD_HANDLING:
<label for="Failed" class="checkbox inline Failed"><input type="checkbox" id="Failed" checked="checked" /> Failed: <b>${isCounts['Failed']}</b></label>
%endif
</div>
</div>
<div class="table_wrapper" id="wanted_table_wrapper" >
<form action="markissues" method="get" id="markissues">
<div id="markissue" style="top:0;">
Mark selected issues as
<select name="action" onChange="doAjaxCall('markissues',$(this),'table',true);" data-error="You didn't select any issues" data-success="selected issues marked">
<option disabled="disabled" selected="selected">Choose...</option>
<option value="Skipped">Skipped</option>
<option value="Downloaded">Downloaded</option>
<option value="Archived">Archived</option>
<option value="Ignored">Ignored</option>
</select>
<input type="hidden" value="Go">
</div>
<div class="table_wrapper" id="wanted_table_wrapper" >
</select>
<input type="hidden" value="Go">
</div>
<table class="display" id="wanted_table">
<thead>
<tr>
@ -38,8 +55,19 @@
</thead>
<tbody>
%for issue in issues:
<tr class="gradeZ">
<td id="select"><input type="checkbox" name="${issue['IssueID']}" class="checkbox" /></td>
<%
if issue['Status'] == 'Wanted':
grade = 'X'
elif issue['Status'] == 'Snatched':
grade = 'C'
elif issue['Status'] == 'Failed':
grade = 'C'
else:
grade = 'Z'
%>
<tr class="${issue['Status']} grade${grade}">
<td id="select"><input type="checkbox" name="${issue['IssueID']}" class="checkbox" value="${issue['IssueID']}"/></td>
<td id="comicname"><a href="comicDetails?ComicID=${issue['ComicID']}">
<%
if any(d['IssueID'] == str(issue['IssueID']) for d in ann_list):
@ -58,6 +86,7 @@
</form>
</div>
<div class="title">
<h1 class="clearfix"><img src="interfaces/default/images/icon_upcoming.png" alt="Upcoming Issues"/>Upcoming Issues</h1>
</div>
@ -166,7 +195,41 @@
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script>
<script>
// show/hide different types of rows when the checkboxes are changed
$("#checkboxControls input").change(function(e){
var whichClass = $(this).attr('id')
$(this).showHideRows(whichClass)
return
$('tr.'+whichClass).each(function(i){
$(this).toggle();
});
});
// initially show/hide all the rows according to the checkboxes
$("#checkboxControls input").each(function(e){
var status = this.checked;
$("tr."+$(this).attr('id')).each(function(e){
if (status) {
$(this).show();
} else {
$(this).hide();
}
});
});
$.fn.showHideRows = function(whichClass){
var status = $('#checkboxControls > input, #'+whichClass).prop('checked')
$("tr."+whichClass).each(function(e){
if (status) {
$(this).show();
} else {
$(this).hide();
}
});
}
function initThisPage() {
$(function() {

View File

@ -24,7 +24,6 @@ import logging
import mylar
import subprocess
import urllib2
import sqlite3
from xml.dom.minidom import parseString
@ -259,10 +258,10 @@ class PostProcessor(object):
if 'annual' in temploc.lower():
biannchk = re.sub('-', '', temploc.lower()).strip()
if 'biannual' in biannchk:
logger.info(module + ' Bi-Annual detected.')
logger.fdebug(module + ' Bi-Annual detected.')
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
else:
logger.info(module + ' Annual detected.')
logger.fdebug(module + ' Annual detected.')
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
annchk = "yes"
issuechk = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'],fcdigit]).fetchone()
@ -341,7 +340,7 @@ class PostProcessor(object):
#replace spaces
nzbname = re.sub(' ', '.', str(nzbname))
nzbname = re.sub('[\,\:\?\']', '', str(nzbname))
nzbname = re.sub('[\,\:\?\'\(\)]', '', str(nzbname))
nzbname = re.sub('[\&]', 'and', str(nzbname))
nzbname = re.sub('_', '.', str(nzbname))
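The widened strip pattern above now removes parentheses as well, so a release name such as 'Foo Bar (2015) #1' survives post-processing. A self-contained restatement of the sanitization steps (equivalent raw-string regexes; behaviour unchanged):

# Sketch of the nzbname sanitization above, with raw-string patterns.
import re

def clean_nzbname(nzbname):
    nzbname = re.sub(' ', '.', str(nzbname))    # spaces -> dots
    nzbname = re.sub(r"[,:?'()]", '', nzbname)  # strip punctuation (now incl. parens)
    nzbname = re.sub('&', 'and', nzbname)       # ampersands -> 'and'
    nzbname = re.sub('_', '.', nzbname)         # underscores -> dots
    return nzbname

# clean_nzbname('Foo Bar (2015) #1') -> 'Foo.Bar.2015.#1'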
@ -552,15 +551,18 @@ class PostProcessor(object):
logger.info(module + ' No matches for Manual Run ... exiting.')
return
i = 1
for ml in manual_list:
comicid = ml['ComicID']
issueid = ml['IssueID']
issuenumOG = ml['IssueNumber']
dupthis = helpers.duplicate_filecheck(ml['ComicLocation'], ComicID=comicid, IssueID=issueid)
if dupthis == "write":
self.Process_next(comicid,issueid,issuenumOG,ml)
stat = ' [' + str(i) + '/' + str(len(manual_list)) + ']'
self.Process_next(comicid,issueid,issuenumOG,ml,stat)
dupthis = None
logger.info(module + ' Manual post-processing completed.')
i+=1
logger.info(module + ' Manual post-processing completed for ' + str(i) + ' issues.')
return
else:
comicid = issuenzb['ComicID']
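The manual-run loop above threads a simple ' [i/N]' counter string into Process_next so each post-processing log line shows overall progress. The same idea in isolation (illustrative names; the real loop also runs a duplicate-file check before processing each issue):

# Sketch of the [i/N] stat counter added to manual post-processing.
def process_all(manual_list, process_next):
    total = len(manual_list)
    for i, ml in enumerate(manual_list, start=1):
        stat = ' [%d/%d]' % (i, total)
        process_next(ml['ComicID'], ml['IssueID'], ml['IssueNumber'], ml, stat)
    return total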
@ -578,7 +580,8 @@ class PostProcessor(object):
return self.queue.put(self.valreturn)
def Process_next(self,comicid,issueid,issuenumOG,ml=None):
def Process_next(self,comicid,issueid,issuenumOG,ml=None,stat=None):
if stat is None: stat = ' [1/1]'
module = self.module
annchk = "no"
extensions = ('.cbr', '.cbz')
@ -598,12 +601,11 @@ class PostProcessor(object):
issuenzb = myDB.selectone("SELECT * from annuals WHERE issueid=? and comicid=?", [issueid,comicid]).fetchone()
annchk = "yes"
if annchk == "no":
logger.info(module + ' Starting Post-Processing for ' + issuenzb['ComicName'] + ' issue: ' + str(issuenzb['Issue_Number']))
logger.info(module + stat + ' Starting Post-Processing for ' + issuenzb['ComicName'] + ' issue: ' + issuenzb['Issue_Number'])
else:
logger.info(module + ' Starting Post-Processing for ' + issuenzb['ReleaseComicName'] + ' issue: ' + str(issuenzb['Issue_Number']))
logger.info(module + stat + ' Starting Post-Processing for ' + issuenzb['ReleaseComicName'] + ' issue: ' + issuenzb['Issue_Number'])
logger.fdebug(module + ' issueid: ' + str(issueid))
logger.fdebug(module + ' issuenumOG: ' + str(issuenumOG))
logger.fdebug(module + ' issuenumOG: ' + issuenumOG)
#issueno = str(issuenum).split('.')[0]
#new CV API - removed all decimals...here we go AGAIN!
issuenum = issuenzb['Issue_Number']
@ -623,6 +625,16 @@ class PostProcessor(object):
issuenum = re.sub("[^0-9]", "", issuenum)
issue_except = '.NOW'
elif u'\xbd' in issuenum:
issuenum = '0.5'
elif u'\xbc' in issuenum:
issuenum = '0.25'
elif u'\xbe' in issuenum:
issuenum = '0.75'
elif u'\u221e' in issuenum:
#issnum = utf-8 will encode the infinity symbol without any help
issuenum = 'infinity'
if '.' in issuenum:
iss_find = issuenum.find('.')
iss_b4dec = issuenum[:iss_find]
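The new branches above normalize a handful of unicode glyphs that turn up in ComicVine issue numbers before any numeric parsing happens. The same mapping in compact form (a sketch, matching the code points used above):

# -*- coding: utf-8 -*-
# Sketch of the unicode issue-number normalization added above.
GLYPHS = {
    u'\xbd': '0.5',         # vulgar fraction 1/2
    u'\xbc': '0.25',        # vulgar fraction 1/4
    u'\xbe': '0.75',        # vulgar fraction 3/4
    u'\u221e': 'infinity',  # infinity symbol
}

def normalize_issuenum(issuenum):
    for glyph, value in GLYPHS.items():
        if glyph in issuenum:
            return value
    return issuenum

assert normalize_issuenum(u'\xbd') == '0.5'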
@ -646,7 +658,7 @@ class PostProcessor(object):
logger.fdebug(module + ' Issue Number: ' + str(iss))
else:
iss = issuenum
issueno = str(iss)
issueno = iss
# issue zero-suppression here
if mylar.ZERO_LEVEL == "0":
@ -1013,20 +1025,20 @@ class PostProcessor(object):
if annchk == "no":
updater.foundsearch(comicid, issueid, down=downtype, module=module)
dispiss = 'issue: ' + str(issuenumOG)
dispiss = 'issue: ' + issuenumOG
else:
updater.foundsearch(comicid, issueid, mode='want_ann', down=downtype, module=module)
if 'annual' not in series.lower():
dispiss = 'annual issue: ' + str(issuenumOG)
dispiss = 'annual issue: ' + issuenumOG
else:
dispiss = str(issuenumOG)
dispiss = issuenumOG
#force rescan of files
updater.forceRescan(comicid,module=module)
if mylar.WEEKFOLDER:
#if enabled, will *copy* the post-processed file to the weeklypull list folder for the given week.
weeklypull.weekly_singlecopy(comicid,issuenum,str(nfilename+ext),dst,module=module)
weeklypull.weekly_singlecopy(comicid,issuenum,str(nfilename+ext),dst,module=module,issueid=issueid)
# retrieve/create the corresponding comic objects
if mylar.ENABLE_EXTRA_SCRIPTS:

View File

@ -37,7 +37,7 @@ from lib.configobj import ConfigObj
import cherrypy
from mylar import logger, versioncheck, rsscheck, search, PostProcessor, weeklypull, helpers #versioncheckit, searchit, weeklypullit, dbupdater, scheduler
from mylar import logger, versioncheckit, rsscheckit, searchit, weeklypullit, dbupdater, PostProcessor, helpers, scheduler #versioncheck, rsscheck, search, PostProcessor, weeklypull, helpers, scheduler
FULL_PATH = None
PROG_DIR = None
@ -65,13 +65,14 @@ WRITELOCK = False
LOGTYPE = None
## for use with updated scheduler (not working atm)
#INIT_LOCK = Lock()
#dbUpdateScheduler = None
#searchScheduler = None
#RSSScheduler = None
#WeeklyScheduler = None
#VersionScheduler = None
#FolderMonitorScheduler = None
INIT_LOCK = Lock()
dbUpdateScheduler = None
searchScheduler = None
RSSScheduler = None
WeeklyScheduler = None
VersionScheduler = None
FolderMonitorScheduler = None
QUEUE = Queue.Queue()
DATA_DIR = None
@ -278,6 +279,12 @@ CV_ONETIMER = 1
GRABBAG_DIR = None
HIGHCOUNT = 0
READ2FILENAME = 0
SEND2READ = 0
TAB_ENABLE = 0
TAB_HOST = None
TAB_USER = None
TAB_PASS = None
TAB_DIRECTORY = None
STORYARCDIR = 0
COPY2ARCDIR = 0
@ -378,7 +385,6 @@ def check_setting_str(config, cfg_name, item_name, def_val, log=True):
def initialize():
with INIT_LOCK:
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_COUNT, CVAPI_TIME, CVAPI_MAX, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, LOGVERBOSE, OLDCONFIG_VERSION, OS_DETECT, OS_LANG, OS_ENCODING, \
queue, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, API_ENABLED, API_KEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, AUTO_UPDATE, \
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, USER_AGENT, DESTINATION_DIR, MULTIPLE_DEST_DIRS, CREATE_FOLDERS, \
@ -393,7 +399,7 @@ def initialize():
ENABLE_RSS, RSS_CHECKINTERVAL, RSS_LASTRUN, FAILED_DOWNLOAD_HANDLING, FAILED_AUTO, ENABLE_TORRENT_SEARCH, ENABLE_KAT, KAT_PROXY, ENABLE_CBT, CBT_PASSKEY, SNATCHEDTORRENT_NOTIFY, \
PROWL_ENABLED, PROWL_PRIORITY, PROWL_KEYS, PROWL_ONSNATCH, NMA_ENABLED, NMA_APIKEY, NMA_PRIORITY, NMA_ONSNATCH, PUSHOVER_ENABLED, PUSHOVER_PRIORITY, PUSHOVER_APIKEY, PUSHOVER_USERKEY, PUSHOVER_ONSNATCH, BOXCAR_ENABLED, BOXCAR_ONSNATCH, BOXCAR_TOKEN, \
PUSHBULLET_ENABLED, PUSHBULLET_APIKEY, PUSHBULLET_DEVICEID, PUSHBULLET_ONSNATCH, LOCMOVE, NEWCOM_DIR, FFTONEWCOM_DIR, \
PREFERRED_QUALITY, MOVE_FILES, RENAME_FILES, LOWERCASE_FILENAMES, USE_MINSIZE, MINSIZE, USE_MAXSIZE, MAXSIZE, CORRECT_METADATA, FOLDER_FORMAT, FILE_FORMAT, REPLACE_CHAR, REPLACE_SPACES, ADD_TO_CSV, CVINFO, LOG_LEVEL, POST_PROCESSING, POST_PROCESSING_SCRIPT, SEARCH_DELAY, GRABBAG_DIR, READ2FILENAME, STORYARCDIR, COPY2ARCDIR, CVURL, CVAPIFIX, CHECK_FOLDER, ENABLE_CHECK_FOLDER, \
PREFERRED_QUALITY, MOVE_FILES, RENAME_FILES, LOWERCASE_FILENAMES, USE_MINSIZE, MINSIZE, USE_MAXSIZE, MAXSIZE, CORRECT_METADATA, FOLDER_FORMAT, FILE_FORMAT, REPLACE_CHAR, REPLACE_SPACES, ADD_TO_CSV, CVINFO, LOG_LEVEL, POST_PROCESSING, POST_PROCESSING_SCRIPT, SEARCH_DELAY, GRABBAG_DIR, READ2FILENAME, SEND2READ, TAB_ENABLE, TAB_HOST, TAB_USER, TAB_PASS, TAB_DIRECTORY, STORYARCDIR, COPY2ARCDIR, CVURL, CVAPIFIX, CHECK_FOLDER, ENABLE_CHECK_FOLDER, \
COMIC_LOCATION, QUAL_ALTVERS, QUAL_SCANNER, QUAL_TYPE, QUAL_QUALITY, ENABLE_EXTRA_SCRIPTS, EXTRA_SCRIPTS, ENABLE_PRE_SCRIPTS, PRE_SCRIPTS, PULLNEW, ALT_PULL, COUNT_ISSUES, COUNT_HAVES, COUNT_COMICS, SYNO_FIX, CHMOD_FILE, CHMOD_DIR, ANNUALS_ON, CV_ONLY, CV_ONETIMER, WEEKFOLDER, UMASK
if __INITIALIZED__:
@ -523,6 +529,12 @@ def initialize():
HIGHCOUNT = check_setting_str(CFG, 'General', 'highcount', '')
if not HIGHCOUNT: HIGHCOUNT = 0
READ2FILENAME = bool(check_setting_int(CFG, 'General', 'read2filename', 0))
SEND2READ = bool(check_setting_int(CFG, 'General', 'send2read', 0))
TAB_ENABLE = bool(check_setting_int(CFG, 'General', 'tab_enable', 0))
TAB_HOST = check_setting_str(CFG, 'General', 'tab_host', '')
TAB_USER = check_setting_str(CFG, 'General', 'tab_user', '')
TAB_PASS = check_setting_str(CFG, 'General', 'tab_pass', '')
TAB_DIRECTORY = check_setting_str(CFG, 'General', 'tab_directory', '')
STORYARCDIR = bool(check_setting_int(CFG, 'General', 'storyarcdir', 0))
COPY2ARCDIR = bool(check_setting_int(CFG, 'General', 'copy2arcdir', 0))
PROWL_ENABLED = bool(check_setting_int(CFG, 'Prowl', 'prowl_enabled', 0))
@ -981,48 +993,48 @@ def initialize():
#start the db write only thread here.
#this is a thread that continually runs in the background as the ONLY thread that can write to the db.
# logger.info('Starting Write-Only thread.')
#logger.info('Starting Write-Only thread.')
#db.WriteOnly()
#initialize the scheduler threads here.
#dbUpdateScheduler = scheduler.Scheduler(action=dbupdater.dbUpdate(),
# cycleTime=datetime.timedelta(hours=48),
# runImmediately=False,
# threadName="DBUPDATE")
dbUpdateScheduler = scheduler.Scheduler(action=dbupdater.dbUpdate(),
cycleTime=datetime.timedelta(hours=48),
runImmediately=False,
threadName="DBUPDATE")
# if NZB_STARTUP_SEARCH:
# searchrunmode = True
# else:
# searchrunmode = False
if NZB_STARTUP_SEARCH:
searchrunmode = True
else:
searchrunmode = False
#searchScheduler = scheduler.Scheduler(searchit.CurrentSearcher(),
# cycleTime=datetime.timedelta(minutes=SEARCH_INTERVAL),
# threadName="SEARCH",
# runImmediately=searchrunmode)
searchScheduler = scheduler.Scheduler(searchit.CurrentSearcher(),
cycleTime=datetime.timedelta(minutes=SEARCH_INTERVAL),
threadName="SEARCH",
runImmediately=searchrunmode)
#RSSScheduler = scheduler.Scheduler(rsscheckit.tehMain(),
# cycleTime=datetime.timedelta(minutes=int(RSS_CHECKINTERVAL)),
# threadName="RSSCHECK",
# runImmediately=True,
# delay=30)
RSSScheduler = scheduler.Scheduler(rsscheckit.tehMain(),
cycleTime=datetime.timedelta(minutes=int(RSS_CHECKINTERVAL)),
threadName="RSSCHECK",
runImmediately=True,
delay=30)
#WeeklyScheduler = scheduler.Scheduler(weeklypullit.Weekly(),
# cycleTime=datetime.timedelta(hours=24),
# threadName="WEEKLYCHECK",
# runImmediately=True,
# delay=10)
WeeklyScheduler = scheduler.Scheduler(weeklypullit.Weekly(),
cycleTime=datetime.timedelta(hours=24),
threadName="WEEKLYCHECK",
runImmediately=True,
delay=10)
#VersionScheduler = scheduler.Scheduler(versioncheckit.CheckVersion(),
# cycleTime=datetime.timedelta(minutes=CHECK_GITHUB_INTERVAL),
# threadName="VERSIONCHECK",
# runImmediately=True)
VersionScheduler = scheduler.Scheduler(versioncheckit.CheckVersion(),
cycleTime=datetime.timedelta(minutes=CHECK_GITHUB_INTERVAL),
threadName="VERSIONCHECK",
runImmediately=False)
#FolderMonitorScheduler = scheduler.Scheduler(PostProcessor.FolderCheck(),
# cycleTime=datetime.timedelta(minutes=int(DOWNLOAD_SCAN_INTERVAL)),
# threadName="FOLDERMONITOR",
# runImmediately=True,
# delay=60)
FolderMonitorScheduler = scheduler.Scheduler(PostProcessor.FolderCheck(),
cycleTime=datetime.timedelta(minutes=int(DOWNLOAD_SCAN_INTERVAL)),
threadName="FOLDERMONITOR",
runImmediately=True,
delay=60)
# Store the original umask
UMASK = os.umask(0)
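The schedulers above are thin thread wrappers around action objects (each exposing a run() method). A minimal sketch of that pattern follows; the real scheduler.Scheduler internals are not shown in this diff, so the cycleTime/runImmediately/delay/abort semantics here are inferred from how the instances are constructed above and torn down in halt() further below:

import datetime
import threading
import time

class SchedulerSketch(object):
    #illustrative only; not Mylar's actual scheduler.Scheduler
    def __init__(self, action, cycleTime, threadName, runImmediately=True, delay=0):
        self.action = action              #any object with a run() method
        self.cycleTime = cycleTime        #datetime.timedelta between runs
        self.delay = delay                #seconds to wait before the first check
        self.abort = False                #halt() flips this, then join()s the thread
        if runImmediately:
            self.lastRun = datetime.datetime.now() - cycleTime
        else:
            self.lastRun = datetime.datetime.now()
        self.thread = threading.Thread(target=self._loop, name=threadName)

    def _loop(self):
        time.sleep(self.delay)
        while not self.abort:
            if datetime.datetime.now() - self.lastRun >= self.cycleTime:
                self.lastRun = datetime.datetime.now()
                self.action.run()
            time.sleep(1)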
@ -1186,6 +1198,12 @@ def config_write():
new_config['General']['grabbag_dir'] = GRABBAG_DIR
new_config['General']['highcount'] = HIGHCOUNT
new_config['General']['read2filename'] = int(READ2FILENAME)
new_config['General']['send2read'] = int(SEND2READ)
new_config['General']['tab_enable'] = int(TAB_ENABLE)
new_config['General']['tab_host'] = TAB_HOST
new_config['General']['tab_user'] = TAB_USER
new_config['General']['tab_pass'] = TAB_PASS
new_config['General']['tab_directory'] = TAB_DIRECTORY
new_config['General']['storyarcdir'] = int(STORYARCDIR)
new_config['General']['copy2arcdir'] = int(COPY2ARCDIR)
new_config['General']['use_minsize'] = int(USE_MINSIZE)
@ -1338,9 +1356,9 @@ def config_write():
def start():
global __INITIALIZED__, started
#dbUpdateScheduler, searchScheduler, RSSScheduler, \
#WeeklyScheduler, VersionScheduler, FolderMonitorScheduler
global __INITIALIZED__, started, \
dbUpdateScheduler, searchScheduler, RSSScheduler, \
WeeklyScheduler, VersionScheduler, FolderMonitorScheduler
with INIT_LOCK:
@ -1350,52 +1368,51 @@ def start():
#from mylar import updater, search, PostProcessor
SCHED.add_interval_job(updater.dbUpdate, hours=48)
SCHED.add_interval_job(search.searchforissue, minutes=SEARCH_INTERVAL)
#SCHED.add_interval_job(updater.dbUpdate, hours=48)
#SCHED.add_interval_job(search.searchforissue, minutes=SEARCH_INTERVAL)
#start the db updater scheduler
#logger.info('Initializing the DB Updater.')
#dbUpdateScheduler.thread.start()
logger.info('Initializing the DB Updater.')
dbUpdateScheduler.thread.start()
#start the search scheduler
#searchScheduler.thread.start()
searchScheduler.thread.start()
helpers.latestdate_fix()
#start the ComicVine API Counter here.
logger.info('Initiating the ComicVine API Checker to report API hits every 5 minutes.')
SCHED.add_interval_job(helpers.cvapi_check, minutes=5)
#SCHED.add_interval_job(helpers.cvapi_check, minutes=5)
#initiate startup rss feeds for torrents/nzbs here...
if ENABLE_RSS:
SCHED.add_interval_job(rsscheck.tehMain, minutes=int(RSS_CHECKINTERVAL))
#RSSScheduler.thread.start()
#SCHED.add_interval_job(rsscheck.tehMain, minutes=int(RSS_CHECKINTERVAL))
RSSScheduler.thread.start()
logger.info('Initiating startup-RSS feed checks.')
rsscheck.tehMain()
#rsscheck.tehMain()
#weekly pull list gets messed up if it's not populated first, so let's populate it then set the scheduler.
logger.info('Checking for existence of Weekly Comic listing...')
PULLNEW = 'no' #reset the indicator here.
threading.Thread(target=weeklypull.pullit).start()
#PULLNEW = 'no' #reset the indicator here.
#threading.Thread(target=weeklypull.pullit).start()
#now the scheduler (check every 24 hours)
SCHED.add_interval_job(weeklypull.pullit, hours=24)
#WeeklyScheduler.thread.start()
#SCHED.add_interval_job(weeklypull.pullit, hours=24)
WeeklyScheduler.thread.start()
#let's do a run at the Wanted issues here (on startup) if enabled.
if NZB_STARTUP_SEARCH:
threading.Thread(target=search.searchforissue).start()
#if NZB_STARTUP_SEARCH:
# threading.Thread(target=search.searchforissue).start()
if CHECK_GITHUB:
#VersionScheduler.thread.start()
SCHED.add_interval_job(versioncheck.checkGithub, minutes=CHECK_GITHUB_INTERVAL)
VersionScheduler.thread.start()
#SCHED.add_interval_job(versioncheck.checkGithub, minutes=CHECK_GITHUB_INTERVAL)
#run checkFolder every X minutes (basically Manual Run Post-Processing)
if ENABLE_CHECK_FOLDER:
if DOWNLOAD_SCAN_INTERVAL >0:
logger.info('Enabling folder monitor for : ' + str(CHECK_FOLDER) + ' every ' + str(DOWNLOAD_SCAN_INTERVAL) + ' minutes.')
#FolderMonitorScheduler.thread.start()
SCHED.add_interval_job(helpers.checkFolder, minutes=int(DOWNLOAD_SCAN_INTERVAL))
FolderMonitorScheduler.thread.start()
#SCHED.add_interval_job(helpers.checkFolder, minutes=int(DOWNLOAD_SCAN_INTERVAL))
else:
logger.error('You need to specify a monitoring time for the check folder option to work')
SCHED.start()
@ -1413,7 +1430,7 @@ def dbcheck():
c_error = 'sqlite3.OperationalError'
c=conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS issues (IssueID TEXT, ComicName TEXT, IssueName TEXT, Issue_Number TEXT, DateAdded TEXT, Status TEXT, Type TEXT, ComicID TEXT, ArtworkURL Text, ReleaseDate TEXT, Location TEXT, IssueDate TEXT, Int_IssueNumber INT, ComicSize TEXT, AltIssueNumber TEXT, IssueDate_Edit TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS snatched (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Size INTEGER, DateAdded TEXT, Status TEXT, FolderName TEXT, ComicID TEXT, Provider TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS upcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Status TEXT, DisplayComicName TEXT)')
@ -1421,13 +1438,13 @@ def dbcheck():
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE text, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text)')
# c.execute('CREATE TABLE IF NOT EXISTS sablog (nzo_id TEXT, ComicName TEXT, ComicYEAR TEXT, ComicIssue TEXT, name TEXT, nzo_complete TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT, StatusChange TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readinglist(StoryArcID TEXT, ComicName TEXT, IssueNumber TEXT, SeriesYear TEXT, IssueYEAR TEXT, StoryArc TEXT, TotalIssues TEXT, Status TEXT, inCacheDir TEXT, Location TEXT, IssueArcID TEXT, ReadingOrder INT, IssueID TEXT, ComicID TEXT, StoreDate TEXT, IssueDate TEXT, Publisher TEXT, IssuePublisher TEXT, IssueName TEXT, CV_ArcID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS annuals (IssueID TEXT, Issue_Number TEXT, IssueName TEXT, IssueDate TEXT, Status TEXT, ComicID TEXT, GCDComicID TEXT, Location TEXT, ComicSize TEXT, Int_IssueNumber INT, ComicName TEXT, ReleaseDate TEXT, ReleaseComicID TEXT, ReleaseComicName TEXT, IssueDate_Edit TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS rssdb (Title TEXT UNIQUE, Link TEXT, Pubdate TEXT, Site TEXT, Size TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS futureupcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Publisher TEXT, Status TEXT, DisplayComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS failed (ID TEXT, Status TEXT, ComicID TEXT, IssueID TEXT, Provider TEXT, ComicName TEXT, Issue_Number TEXT, NZBName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS searchresults (SRID TEXT, results Numeric, Series TEXT, publisher TEXT, haveit TEXT, name TEXT, deck TEXT, url TEXT, description TEXT, comicid TEXT, comicimage TEXT, issues TEXT, comicyear TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS searchresults (SRID TEXT, results Numeric, Series TEXT, publisher TEXT, haveit TEXT, name TEXT, deck TEXT, url TEXT, description TEXT, comicid TEXT, comicimage TEXT, issues TEXT, comicyear TEXT, ogcname TEXT)')
conn.commit()
c.close()
#new
@ -1447,14 +1464,17 @@ def dbcheck():
c.execute('SELECT QUALalt_vers from comics')
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN QUALalt_vers TEXT')
try:
c.execute('SELECT QUALtype from comics')
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN QUALtype TEXT')
try:
c.execute('SELECT QUALscanner from comics')
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN QUALscanner TEXT')
try:
c.execute('SELECT QUALquality from comics')
except sqlite3.OperationalError:
@ -1510,6 +1530,11 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN ComicImageALTURL TEXT')
try:
c.execute('SELECT NewPublish from comics')
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN NewPublish TEXT')
# -- Issues Table --
try:
@ -1612,6 +1637,10 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE readlist ADD COLUMN ComicID TEXT')
try:
c.execute('SELECT StatusChange from readlist')
except sqlite3.OperationalError:
c.execute('ALTER TABLE readlist ADD COLUMN StatusChange TEXT')
## -- Weekly Table --
@ -1641,7 +1670,7 @@ def dbcheck():
try:
c.execute('SELECT AltNZBName from nzblog')
except sqlite3.OperationalError:
c.execute('ALTER TABLE nzblog ADD COLUMN ALTNZBName TEXT')
c.execute('ALTER TABLE nzblog ADD COLUMN AltNZBName TEXT')
## -- Annuals Table --
@ -1761,6 +1790,10 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE searchresults ADD COLUMN sresults TEXT')
try:
c.execute('SELECT ogcname from searchresults')
except sqlite3.OperationalError:
c.execute('ALTER TABLE searchresults ADD COLUMN ogcname TEXT')
#if it's prior to Wednesday, the issue counts will be inflated by one as the online db's everywhere
#prepare for the next 'new' release of a series. It's caught in updater.py, so let's just store the
@ -1854,65 +1887,65 @@ def csv_load():
conn.commit()
c.close()
#def halt():
# global __INITIALIZED__, dbUpdateScheduler, seachScheduler, RSSScheduler, WeeklyScheduler, \
# VersionScheduler, FolderMonitorScheduler, started
def halt():
global __INITIALIZED__, dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, \
VersionScheduler, FolderMonitorScheduler, started
# with INIT_LOCK:
with INIT_LOCK:
# if __INITIALIZED__:
if __INITIALIZED__:
# logger.info(u"Aborting all threads")
logger.info(u"Aborting all threads")
# abort all the threads
# dbUpdateScheduler.abort = True
# logger.info(u"Waiting for the DB UPDATE thread to exit")
# try:
# dbUpdateScheduler.thread.join(10)
# except:
# pass
dbUpdateScheduler.abort = True
logger.info(u"Waiting for the DB UPDATE thread to exit")
try:
dbUpdateScheduler.thread.join(10)
except:
pass
# searchScheduler.abort = True
# logger.info(u"Waiting for the SEARCH thread to exit")
# try:
# searchScheduler.thread.join(10)
# except:
# pass
searchScheduler.abort = True
logger.info(u"Waiting for the SEARCH thread to exit")
try:
searchScheduler.thread.join(10)
except:
pass
# RSSScheduler.abort = True
# logger.info(u"Waiting for the RSS CHECK thread to exit")
# try:
# RSSScheduler.thread.join(10)
# except:
# pass
RSSScheduler.abort = True
logger.info(u"Waiting for the RSS CHECK thread to exit")
try:
RSSScheduler.thread.join(10)
except:
pass
# WeeklyScheduler.abort = True
# logger.info(u"Waiting for the WEEKLY CHECK thread to exit")
# try:
# WeeklyScheduler.thread.join(10)
# except:
# pass
WeeklyScheduler.abort = True
logger.info(u"Waiting for the WEEKLY CHECK thread to exit")
try:
WeeklyScheduler.thread.join(10)
except:
pass
# VersionScheduler.abort = True
# logger.info(u"Waiting for the VERSION CHECK thread to exit")
# try:
# VersionScheduler.thread.join(10)
# except:
# pass
VersionScheduler.abort = True
logger.info(u"Waiting for the VERSION CHECK thread to exit")
try:
VersionScheduler.thread.join(10)
except:
pass
# FolderMonitorScheduler.abort = True
# logger.info(u"Waiting for the FOLDER MONITOR thread to exit")
# try:
# FolderMonitorScheduler.thread.join(10)
# except:
# pass
FolderMonitorScheduler.abort = True
logger.info(u"Waiting for the FOLDER MONITOR thread to exit")
try:
FolderMonitorScheduler.thread.join(10)
except:
pass
# __INITIALIZED__ = False
__INITIALIZED__ = False
def shutdown(restart=False, update=False):
#halt()
halt()
cherrypy.engine.exit()

View File

@ -184,6 +184,7 @@ def run (dirName, nzbName=None, issueid=None, comversion=None, manual=None, file
base = os.path.splitext( f )[0]
shutil.move( f, base + ".cbz" )
logger.fdebug(module + ' {0}: renaming {1} to be a cbz'.format( scriptname, os.path.basename( f ) ))
filename = base + '.cbz'
if file_extension_fixing:
if filename.endswith('.cbz'):

View File

@ -16,6 +16,7 @@
import sys
import os
import re
import time
import logger
import string
import urllib
@ -134,8 +135,13 @@ def getComic(comicid,type,issueid=None,arc=None,arcid=None,arclist=None,comicidl
dom = pulldetails(arcid,'comicyears',offset=0,comicidlist=comicidlist)
return GetSeriesYears(dom)
def GetComicInfo(comicid,dom):
def GetComicInfo(comicid,dom,safechk=None):
if safechk is None:
#safety check when querying ComicVine. If it times out, increment the counter on each retry attempt, up to 5 tries, then abort.
safechk = 1
elif safechk > 4:
logger.error('Unable to add / refresh the series due to an inability to retrieve data from ComicVine. You might want to try a bit later and/or make sure ComicVine is up.')
return
#comicvine isn't as up-to-date with issue counts..
#so this can get really buggered, really fast.
tracks = dom.getElementsByTagName('issue')
@ -189,8 +195,16 @@ def GetComicInfo(comicid,dom):
comic['ComicYear'] = dom.getElementsByTagName('start_year')[0].firstChild.wholeText
except:
comic['ComicYear'] = '0000'
comic['ComicURL'] = dom.getElementsByTagName('site_detail_url')[trackcnt].firstChild.wholeText
try:
comic['ComicURL'] = dom.getElementsByTagName('site_detail_url')[trackcnt].firstChild.wholeText
except:
#this should never be an exception. If it is, it's probably due to CV timing out - so let's sleep for a bit then retry.
logger.warn('Unable to retrieve URL for volume. This is usually due to a timeout to CV, or going over the API limit. Retrying in 10s.')
time.sleep(10)
safechk +=1
return GetComicInfo(comicid, dom, safechk) #return the retried result so the caller doesn't continue with missing data
desdeck = 0
#the description field actually holds the Volume# - so let's grab it
try:
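The safechk changes above implement a bounded retry: sleep, bump the counter, re-enter GetComicInfo, and give up after five attempts. The same idea as a standalone loop (a minimal sketch, not Mylar code):

import time

def fetch_with_retry(fetch, retries=5, wait=10):
    #bounded retry, equivalent to the recursive safechk counter above
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(wait)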

View File

@ -59,9 +59,9 @@ class WriteOnly:
if sqlResult:
mylarQueue.task_done()
return sqlResult
#else:
# time.sleep(1)
# logger.fdebug('[' + str(thisthread) + '] sleeping until active.')
else:
time.sleep(1)
#logger.fdebug('[' + str(thisthread) + '] sleeping until active.')
class DBConnection:
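The WriteOnly loop above is the single-writer half of the db-lock strategy: every other thread queues its write and only one thread ever holds the connection for writing. A minimal sketch of that pattern (names are illustrative, not Mylar's actual API):

import Queue
import sqlite3
import threading

writeQueue = Queue.Queue()

def write_only(db_path):
    #the only thread allowed to write; serializing here avoids
    #sqlite 'database is locked' errors from concurrent writers
    conn = sqlite3.connect(db_path)
    while True:
        query, args = writeQueue.get()   #blocks until a write is queued
        conn.execute(query, args)
        conn.commit()
        writeQueue.task_done()

threading.Thread(target=write_only, args=('mylar.db',)).start()
writeQueue.put(("UPDATE issues SET Status=? WHERE IssueID=?", ("Downloaded", "12345")))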
@ -159,6 +159,7 @@ class DBConnection:
def upsert(self, tableName, valueDict, keyDict):
thisthread = threading.currentThread().name
changesBefore = self.connection.total_changes
@ -174,10 +175,10 @@ class DBConnection:
self.action(query, valueDict.values() + keyDict.values())
# else:
# logger.info('[' + str(thisthread) + '] db is currently locked for writing. Queuing this action until it is free')
# logger.info('Table: ' + str(tableName) + ' Values: ' + str(valueDict) + ' Keys: ' + str(keyDict))
# self.queue.put( (tableName, valueDict, keyDict) )
# #assuming this is coming in from a separate thread, so loop it until it's free to write.
# #self.queuesend()
#else:
# logger.info('[' + str(thisthread) + '] db is currently locked for writing. Queuing this action until it is free')
# logger.info('Table: ' + str(tableName) + ' Values: ' + str(valueDict) + ' Keys: ' + str(keyDict))
# self.queue.put( (tableName, valueDict, keyDict) )
# #assuming this is coming in from a seperate thread, so loop it until it's free to write.
# #self.queuesend()

View File

@ -621,7 +621,7 @@ def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=Non
elif ('-' in watchcomic or '.' in watchcomic) and j < len(watchcomic):
logger.fdebug('[FILECHECKER] - appears in series title, ignoring.')
else:
digitchk = subname[j-1:]
digitchk = re.sub('#','', subname[j-1:]).strip()
logger.fdebug('[FILECHECKER] special character appears outside of title - ignoring @ position: ' + str(charpos[i]))
nonocount-=1
@ -651,7 +651,7 @@ def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=Non
logger.fdebug('[FILECHECKER] after title removed from FILENAME [' + str(item[jtd_len:]) + ']')
logger.fdebug('[FILECHECKER] creating just the digits using SUBNAME, pruning first [' + str(jtd_len) + '] chars from [' + subname + ']')
justthedigits_1 = subname[jtd_len:].strip()
justthedigits_1 = re.sub('#','', subname[jtd_len:]).strip()
if enable_annual:
logger.fdebug('enable annual is on')
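The two re.sub changes above strip '#' out of the digit chunk before issue parsing, which is what fixes negative issues preceded by a '#' in the filename. A quick illustration with a hypothetical filename:

import re

subname = 'Wolverine #-1 (1997)'
#old: subname[10:] kept the '#', so '-1' never parsed as an issue number
justthedigits = re.sub('#', '', subname[10:]).strip()   #'-1 (1997)'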

View File

@ -1,12 +1,20 @@
#!/usr/local/bin/python
#import paramiko
import paramiko
import os
import sys
import time
import mylar
from mylar import logger
class FastTransport(paramiko.Transport):
def __init__(self, sock):
super(FastTransport, self).__init__(sock)
self.window_size = 2147483647
self.packetizer.REKEY_BYTES = pow(2, 40)
self.packetizer.REKEY_PACKETS = pow(2, 40)
def putfile(localpath,file): #localpath=full path to .torrent (including filename), file=filename of torrent
try:
@ -68,6 +76,131 @@ def putfile(localpath,file): #localpath=full path to .torrent (including file
logger.fdebug('Upload complete to seedbox.')
return "pass"
if __name__ == '__main__':
putfile(sys.argv[1])
def sendfiles(filelist):
fhost = mylar.TAB_HOST.find(':')
host = mylar.TAB_HOST[:fhost]
port = int(mylar.TAB_HOST[fhost+1:])
logger.fdebug('Destination: ' + host)
logger.fdebug('Using SSH port : ' + str(port))
transport = FastTransport((host, port))
password = mylar.TAB_PASS
username = mylar.TAB_USER
transport.connect(username = username, password = password)
sftp = paramiko.SFTPClient.from_transport(transport)
remotepath = mylar.TAB_DIRECTORY
logger.fdebug('remote path set to ' + remotepath)
if len(filelist) > 0:
logger.info('Initiating send for ' + str(len(filelist)) + ' files...')
logger.info(sftp)
logger.info(filelist)
logger.info(transport)
sendtohome(sftp, remotepath, filelist, transport)
def sendtohome(sftp, remotepath, filelist, transport):
fhost = mylar.TAB_HOST.find(':')
host = mylar.TAB_HOST[:fhost]
port = int(mylar.TAB_HOST[fhost+1:])
successlist = []
for files in filelist:
tempfile = files['filename']
issid = files['issueid']
logger.fdebug('Checking filename for problematic characters: ' + tempfile)
#we need to make the required directory(ies)/subdirectories before the get will work.
if u'\xb4' in files['filename']:
# acute accent - typically meant as an apostrophe
logger.fdebug('detected abnormal character in filename')
tempfile = tempfile.replace(u'\xb4', '\'')
if u'\xbd' in files['filename']:
# 1/2 character
tempfile = tempfile.replace(u'\xbd', 'half')
if u'\uff1a' in files['filename']:
# fullwidth colon
tempfile = tempfile.replace(u'\uff1a', '-')
#now we encode the structure to ascii so we can write directories/filenames without error.
#replacing on tempfile (not a throwaway variable) keeps the substitutions when encoding below.
filename = tempfile.encode('ascii','ignore')
remdir = remotepath
localsend = os.path.join(files['filepath'], files['filename'])
logger.info('Sending : ' + localsend)
remotesend = os.path.join(remdir,filename)
logger.info('To : ' + remotesend)
#filechk is never set above; reconstructed here as a stat-based remote existence check (an assumption - the else branch below stats the remote file the same way)
try:
sftp.stat(remotesend)
filechk = True
except IOError:
filechk = False
if not filechk:
sendcheck = False
count = 1
while sendcheck == False:
try:
sftp.put(localsend, remotesend)
sendcheck = True
except Exception, e:
logger.info('Attempt #' + str(count) + ': ERROR Sending issue to seedbox *** Caught exception: %s: %s' % (e.__class__,e))
logger.info('Forcibly closing connection and attempting to reconnect')
sftp.close()
transport.close()
#reload the transport here cause it locked up previously.
transport = FastTransport((host, port))
transport.connect(username=mylar.TAB_USER, password=mylar.TAB_PASS)
sftp = paramiko.SFTPClient.from_transport(transport)
count+=1
if count > 5:
break
if count > 5:
logger.info('Unable to send - tried 5 times and failed. Aborting entire process.')
break
else:
logger.info('file already exists - checking if complete or not.')
filesize = sftp.stat(remotesend).st_size
if not filesize == files['filesize']:
logger.info('file not complete - attempting to resend')
sendcheck = False
count = 1
while sendcheck == False:
try:
sftp.put(localsend, remotesend)
sendcheck = True
except Exception, e:
logger.info('Attempt #' + str(count) + ': ERROR Sending issue to seedbox *** Caught exception: %s: %s' % (e.__class__,e))
logger.info('Forcibly closing connection and attempting to reconnect')
sftp.close()
transport.close()
#reload the transport here cause it locked up previously.
transport = FastTransport((host, port))
transport.connect(username=mylar.TAB_USER, password=mylar.TAB_PASS)
sftp = paramiko.SFTPClient.from_transport(transport)
count+=1
if count > 5:
break
if count > 5:
logger.info('Unable to send - tried 5 times and failed. Aborting entire process.')
break
else:
logger.info('file 100% complete according to byte comparison.')
logger.info('Marking as being successfully Downloaded to 3rd party device (Queuing to change Read Status to Downloaded)')
successlist.append({"issueid": issid})
sftp.close()
transport.close()
logger.fdebug('Upload of readlist complete.')
return
#if __name__ == '__main__':
# putfile(sys.argv[1])
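sendfiles() is the entry point for the new tablet sync: it splits mylar.TAB_HOST on ':' into host and SSH port, opens the tuned FastTransport, and hands an SFTP session to sendtohome(). A hypothetical invocation (assuming this file is mylar's ftpsshup module; the dict keys match what sendtohome() reads above):

from mylar import ftpsshup

#requires mylar.TAB_HOST ('host:port'), TAB_USER, TAB_PASS and TAB_DIRECTORY to be set
filelist = [{'filename': 'Saga 021 (2014).cbz',
             'filepath': '/comics/Saga',
             'filesize': 31457280,
             'issueid': '410026'}]
ftpsshup.sendfiles(filelist)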

View File

@ -816,6 +816,8 @@ def cleanhtml(raw_html):
def issuedigits(issnum):
import db, logger
int_issnum = None
try:
tst = issnum.isdigit()
@ -833,27 +835,36 @@ def issuedigits(issnum):
# logger.error('This is not an issue number - not enough numerics to parse')
# int_issnum = 999999999999999
# return int_issnum
if 'au' in issnum.lower() and issnum[:1].isdigit():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('u')
elif 'ai' in issnum.lower() and issnum[:1].isdigit():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('i')
elif 'inh' in issnum.lower():
remdec = issnum.find('.') #find the decimal position.
if remdec == -1:
try:
if 'au' in issnum.lower() and issnum[:1].isdigit():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('u')
elif 'ai' in issnum.lower() and issnum[:1].isdigit():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('i')
elif 'inh' in issnum.lower() or 'now' in issnum.lower():
remdec = issnum.find('.') #find the decimal position.
if remdec == -1:
#if no decimal, it's all one string
#remove the last 3 characters from the issue # (INH)
int_issnum = (int(issnum[:-3]) * 1000) + ord('i') + ord('n') + ord('h')
else:
int_issnum = (int(issnum[:-4]) * 1000) + ord('i') + ord('n') + ord('h')
elif 'now' in issnum.lower():
if '!' in issnum: issnum = re.sub('\!', '', issnum)
remdec = issnum.find('.') #find the decimal position.
if remdec == -1:
int_issnum = (int(issnum[:-3]) * 1000) + ord('i') + ord('n') + ord('h')
else:
int_issnum = (int(issnum[:-4]) * 1000) + ord('i') + ord('n') + ord('h')
elif 'now' in issnum.lower():
if '!' in issnum: issnum = re.sub('\!', '', issnum)
remdec = issnum.find('.') #find the decimal position.
if remdec == -1:
#if no decimal, it's all one string
#remove the last 3 characters from the issue # (NOW)
int_issnum = (int(issnum[:-3]) * 1000) + ord('n') + ord('o') + ord('w')
else:
int_issnum = (int(issnum[:-4]) * 1000) + ord('n') + ord('o') + ord('w')
int_issnum = (int(issnum[:-3]) * 1000) + ord('n') + ord('o') + ord('w')
else:
int_issnum = (int(issnum[:-4]) * 1000) + ord('n') + ord('o') + ord('w')
except ValueError as e:
logger.error('[' + issnum + '] Unable to properly determine the issue number. Error: %s', e)
return 9999999999
if int_issnum is not None:
return int_issnum
elif u'\xbd' in issnum:
int_issnum = .5 * 1000
elif u'\xbc' in issnum:
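The suffix branches above all use the same encoding: the numeric part times 1000 plus the ordinals of the (lowercased) suffix letters, so suffixed issues sort after their plain counterparts. Worked examples matching the arithmetic above:

def encode_suffixed(num, suffix):
    #mirrors the branches above: issue number * 1000 + suffix ordinals
    return (num * 1000) + sum(ord(c) for c in suffix.lower())

assert encode_suffixed(15, 'AU') == 15214    #15000 + ord('a') + ord('u')
assert encode_suffixed(1, 'INH') == 1319     #1000 + 105 + 110 + 104
assert encode_suffixed(1, 'NOW') == 1340     #1000 + 110 + 111 + 119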
@ -953,11 +964,11 @@ def checkthepub(ComicID):
return mylar.BIGGIE_PUB
else:
for publish in publishers:
if publish in str(pubchk['ComicPublisher']).lower():
logger.fdebug('Biggie publisher detected - ' + str(pubchk['ComicPublisher']))
if publish in pubchk['ComicPublisher'].lower():
logger.fdebug('Biggie publisher detected - ' + pubchk['ComicPublisher'])
return mylar.BIGGIE_PUB
logger.fdebug('Indie publisher detected - ' + str(pubchk['ComicPublisher']))
logger.fdebug('Indie publisher detected - ' + pubchk['ComicPublisher'])
return mylar.INDIE_PUB
def annual_update():
@ -1113,7 +1124,6 @@ def havetotals(refreshit=None):
comics = []
if refreshit is None:
if mylar.DBCHOICE == 'postgresql':
import db_postgresql as db
@ -1170,7 +1180,7 @@ def havetotals(refreshit=None):
try:
percent = (haveissues*100.0)/totalissues
if percent > 100:
percent = 100
percent = 101
except (ZeroDivisionError, TypeError):
percent = 0
totalissuess = '?'
@ -1187,10 +1197,13 @@ def havetotals(refreshit=None):
c_date = datetime.date(int(latestdate[:4]),int(latestdate[5:7]),1)
n_date = datetime.date.today()
recentchk = (n_date - c_date).days
if recentchk < 55:
if comic['NewPublish']:
recentstatus = 'Continuing'
else:
recentstatus = 'Ended'
if recentchk < 55:
recentstatus = 'Continuing'
else:
recentstatus = 'Ended'
else:
recentstatus = 'Ended'
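Condensed, the branch above makes the Continuing/Ended call like this (a sketch of the same logic; latestdate is a 'YYYY-MM-...' string):

import datetime

def recent_status(newpublish, latestdate):
    #NewPublish-flagged series are always Continuing; otherwise anything
    #published within ~55 days still counts as Continuing
    if newpublish:
        return 'Continuing'
    c_date = datetime.date(int(latestdate[:4]), int(latestdate[5:7]), 1)
    if (datetime.date.today() - c_date).days < 55:
        return 'Continuing'
    return 'Ended'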
@ -1202,7 +1215,7 @@ def havetotals(refreshit=None):
"ComicImage": comic['ComicImage'],
"LatestIssue": comic['LatestIssue'],
"LatestDate": comic['LatestDate'],
"ComicPublished": comic['ComicPublished'],
"ComicPublished": re.sub('(N)','',comic['ComicPublished']).strip(),
"Status": comic['Status'],
"recentstatus": recentstatus,
"percent": percent,
@ -1214,7 +1227,8 @@ def havetotals(refreshit=None):
def cvapi_check(web=None):
import logger
if web is None: logger.fdebug('[ComicVine API] ComicVine API Check Running...')
#if web is None:
# logger.fdebug('[ComicVine API] ComicVine API Check Running...')
if mylar.CVAPI_TIME is None or mylar.CVAPI_TIME == '':
c_date = now()
c_obj_date = datetime.datetime.strptime(c_date,"%Y-%m-%d %H:%M:%S")
@ -1224,14 +1238,14 @@ def cvapi_check(web=None):
c_obj_date = datetime.datetime.strptime(mylar.CVAPI_TIME,"%Y-%m-%d %H:%M:%S")
else:
c_obj_date = mylar.CVAPI_TIME
if web is None: logger.fdebug('[ComicVine API] API Start Monitoring Time (~15mins): ' + str(mylar.CVAPI_TIME))
#if web is None: logger.fdebug('[ComicVine API] API Start Monitoring Time (~15mins): ' + str(mylar.CVAPI_TIME))
now_date = now()
n_date = datetime.datetime.strptime(now_date,"%Y-%m-%d %H:%M:%S")
if web is None: logger.fdebug('[ComicVine API] Time now: ' + str(n_date))
#if web is None: logger.fdebug('[ComicVine API] Time now: ' + str(n_date))
absdiff = abs(n_date - c_obj_date)
mins = round(((absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 60.0),2)
if mins < 15:
if web is None: logger.info('[ComicVine API] Comicvine API count now at : ' + str(mylar.CVAPI_COUNT) + ' / ' + str(mylar.CVAPI_MAX) + ' in ' + str(mins) + ' minutes.')
#if web is None: logger.info('[ComicVine API] Comicvine API count now at : ' + str(mylar.CVAPI_COUNT) + ' / ' + str(mylar.CVAPI_MAX) + ' in ' + str(mins) + ' minutes.')
if mylar.CVAPI_COUNT > mylar.CVAPI_MAX:
cvleft = 15 - mins
if web is None: logger.warn('[ComicVine API] You have already hit your API limit (' + str(mylar.CVAPI_MAX) + ') with ' + str(cvleft) + ' minutes left in the current window. Best be slowing down, cowboy.')
@ -1239,7 +1253,7 @@ def cvapi_check(web=None):
mylar.CVAPI_COUNT = 0
c_date = now()
mylar.CVAPI_TIME = datetime.datetime.strptime(c_date,"%Y-%m-%d %H:%M:%S")
if web is None: logger.info('[ComicVine API] 15 minute API interval resetting [' + str(mylar.CVAPI_TIME) + ']. Resetting API count to : ' + str(mylar.CVAPI_COUNT))
#if web is None: logger.info('[ComicVine API] 15 minute API interval resetting [' + str(mylar.CVAPI_TIME) + ']. Resetting API count to : ' + str(mylar.CVAPI_COUNT))
if web is None:
return
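cvapi_check above is a fixed 15-minute window counter over mylar.CVAPI_COUNT / CVAPI_MAX / CVAPI_TIME; the change here just silences its routine logging. The window logic, reduced to a sketch (the 200-hit cap is illustrative, not Mylar's configured value):

import datetime

def api_window_check(count, window_start, max_hits=200):
    #fixed-window counter: reset after 15 minutes, otherwise flag overuse
    mins = (datetime.datetime.now() - window_start).total_seconds() / 60.0
    if mins >= 15:
        return 0, datetime.datetime.now(), False
    return count, window_start, count > max_hits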
@ -1548,9 +1562,10 @@ def listLibrary():
for row in list:
library[row['ComicID']] = row['ComicID']
# Add the annuals
list = myDB.select("SELECT ReleaseComicId,ComicID FROM Annuals")
for row in list:
library[row['ReleaseComicId']] = row['ComicID']
if mylar.ANNUALS_ON:
list = myDB.select("SELECT ReleaseComicId,ComicID FROM Annuals")
for row in list:
library[row['ReleaseComicId']] = row['ComicID']
return library
def incr_snatched(ComicID):
@ -1609,40 +1624,45 @@ def duplicate_filecheck(filename, ComicID=None, IssueID=None, StoryArcID=None):
#this will be eventually user-controlled via the GUI once the options are enabled.
if int(dupsize) == 0:
logger.info('[DUPECHECK] Existing filesize is 0 as I cannot locate the original entry. Will assume it is Archived already.')
rtnval = "dupe"
else:
logger.fdebug('[DUPECHECK] Based on duplication preferences I will retain based on : ' + mylar.DUPECONSTRAINT)
if 'cbr' in mylar.DUPECONSTRAINT or 'cbz' in mylar.DUPECONSTRAINT:
if 'cbr' in mylar.DUPECONSTRAINT:
#this has to be configured in config - either retain cbr or cbz.
if dupchk['Location'].endswith('.cbz'):
#keep dupechk['Location']
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in file : ' + dupchk['Location'])
rtnval = "dupe"
else:
#keep filename
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + filename)
rtnval = "write"
logger.info('[DUPECHECK] Existing filesize is 0 as I cannot locate the original entry.')
if dupchk['Status'] == 'Archived':
logger.info('[DUPECHECK] Assuming issue is Archived.')
rtnval = "dupe"
return rtnval
else:
logger.info('[DUPECHECK] Assuming 0-byte file - this one is gonna get hammered.')
elif 'cbz' in mylar.DUPECONSTRAINT:
if dupchk['Location'].endswith('.cbr'):
#keep dupchk['Location']
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval = "dupe"
else:
#keep filename
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
rtnval = "write"
if mylar.DUPECONSTRAINT == 'filesize':
if filesz <= dupsize:
logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
logger.fdebug('[DUPECHECK] Based on duplication preferences I will retain based on : ' + mylar.DUPECONSTRAINT)
if 'cbr' in mylar.DUPECONSTRAINT or 'cbz' in mylar.DUPECONSTRAINT:
if 'cbr' in mylar.DUPECONSTRAINT:
#this has to be configured in config - either retain cbr or cbz.
if dupchk['Location'].endswith('.cbz'):
#keep dupechk['Location']
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in file : ' + dupchk['Location'])
rtnval = "dupe"
else:
logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
#keep filename
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + filename)
rtnval = "write"
elif 'cbz' in mylar.DUPECONSTRAINT:
if dupchk['Location'].endswith('.cbr'):
#keep dupchk['Location']
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval = "dupe"
else:
#keep filename
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
rtnval = "write"
if mylar.DUPECONSTRAINT == 'filesize':
if filesz <= int(dupsize) and int(dupsize) != 0:
logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval = "dupe"
else:
logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
rtnval = "write"
else:
logger.info('[DUPECHECK] Duplication detection returned no hits. This is not a duplicate of anything that I have scanned in as of yet.')
rtnval = "write"
@ -1683,7 +1703,6 @@ def create_https_certificates(ssl_cert, ssl_key):
return True
from threading import Thread
class ThreadWithReturnValue(Thread):

View File

@ -535,6 +535,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
lastpubdate = issuedata['LastPubDate']
series_status = issuedata['SeriesStatus']
#move the files...if imported is not empty & not futurecheck (meaning it's not from the mass importer.)
logger.info('imported is : ' + str(imported))
if imported is None or imported == 'None' or imported == 'futurecheck':
pass
else:
@ -1357,7 +1358,14 @@ def updateissuedata(comicid, comicname=None, issued=None, comicIssues=None, call
else:
lastpubdate = str(ltmonth) + ' ' + str(ltyear)
publishfigure = str(stmonth) + ' ' + str(styear) + ' - ' + str(lastpubdate)
if stmonth == '?' and ('?' in lastpubdate and '0000' in lastpubdate):
lastpubdate = 'Present'
newpublish = True
publishfigure = str(styear) + ' - ' + str(lastpubdate)
else:
newpublish = False
publishfigure = str(stmonth) + ' ' + str(styear) + ' - ' + str(lastpubdate)
if stmonth == '?' and styear == '?' and lastpubdate =='0000' and comicIssues == '0':
logger.info('No available issue data - I believe this is a NEW series.')
latestdate = latestissueinfo[0]['latestdate']
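The newpublish branch above is what turns a brand-new series' '?' publication date into an open-ended range so pull-list comparisons keep working. For example:

stmonth, styear = '?', '2014'
lastpubdate = '? 0000'
if stmonth == '?' and ('?' in lastpubdate and '0000' in lastpubdate):
    lastpubdate = 'Present'
    newpublish = True
    publishfigure = str(styear) + ' - ' + lastpubdate   #'2014 - Present'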
@ -1370,7 +1378,9 @@ def updateissuedata(comicid, comicname=None, issued=None, comicIssues=None, call
controlValueStat = {"ComicID": comicid}
newValueStat = {"Status": "Active",
"Total": comicIssues,
"ComicPublished": publishfigure,
"NewPublish": newpublish,
"LatestIssue": latestiss,
"LatestDate": latestdate,
"LastUpdated": helpers.now()
@ -1396,6 +1406,7 @@ def updateissuedata(comicid, comicname=None, issued=None, comicIssues=None, call
def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, weeklyissue_check):
annualids = [] #to be used to make sure an ID isn't double-loaded
annload = []
anncnt = 0
nowdate = datetime.datetime.now()
nowtime = nowdate.strftime("%Y%m%d")

View File

@ -6,8 +6,8 @@ import shutil
def movefiles(comicid,comlocation,ogcname,imported=None):
myDB = db.DBConnection()
print ("comlocation is : " + str(comlocation))
print ("original comicname is : " + str(ogcname))
logger.fdebug('comlocation is : ' + str(comlocation))
logger.fdebug('original comicname is : ' + str(ogcname))
impres = myDB.select("SELECT * from importresults WHERE ComicName=?", [ogcname])
if impres is not None:
@ -17,15 +17,15 @@ def movefiles(comicid,comlocation,ogcname,imported=None):
orig_filename = impr['ComicFilename']
orig_iss = impr['impID'].rfind('-')
orig_iss = impr['impID'][orig_iss+1:]
print ("Issue :" + str(orig_iss))
logger.fdebug("Issue :" + str(orig_iss))
#before moving check to see if Rename to Mylar structure is enabled.
if mylar.IMP_RENAME and mylar.FILE_FORMAT != '':
print("Renaming files according to configuration details : " + str(mylar.FILE_FORMAT))
logger.fdebug("Renaming files according to configuration details : " + str(mylar.FILE_FORMAT))
renameit = helpers.rename_param(comicid, impr['ComicName'], orig_iss, orig_filename)
nfilename = renameit['nfilename']
dstimp = os.path.join(comlocation,nfilename)
else:
print("Renaming files not enabled, keeping original filename(s)")
logger.fdebug("Renaming files not enabled, keeping original filename(s)")
dstimp = os.path.join(comlocation,orig_filename)
logger.info("moving " + str(srcimp) + " ... to " + str(dstimp))
@ -33,7 +33,7 @@ def movefiles(comicid,comlocation,ogcname,imported=None):
shutil.move(srcimp, dstimp)
except (OSError, IOError):
logger.error("Failed to move files - check directories and manually re-run.")
print("all files moved.")
logger.fdebug("all files moved.")
#now that it's moved / renamed ... we remove it from importResults or mark as completed.
results = myDB.select("SELECT * from importresults WHERE ComicName=?", [ogcname])

View File

@ -53,7 +53,6 @@ def tehMain(forcerss=None):
logger.info('[RSS] Watchlist Check complete.')
if forcerss:
logger.info('Successfully ran RSS Force Check.')
return
def torrents(pickfeed=None,seriesname=None,issue=None):

View File

@ -115,8 +115,8 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
newznabs+=1
logger.fdebug("newznab name:" + str(newznab_host[0]) + " @ " + str(newznab_host[1]))
logger.fdebug('newznab hosts: ' + str(newznab_hosts))
logger.fdebug('nzbprovider: ' + str(nzbprovider))
#logger.fdebug('newznab hosts: ' + str(newznab_hosts))
logger.fdebug('nzbprovider(s): ' + str(nzbprovider))
# --------
logger.fdebug("there are : " + str(torp) + " torrent providers you have selected.")
torpr = torp - 1
@ -448,23 +448,18 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
logger.fdebug("Sending request to [" + str(nzbprov) + "] RSS for " + str(findcomic) + " : " + str(mod_isssearch))
bb = rsscheck.torrentdbsearch(findcomic,mod_isssearch,ComicID,nzbprov)
rss = "yes"
if bb is not None: logger.fdebug("bb results: " + str(bb))
#if bb is not None: logger.fdebug("bb results: " + str(bb))
else:
cmname = re.sub("%20", " ", str(comsrc))
logger.fdebug("Sending request to RSS for " + str(findcomic) + " : " + str(mod_isssearch) + " (" + str(ComicYear) + ")")
bb = rsscheck.nzbdbsearch(findcomic,mod_isssearch,ComicID,nzbprov,ComicYear,ComicVersion)
rss = "yes"
if bb is not None: logger.fdebug("bb results: " + str(bb))
#if bb is not None: logger.fdebug("bb results: " + str(bb))
#this is the API calls
else:
#CBT is redundant now since only RSS works
# - just getting it ready for when it's not redundant :)
if nzbprov == 'CBT':
# cmname = re.sub("%20", " ", str(comsrc))
# logger.fdebug("Sending request to [CBT] RSS for " + str(cmname) + " : " + str(mod_isssearch))
# bb = rsscheck.torrentdbsearch(cmname,mod_isssearch,ComicID)
# rss = "yes"
# if bb is not None: logger.fdebug("results: " + str(bb))
bb = "no results"
elif nzbprov == 'KAT':
cmname = re.sub("%20", " ", str(comsrc))
@ -718,8 +713,8 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
vers4vol = "no"
versionfound = "no"
if 'cover only' in cleantitle.lower():
logger.fdebug("Ignoring title as Cover Only detected.")
if any( ['cover only' in cleantitle.lower(), 'variant' in cleantitle.lower()] ):
logger.fdebug("Ignoring title as Cover/Variant Only detected.")
cleantitle = "abcdefghijk 0 (1901).cbz"
continue
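The any() form above widens the old 'cover only' test to also reject variant scans; for example:

cleantitle = 'Amazing Comic 001 (2014) (Variant Cover).cbz'
if any(['cover only' in cleantitle.lower(), 'variant' in cleantitle.lower()]):
    pass   #the result is discarded, exactly as the continue above does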
@ -1302,7 +1297,6 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
logger.fdebug("Found matching comic...preparing to send to Updater with IssueID: " + str(IssueID) + " and nzbname: " + str(nzbname) + '[' + alt_nzbname + ']')
if '[RSS]' in tmpprov : tmpprov = re.sub('\[RSS\]','', tmpprov).strip()
updater.nzblog(IssueID, nzbname, ComicName, SARC=SARC, IssueArcID=IssueArcID, id=nzbid, prov=tmpprov, alt_nzbname=alt_nzbname)
# #send out the notifications for the snatch.
notify_snatch(nzbname, sent_to, helpers.filesafe(modcomicname), comyear, IssueNumber, nzbprov)
prov_count == 0
@ -1322,9 +1316,9 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
if not issueid or rsscheck:
if rsscheck:
logger.info(u"Initiating RSS Search Scan at scheduled interval of " + str(mylar.RSS_CHECKINTERVAL) + " minutes.")
logger.info(u"Initiating Search Scan at scheduled interval of " + str(mylar.RSS_CHECKINTERVAL) + " minutes.")
else:
logger.info(u"Initiating NZB Search scan at requested interval of " + str(mylar.SEARCH_INTERVAL) + " minutes.")
logger.info(u"Initiating Search scan at requested interval of " + str(mylar.SEARCH_INTERVAL) + " minutes.")
myDB = db.DBConnection()
@ -1335,7 +1329,10 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
stloop+=1
while (stloop > 0):
if stloop == 1:
issues_1 = myDB.select('SELECT * from issues WHERE Status="Wanted"')
if mylar.FAILED_DOWNLOAD_HANDLING and mylar.FAILED_AUTO:
issues_1 = myDB.select('SELECT * from issues WHERE Status="Wanted" OR Status="Failed"')
else:
issues_1 = myDB.select('SELECT * from issues WHERE Status="Wanted"')
for iss in issues_1:
results.append({'ComicID': iss['ComicID'],
'IssueID': iss['IssueID'],
@ -1345,7 +1342,10 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
'mode': 'want'
})
elif stloop == 2:
issues_2 = myDB.select('SELECT * from annuals WHERE Status="Wanted"')
if mylar.FAILED_DOWNLOAD_HANDLING and mylar.FAILED_AUTO:
issues_2 = myDB.select('SELECT * from annuals WHERE Status="Wanted" OR Status="Failed"')
else:
issues_2 = myDB.select('SELECT * from annuals WHERE Status="Wanted"')
for iss in issues_2:
results.append({'ComicID': iss['ComicID'],
'IssueID': iss['IssueID'],
@ -1554,8 +1554,7 @@ def nzbname_create(provider, title=None, info=None):
logger.fdebug('[SEARCHER] nzbname (space to .): ' + nzbname)
#gotta replace & or escape it
nzbname = re.sub("\&", 'and', nzbname)
nzbname = re.sub('[\,\:\?\']', '', nzbname)
extensions = ('.cbr', '.cbz')
nzbname = re.sub('[\,\:\?\'\(\)]', '', nzbname)
logger.fdebug('[SEARCHER] end nzbname: ' + nzbname)
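The widened character class above now also strips parentheses during nzbname generation. Tracing a hypothetical title through the substitution shown:

import re

nzbname = 'Batman.and.Robin,.Vol..2:.Pearl.(2014)'   #after the earlier space-to-dot and '&' steps
nzbname = re.sub('[\,\:\?\'\(\)]', '', nzbname)
#-> 'Batman.and.Robin.Vol..2.Pearl.2014'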
@ -1596,6 +1595,67 @@ def searcher(nzbprov, nzbname, comicinfo, link, IssueID, ComicID, tmpprov, direc
#if sab priority isn't selected, default to Normal (0)
nzbgetpriority = "0"
if link and (nzbprov != 'KAT' and nzbprov != 'CBT'):
opener = urllib.FancyURLopener({})
opener.addheaders = []
opener.addheader('User-Agent', str(mylar.USER_AGENT))
nzo_info = {}
filen = None
try:
fn, header = opener.retrieve(link)
except:
fn = None
for tup in header.items():
try:
item = tup[0].lower()
value = tup[1].strip()
except:
continue
if item in ('category_id', 'x-dnzb-category'):
category = value
elif item in ('x-dnzb-moreinfo',):
nzo_info['more_info'] = value
elif item in ('x-dnzb-name',):
filen = value
nzo_info['filename'] = filen
elif item == 'x-dnzb-propername':
nzo_info['propername'] = value
elif item == 'x-dnzb-episodename':
nzo_info['episodename'] = value
elif item == 'x-dnzb-year':
nzo_info['year'] = value
elif item == 'x-dnzb-failure':
nzo_info['failure'] = value
elif item == 'x-dnzb-details':
nzo_info['details'] = value
elif item in ('content-length',):
try:
ivalue = int(value)
except:
ivalue = 0
length = ivalue
nzo_info['length'] = length
if not filen:
for item in tup:
if "filename=" in item:
filen = item[item.index("filename=") + 9:].strip(';').strip('"')
logger.fdebug('nzo_info:' + str(nzo_info))
#convert to a generic type of format to help with post-processing.
filen = re.sub('.cbr', '', filen).strip()
filen = re.sub('.cbz', '', filen).strip()
filen = re.sub("\&", 'and', filen)
filen = re.sub('[\,\:\?\'\(\)]', '', filen)
if re.sub('.nzb','', filen.lower()).strip() != re.sub('.nzb','', nzbname.lower()).strip():
alt_nzbname = re.sub('.nzb','', filen).strip()
alt_nzbname = re.sub('[\s+]', ' ', alt_nzbname)
alt_nzbname = re.sub('[\s\_]', '.', alt_nzbname)
logger.info('filen: ' + alt_nzbname + ' -- nzbname: ' + nzbname + ' are not identical. Storing extra value as : ' + alt_nzbname)
#check if nzb is in do not download list
if nzbprov == 'experimental':
#id is located after the /download/ portion
@ -1619,61 +1679,6 @@ def searcher(nzbprov, nzbname, comicinfo, link, IssueID, ComicID, tmpprov, direc
path_parts = url_parts[2].rpartition('/')
nzbid = re.sub('.nzb&amp','', path_parts[2]).strip()
elif nzbprov == 'dognzb':
if link:
opener = urllib.FancyURLopener({})
opener.addheaders = []
opener.addheader('User-Agent', str(mylar.USER_AGENT))
nzo_info = {}
filen = None
try:
fn, header = opener.retrieve(link)
except:
fn = None
for tup in header.items():
try:
item = tup[0].lower()
value = tup[1].strip()
except:
continue
if item in ('category_id', 'x-dnzb-category'):
category = value
elif item in ('x-dnzb-moreinfo',):
nzo_info['more_info'] = value
elif item in ('x-dnzb-name',):
filen = value
if not filen.endswith('.nzb'):
filen += '.nzb'
nzo_info['filename'] = filen
elif item == 'x-dnzb-propername':
nzo_info['propername'] = value
elif item == 'x-dnzb-episodename':
nzo_info['episodename'] = value
elif item == 'x-dnzb-year':
nzo_info['year'] = value
elif item == 'x-dnzb-failure':
nzo_info['failure'] = value
elif item == 'x-dnzb-details':
nzo_info['details'] = value
elif item in ('content-length',):
try:
ivalue = int(value)
except:
ivalue = 0
length = ivalue
nzo_info['length'] = length
if not filen:
for item in tup:
if "filename=" in item:
filen = item[item.index("filename=") + 9:].strip(';').strip('"')
logger.info('nzo_info:' + str(nzo_info))
if re.sub('.nzb','', filen.lower()).strip() != re.sub('.nzb','', nzbname.lower()).strip():
alt_nzbname = re.sub('.nzb','', filen).strip()
logger.info('filen: ' + filen + ' -- nzbname: ' + nzbname + ' are not identical. Storing extra value as : ' + alt_nzbname)
url_parts = urlparse.urlparse(link)
path_parts = url_parts[2].rpartition('/')
nzbid = path_parts[0].rsplit('/',1)[1]

View File

@ -19,12 +19,11 @@ import mylar
from mylar import logger
#import threading
class CurrentSearcher():
def __init__(self):
def __init__(self, **kwargs):
pass
def run(self):
logger.info('[SEARCH] Running Search for Wanted.')
mylar.search.searchforissue()

View File

@ -30,20 +30,36 @@ def dbUpdate(ComicIDList=None, calledfrom=None):
myDB = db.DBConnection()
#print "comicidlist:" + str(ComicIDList)
if ComicIDList is None:
comiclist = myDB.select('SELECT ComicID, ComicName from comics WHERE Status="Active" or Status="Loading" order by LastUpdated ASC')
comiclist = myDB.select('SELECT LatestDate, LastUpdated, ComicID, ComicName from comics WHERE Status="Active" or Status="Loading" order by LatestDate DESC, LastUpdated ASC')
else:
comiclist = ComicIDList
if calledfrom is None:
logger.info('Starting update for %i active comics' % len(comiclist))
cnt = 1
for comic in comiclist:
if ComicIDList is None:
ComicID = comic[0]
ComicID = comic[2]
ComicName = comic[3]
c_date = comic[1]
if c_date is None:
logger.error(ComicName + ' failed during a previous add/refresh as it has no Last Update timestamp. Forcing refresh now.')
else:
c_obj_date = datetime.datetime.strptime(c_date, "%Y-%m-%d %H:%M:%S")
n_date = datetime.datetime.now()
absdiff = abs(n_date - c_obj_date)
hours = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 3600.0
if hours < 5:
logger.info(ComicName + '[' + str(ComicID) + '] Was refreshed less than 5 hours ago. Skipping Refresh at this time.')
cnt +=1
continue
logger.info('[' + str(cnt) + '/' + str(len(comiclist)) + '] Refreshing :' + ComicName + ' [' + str(ComicID) + ']')
else:
ComicID = comic
logger.fdebug('Refreshing :' + str(ComicID))
mismatch = "no"
logger.fdebug('Refreshing comicid: ' + str(ComicID))
if not mylar.CV_ONLY or ComicID[:1] == "G":
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
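The block above enforces the 5-hour rule during a DB update; as a standalone check it boils down to this sketch, mirroring the hours computation above:

import datetime

def needs_refresh(last_updated, min_hours=5):
    #a missing LastUpdated timestamp forces a refresh; anything fresher
    #than min_hours is skipped
    if last_updated is None:
        return True
    then = datetime.datetime.strptime(last_updated, "%Y-%m-%d %H:%M:%S")
    hours = (datetime.datetime.now() - then).total_seconds() / 3600.0
    return hours >= min_hours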
@ -149,7 +165,13 @@ def dbUpdate(ComicIDList=None, calledfrom=None):
newVAL = {"Status": issue['Status']}
if newVAL['Status'] == None:
newVAL = {"Status": "Skipped"}
datechk = re.sub('-','', newissue['ReleaseDate']).strip() # converts date to 20140718 format
if mylar.AUTOWANT_ALL:
newVAL = {"Status": "Wanted"}
elif int(datechk) >= int(nowtime) and mylar.AUTOWANT_UPCOMING:
newVAL = {"Status": "Wanted"}
else:
newVAL = {"Status": "Skipped"}
if issue['IssueDate_Edit']:
logger.info('[#' + str(issue['Issue_Number']) + '] detected manually edited Issue Date.')
@ -172,24 +194,34 @@ def dbUpdate(ComicIDList=None, calledfrom=None):
logger.info("In the process of converting the data to CV, I changed the status of " + str(icount) + " issues.")
issues_new = myDB.select('SELECT * FROM issues WHERE ComicID=? AND Status is NULL', [ComicID])
issuesnew = myDB.select('SELECT * FROM issues WHERE ComicID=? AND Status is NULL', [ComicID])
if mylar.ANNUALS_ON:
issues_new += myDB.select('SELECT * FROM annuals WHERE ComicID=? AND Status is NULL', [ComicID])
annualsnew = myDB.select('SELECT * FROM annuals WHERE ComicID=? AND Status is NULL', [ComicID])
newiss = []
if mylar.AUTOWANT_UPCOMING:
newstatus = "Wanted"
else:
newstatus = "Skipped"
for iss in issues_new:
for iss in issuesnew:
newiss.append({"IssueID": iss['IssueID'],
"Status": newstatus})
"Status": newstatus,
"Annual": False})
for ann in annualsnew:
newiss.append({"IssueID": ann['IssueID'],
"Status": newstatus,
"Annual": True})
if len(newiss) > 0:
for newi in newiss:
ctrlVAL = {"IssueID": newi['IssueID']}
newVAL = {"Status": newi['Status']}
#logger.fdebug('writing issuedata: ' + str(newVAL))
myDB.upsert("Issues", newVAL, ctrlVAL)
if newi['Annual'] == True:
myDB.upsert("Annuals", newVAL, ctrlVAL)
else:
myDB.upsert("Issues", newVAL, ctrlVAL)
logger.info('I have added ' + str(len(newiss)) + ' new issues for this series that were not present before.')
forceRescan(ComicID)
@ -200,6 +232,7 @@ def dbUpdate(ComicIDList=None, calledfrom=None):
else:
mylar.importer.addComictoDB(ComicID,mismatch)
cnt +=1
time.sleep(5) #pause for 5 secs so we don't hammer CV and get a 500 error
logger.info('Update complete')
@ -624,10 +657,12 @@ def foundsearch(ComicID, IssueID, mode=None, down=None, provider=None, SARC=None
else:
controlValue = {"IssueID": IssueID}
newValue = {"Status": "Downloaded"}
if mode == 'want_ann':
myDB.upsert("annuals", newValue, controlValue)
else:
myDB.upsert("issues", newValue, controlValue)
myDB.upsert("issues", newValue, controlValue)
logger.info(module + ' Updating Status (' + downstatus + ') now complete for ' + ComicName + ' issue: ' + str(IssueNum))
logger.info(module + ' Updating Status (' + downstatus + ') now complete for ' + ComicName + ' issue: ' + IssueNum)
return
def forceRescan(ComicID,archive=None,module=None):
@ -686,7 +721,9 @@ def forceRescan(ComicID,archive=None,module=None):
"AnnualComicID": cla['AnnualComicID']})
i+=1
fc['comiclist'] = fcb
iscnt = rescan['Total']
is_cnt = myDB.select("SELECT COUNT(*) FROM issues WHERE ComicID=?", [ComicID])
iscnt = is_cnt[0][0]
#iscnt = rescan['Total']
havefiles = 0
if mylar.ANNUALS_ON:
@ -732,6 +769,7 @@ def forceRescan(ComicID,archive=None,module=None):
while (fn < fccnt):
haveissue = "no"
issuedupe = "no"
annualdupe = "no"
try:
tmpfc = fc['comiclist'][fn]
except IndexError:
@ -851,9 +889,9 @@ def forceRescan(ComicID,archive=None,module=None):
issuedupe_temp = []
tmphavefiles = 0
for x in issuedupechk:
logger.fdebug('Comparing x: ' + x['filename'] + ' to di:' + di['filename'])
#logger.fdebug('Comparing x: ' + x['filename'] + ' to di:' + di['filename'])
if x['filename'] != di['filename']:
logger.fdebug('Matched.')
#logger.fdebug('Matched.')
issuedupe_temp.append(x)
tmphavefiles+=1
issuedupechk = issuedupe_temp
@ -901,43 +939,110 @@ def forceRescan(ComicID,archive=None,module=None):
fcn = len(fcnew)
n = 0
reann = None
while (n < anncnt):
som = 0
while True:
try:
reann = reannuals[n]
except IndexError:
break
int_iss, iss_except = helpers.decimal_issue(reann['Issue_Number'])
int_iss = helpers.issuedigits(reann['Issue_Number'])
logger.fdebug(module + ' int_iss:' + str(int_iss))
issyear = reann['IssueDate'][:4]
old_status = reann['Status']
while (som < fcn):
#counts get buggered up when the issue is the last field in the filename - ie. '50$
#logger.fdebug('checking word - ' + str(fcnew[som]))
if ".cbr" in fcnew[som].lower():
fcnew[som] = fcnew[som].replace(".cbr", "")
elif ".cbz" in fcnew[som].lower():
fcnew[som] = fcnew[som].replace(".cbz", "")
if "(c2c)" in fcnew[som].lower():
fcnew[som] = fcnew[som].replace("(c2c)", " ")
get_issue = shlex.split(str(fcnew[som]))
if fcnew[som] != " ":
fcnew[som] = get_issue[0]
if 'annual' in fcnew[som].lower():
logger.fdebug('Annual detected.')
if fcnew[som+1].isdigit():
ann_iss = fcnew[som+1]
logger.fdebug('Annual # ' + str(ann_iss) + ' detected.')
fcdigit = helpers.issuedigits(ann_iss)
logger.fdebug(module + ' fcdigit:' + str(fcdigit))
logger.fdebug(module + ' int_iss:' + str(int_iss))
if int(fcdigit) == int_iss:
logger.fdebug(module + ' Annual match - issue : ' + str(int_iss))
for d in annualdupechk:
if int(d['fcdigit']) == int(fcdigit) and d['anncomicid'] == ANNComicID:
logger.fdebug(module + ' Duplicate annual issue detected for Annual ComicID of ' + str(ANNComicID) + ' - not counting this: ' + str(tmpfc['ComicFilename']))
issuedupe = "yes"
fcdigit = helpers.issuedigits(re.sub('annual', '', temploc.lower()).strip())
logger.fdebug(module + ' fcdigit:' + str(fcdigit))
if int(fcdigit) == int_iss:
logger.fdebug(module + ' [' + str(ANNComicID) + '] Annual match - issue : ' + str(int_iss))
#baseline these to default to normal scanning
multiplechk = False
annualdupe = "no"
foundchk = False
#check here if multiple identical numbering issues exist for the series
if len(mc_issue) > 1:
for mi in mc_issue:
if mi['Int_IssueNumber'] == int_iss:
if mi['IssueID'] == reann['IssueID']:
logger.fdebug(module + ' IssueID matches to multiple issues : ' + str(mi['IssueID']) + '. Checking dupe.')
logger.fdebug(module + ' miISSUEYEAR: ' + str(mi['IssueYear']) + ' -- issyear : ' + str(issyear))
if any(mi['IssueID'] == d['issueid'] for d in issuedupechk):
logger.fdebug(module + ' IssueID already within dupe. Checking next if available.')
multiplechk = True
break
if (mi['IssueYear'] in tmpfc['ComicFilename']) and (issyear == mi['IssueYear']):
logger.fdebug(module + ' Matched to year within filename : ' + str(issyear))
multiplechk = False
break
else:
logger.fdebug(module + ' Did not match to year within filename : ' + str(issyear))
multiplechk = True
if multiplechk == True:
n+=1
continue
#this will detect duplicate filenames within the same directory.
for di in annualdupechk:
if di['fcdigit'] == fcdigit:
#base off of config - base duplication keep on filesize or file-type (or both)
logger.fdebug('[DUPECHECK] Duplicate issue detected [' + di['filename'] + '] [' + tmpfc['ComicFilename'] + ']')
# mylar.DUPECONSTRAINT = 'filesize' / 'filetype-cbr' / 'filetype-cbz'
logger.fdebug('[DUPECHECK] Based on duplication preferences I will retain based on : ' + mylar.DUPECONSTRAINT)
removedupe = False
if 'cbr' in mylar.DUPECONSTRAINT or 'cbz' in mylar.DUPECONSTRAINT:
if 'cbr' in mylar.DUPECONSTRAINT:
#this has to be configured in config - either retain cbr or cbz.
if tmpfc['ComicFilename'].endswith('.cbz'):
#keep di['filename']
logger.fdebug('[DUPECHECK-CBR PRIORITY] [#' + reann['Issue_Number'] + '] Retaining currently scanned in file : ' + di['filename'])
annualdupe = "yes"
break
else:
#keep tmpfc['ComicFilename']
logger.fdebug('[DUPECHECK-CBR PRIORITY] [#' + reann['Issue_Number'] + '] Retaining newly scanned in file : ' + tmpfc['ComicFilename'])
removedupe = True
elif 'cbz' in mylar.DUPECONSTRAINT:
if tmpfc['ComicFilename'].endswith('.cbr'):
#keep di['filename']
logger.fdebug('[DUPECHECK-CBZ PRIORITY] [#' + reann['Issue_Number'] + '] Retaining currently scanned in filename : ' + di['filename'])
annualdupe = "yes"
break
else:
#keep tmpfc['ComicFilename']
logger.fdebug('[DUPECHECK-CBZ PRIORITY] [#' + reann['Issue_Number'] + '] Retaining newly scanned in filename : ' + tmpfc['ComicFilename'])
removedupe = True
if mylar.DUPECONSTRAINT == 'filesize':
if tmpfc['ComicSize'] <= di['filesize']:
logger.fdebug('[DUPECHECK-FILESIZE PRIORITY] [#' + reann['Issue_Number'] + '] Retaining currently scanned in filename : ' + di['filename'])
annualdupe = "yes"
break
else:
logger.fdebug('[DUPECHECK-FILESIZE PRIORITY] [#' + reann['Issue_Number'] + '] Retaining newly scanned in filename : ' + tmpfc['ComicFilename'])
removedupe = True
if removedupe:
#need to remove the entry from issuedupechk so can add new one.
#tuple(y for y in x if y) for x in a
annualdupe_temp = []
tmphavefiles = 0
for x in annualdupechk:
logger.fdebug('Comparing x: ' + x['filename'] + ' to di:' + di['filename'])
if x['filename'] != di['filename']:
logger.fdebug('Matched.')
annualdupe_temp.append(x)
tmphavefiles+=1
annualdupechk = annualdupe_temp
havefiles = tmphavefiles
logger.fdebug(annualdupechk)
foundchk = False
break
if issuedupe == "no":
if annualdupe == "no":
if foundchk == False:
logger.fdebug(module + ' Matched...annual issue: ' + rescan['ComicName'] + '#' + str(reann['Issue_Number']) + ' --- ' + str(int_iss))
havefiles+=1
haveissue = "yes"
@@ -948,13 +1053,21 @@ def forceRescan(ComicID,archive=None,module=None):
# to avoid duplicate issues which screws up the count...let's store the filename issues then
# compare earlier...
annualdupechk.append({'fcdigit': int(fcdigit),
'anncomicid': ANNComicID})
'anncomicid': ANNComicID,
'filename': tmpfc['ComicFilename'],
'filesize': tmpfc['ComicSize'],
'issueyear': issyear,
'issueid': reann['IssueID']})
break
som+=1
if haveissue == "yes": break
if annualdupe == "yes":
logger.fdebug(module + ' I should break out here because of a dupe.')
break
if haveissue == "yes" or annualdupe == "yes": break
n+=1
if issuedupe == "yes": pass
if issuedupe == "yes" or annualdupe == "yes": pass
else:
#we have the # of comics, now let's update the db.
#even if we couldn't find the physical issue, check the status.
@@ -1140,12 +1253,12 @@ def forceRescan(ComicID,archive=None,module=None):
logger.fdebug(module + ' I have changed the status of ' + str(archivedissues) + ' issues to a status of Archived, as I now cannot locate them in the series directory.')
combined_total = iscnt + anncnt #(rescan['Total'] + anncnt)
#let's update the total count of comics that was found.
controlValueStat = {"ComicID": rescan['ComicID']}
newValueStat = {"Have": havefiles
}
combined_total = rescan['Total'] + anncnt
newValueStat = {"Have": havefiles,
"Total": iscnt}
myDB.upsert("comics", newValueStat, controlValueStat)
logger.info(module + ' I have physically found ' + str(foundcount) + ' issues, ignored ' + str(ignorecount) + ' issues, snatched ' + str(snatchedcount) + ' issues, and accounted for ' + str(totalarc) + ' in an Archived state [ Total Issue Count: ' + str(havefiles) + ' / ' + str(combined_total) + ' ]')
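The retention decision above runs identically for regular issues and annuals; distilled, it is a three-way switch on mylar.DUPECONSTRAINT. A minimal sketch of the rule, assuming the same 'filesize' / 'filetype-cbr' / 'filetype-cbz' values (keep_which and its dict arguments are illustrative, not Mylar API):

def keep_which(constraint, existing, new):
    # existing/new carry 'filename' and 'filesize' for the two candidate files
    if 'cbr' in constraint:
        # cbr priority: retain the already-counted file only if the new scan is a .cbz
        return 'existing' if new['filename'].endswith('.cbz') else 'new'
    elif 'cbz' in constraint:
        # cbz priority: retain the already-counted file only if the new scan is a .cbr
        return 'existing' if new['filename'].endswith('.cbr') else 'new'
    elif constraint == 'filesize':
        # filesize priority: keep the larger file; a tie keeps the already-counted one
        return 'existing' if new['filesize'] <= existing['filesize'] else 'new'
    return 'existing'

print keep_which('filesize', {'filename': 'x 01.cbr', 'filesize': 9}, {'filename': 'x 01.cbz', 'filesize': 12})  # new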

View File (mylar/versioncheck.py)

@@ -184,7 +184,7 @@ def update():
try:
logger.info('Downloading update from: '+tar_download_url)
data = urllib2.urlopen(tar_download_url)
except (IOError, URLError):
except (IOError, urllib2.URLError):
logger.error("Unable to retrieve new version from "+tar_download_url+", can't update")
return
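The small change above matters: URLError is evidently not imported into this module's namespace, so the old bare except clause would itself raise a NameError the first time a download failed. A sketch of the corrected pattern (the URL is illustrative):

import urllib2

tar_download_url = 'http://example.com/mylar/tarball/master'
try:
    data = urllib2.urlopen(tar_download_url, timeout=30)
except (IOError, urllib2.URLError) as e:
    # DNS failures, refused connections and socket errors all land here now
    print 'Unable to retrieve new version from %s: %s' % (tar_download_url, e)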

View File (mylar/webserve.py)

@@ -34,8 +34,7 @@ import shutil
import mylar
from mylar import logger, db, importer, mb, search, filechecker, helpers, updater, parseit, weeklypull, PostProcessor, version, librarysync, moveit, Failed #,rsscheck
#from mylar.helpers import checked, radio, today
from mylar import logger, db, importer, mb, search, filechecker, helpers, updater, parseit, weeklypull, PostProcessor, version, librarysync, moveit, Failed, readinglist #,rsscheck
import lib.simplejson as simplejson
@@ -239,7 +238,7 @@ class WebInterface(object):
"comicpublisher": comicpublisher,
"IssueDate": serinfo[0]['IssueDate'],
"IssueNumber": serinfo[0]['IssueNumber']})
self.future_check_add(comicid, ser)
weeklypull.future_check_add(comicid, ser)
sresults = []
cresults = []
mismatch = "no"
@@ -531,7 +530,7 @@ class WebInterface(object):
#run the Search for Watchlist matches now.
logger.fdebug(module + ' Now searching your watchlist for matches belonging to this story arc.')
self.ArcWatchlist(storyarcid)
raise cherrypy.HTTPRedirect("detailReadlist?StoryArcID=%s&StoryArcName=%s" % (storyarcid, storyarcname))
raise cherrypy.HTTPRedirect("detailStoryArc?StoryArcID=%s&StoryArcName=%s" % (storyarcid, storyarcname))
addStoryArc.exposed = True
@@ -739,15 +738,20 @@ class WebInterface(object):
raise cherrypy.HTTPRedirect("home")
deleteArtist.exposed = True
def wipenzblog(self, ComicID=None):
logger.fdebug("Wiping NZBLOG in it's entirety. You should NOT be downloading while doing this or else you'll lose the log for the download.")
def wipenzblog(self, ComicID=None, IssueID=None):
myDB = db.DBConnection()
if ComicID is None:
logger.fdebug("Wiping NZBLOG in it's entirety. You should NOT be downloading while doing this or else you'll lose the log for the download.")
myDB.action('DROP table nzblog')
logger.fdebug("Deleted nzblog table.")
myDB.action('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT)')
myDB.action('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
logger.fdebug("Re-created nzblog table.")
raise cherrypy.HTTPRedirect("history")
raise cherrypy.HTTPRedirect("history")
if IssueID:
logger.fdebug('Removing all download history for the given IssueID. This should allow post-processing to finish for the given IssueID.')
myDB.action('DELETE FROM nzblog WHERE IssueID=?', [IssueID])
logger.fdebug('Successfully removed all entries in the download log for IssueID: ' + str(IssueID))
raise cherrypy.HTTPRedirect("history")
wipenzblog.exposed = True
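A condensed sketch of the two paths the reworked handler now distinguishes, a full drop-and-recreate versus a per-issue delete (wipe_nzblog is an illustrative name; myDB stands in for Mylar's db.DBConnection):

def wipe_nzblog(myDB, ComicID=None, IssueID=None):
    if ComicID is None:
        # full reset: rebuild the table, now including the new AltNZBName column
        myDB.action('DROP table nzblog')
        myDB.action('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
    elif IssueID:
        # scoped reset: clear one issue's download history so post-processing can re-run
        myDB.action('DELETE FROM nzblog WHERE IssueID=?', [IssueID])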
def refreshSeries(self, ComicID):
@@ -1018,10 +1022,9 @@ class WebInterface(object):
if len(issuesToAdd) > 0:
logger.fdebug("Marking issues: %s as Wanted" % (issuesToAdd))
threading.Thread(target=search.searchIssueIDList, args=[issuesToAdd]).start()
#if IssueID:
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % mi['ComicID'])
#else:
# raise cherrypy.HTTPRedirect("upcoming")
markissues.exposed = True
def retryit(self, **kwargs):
@@ -1588,100 +1591,11 @@ class WebInterface(object):
add2futurewatchlist.exposed = True
def future_check(self):
# this is the function that will check the futureupcoming table
# for series that have yet to be released and have no CV data associated with them
# ie. #1 issues would fall into this as there is no series data to poll against until it's released.
# Mylar will look for #1 issues, and in finding any will do the following:
# - check comicvine to see if the series data has been released and / or issue data
# - will automatically import the series (Add A Series) upon finding match
# - will then proceed to mark the issue as Wanted, then remove from the futureupcoming table
# - will then attempt to download the issue(s) in question.
# future to-do
# specify whether you want to 'add a series (Watch For)' or 'mark an issue as a one-off download'.
# currently the 'add series' option in the futurepulllist will attempt to add a series as per normal.
myDB = db.DBConnection()
chkfuture = myDB.select("SELECT * FROM futureupcoming WHERE IssueNumber='1' OR IssueNumber='0'") #is not NULL")
if chkfuture is None:
logger.info("There are not any series on your future-list that I consider to be a NEW series")
raise cherrypy.HTTPRedirect("home")
cflist = []
#load the values in entry-by-entry so that we can cleanly re-query the db.
for cf in chkfuture:
cflist.append({"ComicName": cf['ComicName'],
"IssueDate": cf['IssueDate'],
"IssueNumber": cf['IssueNumber'], #this should be all #1's as the sql above limits the hits.
"Publisher": cf['Publisher'],
"Status": cf['Status']})
print 'cflist: ' + str(cflist)
#now we load in
logger.info('I will be looking to see if any information has been released for ' + str(len(cflist)) + ' series that are NEW series')
#limit the search to just the 'current year' since if it's anything but a #1, it should have associated data already.
#limittheyear = []
#limittheyear.append(cf['IssueDate'][-4:])
for ser in cflist:
theissdate = ser['IssueDate'][-4:]
if not theissdate.startswith('20'):
theissdate = ser['IssueDate'][:4]
logger.info('looking for new data for ' + ser['ComicName'] + '[#' + str(ser['IssueNumber']) + '] (' + str(theissdate) + ')')
searchresults, explicit = mb.findComic(ser['ComicName'], mode='pullseries', issue=ser['IssueNumber'], limityear=theissdate, explicit='all')
print searchresults
if len(searchresults) > 1:
logger.info('publisher: ' + str(ser['Publisher']))
logger.info('More than one result returned - this may have to be a manual add')
return serve_template(templatename="searchresults.html", title='New Series Results for: "' + ser['ComicName'] + '"',searchresults=searchresults, type='series', imported='futurecheck', ogcname=ser['ComicName'], name=ser['ComicName'], explicit='all', serinfo=ser) #imported=comicstoIMP, ogcname=ogcname)
#call secondary module here to complete the selected add.
else:
for sr in searchresults:
#we should probably load all additional issues for the series on the futureupcoming list that are marked as Wanted and then
#throw them to the importer as a tuple, and once imported the import can run the additional search against them.
#now we scan for additional issues of the same series on the upcoming list and mark them accordingly.
chkwant = myDB.select("SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber != '1' AND Status='Wanted'", [ser['ComicName']])
if chkwant is None:
logger.info('No extra issues to mark at this time for ' + ser['ComicName'])
else:
chkthewanted = []
for chk in chkwant:
chkthewanted.append({"ComicName": chk['ComicName'],
"IssueDate": chk['IssueDate'],
"IssueNumber": chk['IssueNumber'], #this should be all #1's as the sql above limits the hits.
"Publisher": chk['Publisher'],
"Status": chk['Status']})
logger.info('Marking ' + str(len(chkthewanted)) + ' additional issues as Wanted from ' + ser['ComicName'] + ' series as requested')
self.future_check_add(sr['comicid'], ser, chkthewanted, theissdate)
weeklypull.future_check()
raise cherrypy.HTTPRedirect("upcoming")
future_check.exposed = True
def future_check_add(self, comicid, serinfo, chkthewanted=None, theissdate=None):
#In order to not error out when adding series with absolutely NO issue data, we need to 'fakeup' some values
#latestdate = the 'On sale' date from the futurepull-list OR the Shipping date if not available.
#latestiss = the IssueNumber for the first issue (this should always be #1, but might change at some point)
ser = serinfo
if theissdate is None:
theissdate = ser['IssueDate'][-4:]
if not theissdate.startswith('20'):
theissdate = ser['IssueDate'][:4]
latestissueinfo = []
latestissueinfo.append({"latestdate": ser['IssueDate'],
"latestiss": ser['IssueNumber']})
logger.fdebug('sending latestissueinfo from future as : ' + str(latestissueinfo))
chktheadd = importer.addComictoDB(comicid, "no", chkwant=chkthewanted, latestissueinfo=latestissueinfo, calledfrom="futurecheck")
if chktheadd != 'Exists':
logger.info('Successfully imported ' + ser['ComicName'] + ' (' + str(theissdate) + ')')
myDB = db.DBConnection()
myDB.action('DELETE from futureupcoming WHERE ComicName=?', [ser['ComicName']])
logger.info('Removed ' + ser['ComicName'] + ' (' + str(theissdate) + ') from the future upcoming list as it is now added.')
raise cherrypy.HTTPRedirect("home")
future_check_add.exposed = True
def filterpull(self):
myDB = db.DBConnection()
weeklyresults = myDB.select("SELECT * from weekly")
@@ -1779,9 +1693,12 @@ class WebInterface(object):
futureupcoming = sorted(futureupcoming, key=itemgetter('IssueDate','ComicName','IssueNumber'), reverse=True)
issues = myDB.select("SELECT * from issues WHERE Status='Wanted'")
isscnt = myDB.select("SELECT COUNT(*) FROM issues WHERE Status='Wanted'")
iss_cnt = isscnt[0][0]
issues = myDB.select("SELECT * from issues WHERE Status='Wanted' OR Status='Snatched' OR Status='Failed'")
# isscnt = myDB.select("SELECT COUNT(*) FROM issues WHERE Status='Wanted' OR Status='Snatched'")
isCounts = {}
isCounts[1] = 0 #1 wanted
isCounts[2] = 0 #2 snatched
isCounts[3] = 0 #3 failed
ann_list = []
@@ -1790,13 +1707,31 @@ class WebInterface(object):
if mylar.ANNUALS_ON:
#let's add the annuals to the wanted table so people can see them
#ComicName wasn't present in db initially - added on startup chk now.
annuals_list = myDB.select("SELECT * FROM annuals WHERE Status='Wanted'")
anncnt = myDB.select("SELECT COUNT(*) FROM annuals WHERE Status='Wanted'")
ann_cnt = anncnt[0][0]
annuals_list = myDB.select("SELECT * FROM annuals WHERE Status='Wanted' OR Status='Snatched' OR Status='Failed'")
# anncnt = myDB.select("SELECT COUNT(*) FROM annuals WHERE Status='Wanted' OR Status='Snatched'")
# ann_cnt = anncnt[0][0]
ann_list += annuals_list
issues += annuals_list
wantedcount = iss_cnt + ann_cnt
for curResult in issues:
baseissues = {'wanted':1,'snatched':2,'failed':3}
for seas in baseissues:
if curResult['Status'] is None:
continue
else:
if seas in curResult['Status'].lower():
sconv = baseissues[seas]
isCounts[sconv]+=1
continue
isCounts = {"Wanted" : str(isCounts[1]),
"Snatched" : str(isCounts[2]),
"Failed" : str(isCounts[3])}
print isCounts
iss_cnt = int(isCounts['Wanted'])
wantedcount = iss_cnt# + ann_cnt
#let's straightload the series that have no issue data associated as of yet (ie. new series) from the futurepulllist
future_nodata_upcoming = myDB.select("SELECT * FROM futureupcoming WHERE IssueNumber='1' OR IssueNumber='0'")
@@ -1825,7 +1760,7 @@ class WebInterface(object):
deleteit = myDB.action("DELETE from upcoming WHERE ComicName=? AND IssueNumber=?", [mvup['ComicName'],mvup['IssueNumber']])
return serve_template(templatename="upcoming.html", title="Upcoming", upcoming=upcoming, issues=issues, ann_list=ann_list, futureupcoming=futureupcoming, future_nodata_upcoming=future_nodata_upcoming, futureupcoming_count=futureupcoming_count, upcoming_count=upcoming_count, wantedcount=wantedcount)
return serve_template(templatename="upcoming.html", title="Upcoming", upcoming=upcoming, issues=issues, ann_list=ann_list, futureupcoming=futureupcoming, future_nodata_upcoming=future_nodata_upcoming, futureupcoming_count=futureupcoming_count, upcoming_count=upcoming_count, wantedcount=wantedcount, isCounts=isCounts)
upcoming.exposed = True
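The tally loop above walks every issue row once per status keyword. An equivalent, more compact exact-match count using the stdlib (a sketch only; the sample rows are illustrative):

import collections

rows = [{'Status': 'Wanted'}, {'Status': 'Snatched'}, {'Status': None}, {'Status': 'Wanted'}]
tally = collections.Counter(r['Status'] for r in rows if r['Status'] is not None)
isCounts = {"Wanted": str(tally['Wanted']),
            "Snatched": str(tally['Snatched']),
            "Failed": str(tally['Failed'])}
print isCounts  # Wanted: 2, Snatched: 1, Failed: 0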
def skipped2wanted(self, comicid, fromupdate=None):
@@ -2033,16 +1968,125 @@ class WebInterface(object):
def readlist(self):
myDB = db.DBConnection()
readlist = myDB.select("SELECT * from readinglist WHERE ComicName is not Null group by StoryArcID COLLATE NOCASE")
issuelist = myDB.select("SELECT * from readlist")
return serve_template(templatename="readinglist.html", title="Readlist", readlist=readlist, issuelist=issuelist)
#tuple this
readlist = []
counts = []
c_added = 0 #count of issues that have been added to the readlist and remain in that status ( meaning not sent / read )
c_sent = 0 #count of issues that have been sent to a third-party device ( auto-marked after a successful send completion )
c_read = 0 #count of issues that have been marked as read ( manually marked as read - future: read state from xml )
for iss in issuelist:
if iss['Status'] == 'Added':
statuschange = iss['DateAdded']
c_added +=1
else:
if iss['Status'] == 'Read':
c_read +=1
elif iss['Status'] == 'Downloaded':
c_sent +=1
statuschange = iss['StatusChange']
readlist.append({"ComicID": iss['ComicID'],
"ComicName": iss['ComicName'],
"SeriesYear": iss['SeriesYear'],
"Issue_Number": iss['Issue_Number'],
"IssueDate": iss['IssueDate'],
"Status": iss['Status'],
"StatusChange": statuschange,
"inCacheDIR": iss['inCacheDIR'],
"Location": iss['Location'],
"IssueID": iss['IssueID']})
counts = {"added": c_added,
"read": c_read,
"sent": c_sent,
"total": (c_added + c_read + c_sent)}
return serve_template(templatename="readinglist.html", title="Reading Lists", issuelist=readlist, counts=counts)
readlist.exposed = True
def detailReadlist(self,StoryArcID, StoryArcName):
def storyarc_main(self):
myDB = db.DBConnection()
readlist = myDB.select("SELECT * from readinglist WHERE StoryArcID=? order by ReadingOrder ASC", [StoryArcID])
return serve_template(templatename="readlist.html", title="Detailed Arc list", readlist=readlist, storyarcname=StoryArcName, storyarcid=StoryArcID)
detailReadlist.exposed = True
arclist = []
alist = myDB.select("SELECT * from readinglist WHERE ComicName is not Null group by StoryArcID") #COLLATE NOCASE")
for al in alist:
totalcnt = myDB.select("SELECT * FROM readinglist WHERE StoryArcID=?", [al['StoryArcID']])
maxyear = 0
for la in totalcnt:
if la['IssueYEAR'] != la['SeriesYear'] and la['IssueYEAR'] > la['SeriesYear']:
maxyear = la['IssueYear']
if maxyear == 0:
spanyears = la['SeriesYear']
else:
spanyears = la['SeriesYear'] + ' - ' + str(maxyear)
havecnt = myDB.select("SELECT COUNT(*) as count FROM readinglist WHERE StoryArcID=? AND (Status='Downloaded' or Status='Archived')", [al['StoryArcID']])
havearc = havecnt[0][0]
totalarc = int(al['TotalIssues'])
if not havearc:
havearc = 0
try:
percent = (havearc *100.0)/totalarc
if percent > 100:
percent = 101
except (ZeroDivisionError, TypeError):
percent = 0
totalarc = '?'
arclist.append({"StoryArcID": al['StoryArcID'],
"StoryArc": al['StoryArc'],
"TotalIssues": al['TotalIssues'],
"SeriesYear": al['SeriesYear'],
"Status": al['Status'],
"percent": percent,
"Have": havearc,
"SpanYears": spanyears,
"Total": al['TotalIssues']})
return serve_template(templatename="storyarc.html", title="Story Arcs", arclist=arclist)
storyarc_main.exposed = True
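The have/total math above drives the arc progress bars and has to survive arcs with no usable TotalIssues yet. Isolated, the rule looks like this (arc_percent is an illustrative name):

def arc_percent(havearc, totalarc):
    try:
        percent = (havearc * 100.0) / totalarc
        if percent > 100:
            percent = 101  # pin over-complete arcs just past 100 for display
    except (ZeroDivisionError, TypeError):
        percent = 0        # no usable total; the template falls back to '?'
    return percent

print arc_percent(5, 20)  # 25.0
print arc_percent(3, 0)   # 0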
def detailStoryArc(self,StoryArcID, StoryArcName):
myDB = db.DBConnection()
arcinfo = myDB.select("SELECT * from readinglist WHERE StoryArcID=? order by ReadingOrder ASC", [StoryArcID])
return serve_template(templatename="storyarc_detail.html", title="Detailed Arc list", readlist=arcinfo, storyarcname=StoryArcName, storyarcid=StoryArcID)
detailStoryArc.exposed = True
def markreads(self, action=None, **args):
sendtablet_queue = []
myDB = db.DBConnection()
for IssueID in args:
if IssueID is None or 'issue_table' in IssueID or 'issue_table_length' in IssueID:
continue
else:
mi = myDB.selectone("SELECT * FROM readlist WHERE IssueID=?",[IssueID]).fetchone()
if mi is None:
continue
else:
comicname = mi['ComicName']
if action == 'Downloaded':
logger.fdebug(u"Marking %s %s as %s" % (comicname, mi['Issue_Number'], action))
read = readinglist.Readinglist(IssueID)
read.addtoreadlist()
elif action == 'Read':
logger.fdebug(u"Marking %s %s as %s" % (comicname, mi['Issue_Number'], action))
markasRead(IssueID)
elif action == 'Added':
logger.fdebug(u"Marking %s %s as %s" % (comicname, mi['Issue_Number'], action))
read = readinglist.Readinglist(IssueID)
read.addtoreadlist()
elif action == 'Remove':
logger.fdebug('Deleting %s %s' % (comicname, mi['Issue_Number']))
myDB.action('DELETE from readlist WHERE IssueID=?', [IssueID])
elif action == 'Send':
logger.fdebug('Queuing ' + mi['Location'] + ' to send to tablet.')
sendtablet_queue.append({"filename": mi['Location'],
"issueid": IssueID,
"comicid": mi['ComicID']})
if len(sendtablet_queue) > 0:
read = readinglist.Readinglist(sendtablet_queue)
threading.Thread(target=read.syncreading).start()
markreads.exposed = True
def removefromreadlist(self, IssueID=None, StoryArcID=None, IssueArcID=None, AllRead=None):
myDB = db.DBConnection()
@@ -2064,47 +2108,15 @@ class WebInterface(object):
removefromreadlist.exposed = True
def markasRead(self, IssueID=None, IssueArcID=None):
myDB = db.DBConnection()
if IssueID:
issue = myDB.selectone('SELECT * from readlist WHERE IssueID=?', [IssueID]).fetchone()
if issue['Status'] == 'Read':
NewVal = {"Status": "Added"}
else:
NewVal = {"Status": "Read"}
CtrlVal = {"IssueID": IssueID}
myDB.upsert("readlist", NewVal, CtrlVal)
logger.info("Marked " + str(issue['ComicName']) + " #" + str(issue['Issue_Number']) + " as Read.")
elif IssueArcID:
issue = myDB.selectone('SELECT * from readinglist WHERE IssueArcID=?', [IssueArcID]).fetchone()
if issue['Status'] == 'Read':
NewVal = {"Status": "Added"}
else:
NewVal = {"Status": "Read"}
CtrlVal = {"IssueArcID": IssueArcID}
myDB.upsert("readinglist", NewVal, CtrlVal)
logger.info("Marked " + str(issue['ComicName']) + " #" + str(issue['IssueNumber']) + " as Read.")
read = readinglist.Readinglist(IssueID, IssueArcID)
read.markasRead()
markasRead.exposed = True
def addtoreadlist(self, IssueID):
myDB = db.DBConnection()
readlist = myDB.selectone("SELECT * from issues where IssueID=?", [IssueID]).fetchone()
comicinfo = myDB.selectone("SELECT * from comics where ComicID=?", [readlist['ComicID']]).fetchone()
if readlist is None:
logger.error("Cannot locate IssueID - aborting..")
else:
logger.info("attempting to add..issueid " + readlist['IssueID'])
ctrlval = {"IssueID": IssueID}
newval = {"DateAdded": helpers.today(),
"Status": "added",
"ComicID": readlist['ComicID'],
"Issue_Number": readlist['Issue_Number'],
"IssueDate": readlist['IssueDate'],
"SeriesYear": comicinfo['ComicYear'],
"ComicName": comicinfo['ComicName']}
myDB.upsert("readlist", newval, ctrlval)
logger.info("Added " + str(readlist['ComicName']) + " # " + str(readlist['Issue_Number']) + " to the Reading list.")
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % readlist['ComicID'])
read = readinglist.Readinglist(IssueID)
read.addtoreadlist()
return
#raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % readlist['ComicID'])
addtoreadlist.exposed = True
def importReadlist(self,filename):
@@ -2282,7 +2294,7 @@ class WebInterface(object):
myDB.upsert("readinglist", newVals, newCtrl)
raise cherrypy.HTTPRedirect("detailReadlist?StoryArcID=%s&StoryArcName=%s" % (storyarcid, storyarc))
raise cherrypy.HTTPRedirect("detailStoryArc?StoryArcID=%s&StoryArcName=%s" % (storyarcid, storyarc))
importReadlist.exposed = True
#Story Arc Ascension...welcome to the next level :)
@@ -2505,9 +2517,10 @@ class WebInterface(object):
logger.info(want['ComicName'] + " -- #" + str(want['IssueNumber']))
logger.info(u"Story Arc : " + str(SARC) + " queueing the selected issue...")
logger.info(u"IssueArcID : " + str(IssueArcID))
logger.info(u"ComicID: " + s_comicid + " --- IssueID: " + s_issueid)
logger.info(u"ComicID: " + str(s_comicid) + " --- IssueID: " + str(s_issueid)) # no comicid in issues table.
logger.info(u"StoreDate: " + str(stdate) + " --- IssueDate: " + str(issdate))
foundcom, prov = search.search_init(ComicName=want['ComicName'], IssueNumber=want['IssueNumber'], ComicYear=want['IssueYear'], SeriesYear=want['SeriesYear'], Publisher=want['Publisher'], IssueDate=issdate, StoreDate=stdate, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
#logger.info(u'Publisher: ' + want['Publisher']) <-- no publisher in issues table.
foundcom, prov = search.search_init(ComicName=want['ComicName'], IssueNumber=want['IssueNumber'], ComicYear=want['IssueYear'], SeriesYear=want['SeriesYear'], Publisher=None, IssueDate=issdate, StoreDate=stdate, IssueID=s_issueid, SARC=SARC, IssueArcID=IssueArcID)
else:
# it's a watched series
s_comicid = issuechk['ComicID']
@@ -2550,10 +2563,10 @@ class WebInterface(object):
logger.info('comicname : ' + watchchk['ComicName'])
logger.info('issuenumber : ' + watchchk['IssueNumber'])
logger.info('comicyear : ' + watchchk['SeriesYear'])
logger.info('publisher : ' + watchchk['IssuePublisher'])
#logger.info('publisher : ' + watchchk['IssuePublisher']) <-- no publisher in table
logger.info('SARC : ' + SARC)
logger.info('IssueArcID : ' + IssueArcID)
foundcom, prov = search.search_init(ComicName=watchchk['ComicName'], IssueNumber=watchchk['IssueNumber'], ComicYear=issueyear, SeriesYear=watchchk['SeriesYear'], Publisher=watchchk['IssuePublisher'], IssueDate=None, StoreDate=None, IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=SARC, IssueArcID=IssueArcID, mode=None, rsscheck=None, ComicID=None)
foundcom, prov = search.search_init(ComicName=watchchk['ComicName'], IssueNumber=watchchk['IssueNumber'], ComicYear=issueyear, SeriesYear=watchchk['SeriesYear'], Publisher='None', SARC=SARC, IssueArcID=IssueArcID)
else:
# it's a watched series
s_comicid = issuechk['ComicID']
@@ -2839,210 +2852,215 @@ class WebInterface(object):
deleteimport.exposed = True
def preSearchit(self, ComicName, comiclist=None, mimp=0, displaycomic=None):
importlock = threading.Lock()
myDB = db.DBConnection()
if mimp == 0:
comiclist = []
comiclist.append(ComicName)
for cl in comiclist:
implog = ''
implog = implog + "imp_rename:" + str(mylar.IMP_RENAME) + "\n"
implog = implog + "imp_move:" + str(mylar.IMP_MOVE) + "\n"
ComicName = cl
logger.info('comicname is :' + ComicName)
implog = implog + "comicName: " + str(ComicName) + "\n"
myDB = db.DBConnection()
results = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
if not results:
logger.info('I cannot find any results.')
continue
#if results > 0:
# print ("There are " + str(results[7]) + " issues to import of " + str(ComicName))
#build the valid year ranges and the minimum issue# here to pass to search.
yearRANGE = []
yearTOP = 0
minISSUE = 0
startISSUE = 10000000
starttheyear = None
comicstoIMP = []
movealreadyonlist = "no"
movedata = []
with importlock:
for cl in comiclist:
implog = ''
implog = implog + "imp_rename:" + str(mylar.IMP_RENAME) + "\n"
implog = implog + "imp_move:" + str(mylar.IMP_MOVE) + "\n"
ComicName = cl
logger.info('comicname is :' + ComicName)
implog = implog + "comicName: " + str(ComicName) + "\n"
results = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
if not results:
logger.info('I cannot find any results.')
continue
#if results > 0:
# print ("There are " + str(results[7]) + " issues to import of " + str(ComicName))
#build the valid year ranges and the minimum issue# here to pass to search.
yearRANGE = []
yearTOP = 0
minISSUE = 0
startISSUE = 10000000
starttheyear = None
comicstoIMP = []
for result in results:
if result is None:
break
movealreadyonlist = "no"
movedata = []
if result['WatchMatch']:
watchmatched = result['WatchMatch']
else:
watchmatched = ''
for result in results:
if result is None:
break
if watchmatched.startswith('C'):
implog = implog + "Confirmed. ComicID already provided - initiating auto-magik mode for import.\n"
comicid = result['WatchMatch'][1:]
implog = implog + result['WatchMatch'] + " .to. " + str(comicid) + "\n"
#since it's already in the watchlist, we just need to move the files and re-run the filechecker.
#self.refreshArtist(comicid=comicid,imported='yes')
if mylar.IMP_MOVE:
implog = implog + "Mass import - Move files\n"
comloc = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
movedata_comicid = comicid
movedata_comiclocation = comloc['ComicLocation']
movedata_comicname = ComicName
movealreadyonlist = "yes"
#mylar.moveit.movefiles(comicid,comloc['ComicLocation'],ComicName)
#check for existing files... (this is already called after move files in importer)
#updater.forceRescan(comicid)
if result['WatchMatch']:
watchmatched = result['WatchMatch']
else:
implog = implog + "nothing to do if I'm not moving.\n"
raise cherrypy.HTTPRedirect("importResults")
else:
comicstoIMP.append(result['ComicLocation'].decode(mylar.SYS_ENCODING, 'replace'))
getiss = result['impID'].rfind('-')
getiss = result['impID'][getiss+1:]
implog = implog + "figured issue is : " + str(getiss) + "\n"
if (result['ComicYear'] not in yearRANGE) or (yearRANGE is None):
if result['ComicYear'] <> "0000":
implog = implog + "adding..." + str(result['ComicYear']) + "\n"
yearRANGE.append(str(result['ComicYear']))
yearTOP = str(result['ComicYear'])
getiss_num = helpers.issuedigits(getiss)
miniss_num = helpers.issuedigits(str(minISSUE))
startiss_num = helpers.issuedigits(str(startISSUE))
if int(getiss_num) > int(miniss_num):
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(minISSUE) + "\n"
logger.fdebug('Minimum issue now set to : ' + str(getiss) + ' - it was : ' + str(minISSUE))
minISSUE = str(getiss)
if int(getiss_num) < int(startiss_num):
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(startISSUE) + "\n"
logger.fdebug('Start issue now set to : ' + str(getiss) + ' - it was : ' + str(startISSUE))
startISSUE = str(getiss)
if helpers.issuedigits(startISSUE) == 1000: # if it's an issue #1, get the year and assume that's the start.
startyear = result['ComicYear']
watchmatched = ''
#taking this outside of the transaction in an attempt to stop db locking.
if mylar.IMP_MOVE and movealreadyonlist == "yes":
# for md in movedata:
mylar.moveit.movefiles(movedata_comicid, movedata_comiclocation, movedata_comicname)
updater.forceRescan(comicid)
if watchmatched.startswith('C'):
implog = implog + "Confirmed. ComicID already provided - initiating auto-magik mode for import.\n"
comicid = result['WatchMatch'][1:]
implog = implog + result['WatchMatch'] + " .to. " + str(comicid) + "\n"
#since it's already in the watchlist, we just need to move the files and re-run the filechecker.
#self.refreshArtist(comicid=comicid,imported='yes')
if mylar.IMP_MOVE:
implog = implog + "Mass import - Move files\n"
comloc = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
raise cherrypy.HTTPRedirect("importResults")
#figure out # of issues and the year range allowable
if starttheyear is None:
if yearTOP > 0:
if helpers.int_num(minISSUE) < 1000:
maxyear = int(yearTOP)
movedata_comicid = comicid
movedata_comiclocation = comloc['ComicLocation']
movedata_comicname = ComicName
movealreadyonlist = "yes"
#mylar.moveit.movefiles(comicid,comloc['ComicLocation'],ComicName)
#check for existing files... (this is already called after move files in importer)
#updater.forceRescan(comicid)
else:
implog = implog + "nothing to do if I'm not moving.\n"
raise cherrypy.HTTPRedirect("importResults")
else:
maxyear = int(yearTOP) - (int(minISSUE) / 12)
if str(maxyear) not in yearRANGE:
yearRANGE.append(str(maxyear))
implog = implog + "there is a " + str(maxyear) + " year variation based on the 12 issues/year\n"
comicstoIMP.append(result['ComicLocation'].decode(mylar.SYS_ENCODING, 'replace'))
getiss = result['impID'].rfind('-')
getiss = result['impID'][getiss+1:]
implog = implog + "figured issue is : " + str(getiss) + "\n"
if (result['ComicYear'] not in yearRANGE) or (yearRANGE is None):
if result['ComicYear'] <> "0000":
implog = implog + "adding..." + str(result['ComicYear']) + "\n"
yearRANGE.append(str(result['ComicYear']))
yearTOP = str(result['ComicYear'])
getiss_num = helpers.issuedigits(getiss)
miniss_num = helpers.issuedigits(str(minISSUE))
startiss_num = helpers.issuedigits(str(startISSUE))
if int(getiss_num) > int(miniss_num):
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(minISSUE) + "\n"
logger.fdebug('Minimum issue now set to : ' + str(getiss) + ' - it was : ' + str(minISSUE))
minISSUE = str(getiss)
if int(getiss_num) < int(startiss_num):
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(startISSUE) + "\n"
logger.fdebug('Start issue now set to : ' + str(getiss) + ' - it was : ' + str(startISSUE))
startISSUE = str(getiss)
if helpers.issuedigits(startISSUE) == 1000: # if it's an issue #1, get the year and assume that's the start.
startyear = result['ComicYear']
#taking this outside of the transaction in an attempt to stop db locking.
if mylar.IMP_MOVE and movealreadyonlist == "yes":
# for md in movedata:
mylar.moveit.movefiles(movedata_comicid, movedata_comiclocation, movedata_comicname)
updater.forceRescan(comicid)
raise cherrypy.HTTPRedirect("importResults")
#figure out # of issues and the year range allowable
if starttheyear is None:
if yearTOP > 0:
if helpers.int_num(minISSUE) < 1000:
maxyear = int(yearTOP)
else:
maxyear = int(yearTOP) - (int(minISSUE) / 12)
if str(maxyear) not in yearRANGE:
yearRANGE.append(str(maxyear))
implog = implog + "there is a " + str(maxyear) + " year variation based on the 12 issues/year\n"
else:
implog = implog + "no year detected in any issues...Nulling the value\n"
yearRANGE = None
else:
implog = implog + "no year detected in any issues...Nulling the value\n"
yearRANGE = None
else:
implog = implog + "First issue detected as starting in " + str(starttheyear) + ". Setting start range to that.\n"
yearRANGE.append(starttheyear)
#determine a best-guess to # of issues in series
#this needs to be reworked / refined a LOT more.
#minISSUE = highest issue #, startISSUE = lowest issue #
numissues = helpers.int_num(minISSUE) - helpers.int_num(startISSUE) +1 # add 1 to account for one issue itself.
#normally minissue would work if the issue #'s started at #1.
implog = implog + "the years involved are : " + str(yearRANGE) + "\n"
implog = implog + "highest issue # is : " + str(minISSUE) + "\n"
implog = implog + "lowest issue # is : " + str(startISSUE) + "\n"
implog = implog + "approximate number of issues : " + str(numissues) + "\n"
implog = implog + "issues present on system : " + str(len(comicstoIMP)) + "\n"
implog = implog + "versioning checking on filenames: \n"
cnsplit = ComicName.split()
#cnwords = len(cnsplit)
#cnvers = cnsplit[cnwords-1]
ogcname = ComicName
for splitt in cnsplit:
if 'v' in str(splitt):
implog = implog + "possible versioning detected.\n"
if splitt[1:].isdigit():
implog = implog + splitt + " - assuming versioning. Removing from initial search pattern.\n"
ComicName = re.sub(str(splitt), '', ComicName)
implog = implog + "new comicname is : " + ComicName + "\n"
# we need to pass the original comicname here into the entire importer module
# so that we can reference the correct issues later.
implog = implog + "First issue detected as starting in " + str(starttheyear) + ". Setting start range to that.\n"
yearRANGE.append(starttheyear)
#determine a best-guess to # of issues in series
#this needs to be reworked / refined a LOT more.
#minISSUE = highest issue #, startISSUE = lowest issue #
numissues = helpers.int_num(minISSUE) - helpers.int_num(startISSUE) +1 # add 1 to account for one issue itself.
#normally minissue would work if the issue #'s started at #1.
implog = implog + "the years involved are : " + str(yearRANGE) + "\n"
implog = implog + "highest issue # is : " + str(minISSUE) + "\n"
implog = implog + "lowest issue # is : " + str(startISSUE) + "\n"
implog = implog + "approximate number of issues : " + str(numissues) + "\n"
implog = implog + "issues present on system : " + str(len(comicstoIMP)) + "\n"
implog = implog + "versioning checking on filenames: \n"
cnsplit = ComicName.split()
#cnwords = len(cnsplit)
#cnvers = cnsplit[cnwords-1]
ogcname = ComicName
for splitt in cnsplit:
if 'v' in str(splitt):
implog = implog + "possible versioning detected.\n"
if splitt[1:].isdigit():
implog = implog + splitt + " - assuming versioning. Removing from initial search pattern.\n"
ComicName = re.sub(str(splitt), '', ComicName)
implog = implog + "new comicname is : " + ComicName + "\n"
# we need to pass the original comicname here into the entire importer module
# so that we can reference the correct issues later.
mode='series'
displaycomic = helpers.filesafe(ComicName)
logger.fdebug('displaycomic : ' + displaycomic)
logger.fdebug('comicname : ' + ComicName)
if yearRANGE is None:
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, explicit='all') #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
else:
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
type='comic'
mode='series'
displaycomic = helpers.filesafe(ComicName)
logger.fdebug('displaycomic : ' + displaycomic)
logger.fdebug('comicname : ' + ComicName)
if yearRANGE is None:
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, explicit='all') #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
else:
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
type='comic'
if len(sresults) == 1:
sr = sresults[0]
implog = implog + "only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']) + "\n"
logger.fdebug("only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']))
resultset = 1
# #need to move the files here.
elif len(sresults) == 0 or len(sresults) is None:
implog = implog + "no results, removing the year from the agenda and re-querying.\n"
logger.fdebug("no results, removing the year from the agenda and re-querying.")
sresults, explicit = mb.findComic(ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
if len(sresults) == 1:
sr = sresults[0]
implog = implog + "only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']) + "\n"
logger.fdebug("only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']))
resultset = 1
else:
# #need to move the files here.
elif len(sresults) == 0 or len(sresults) is None:
implog = implog + "no results, removing the year from the agenda and re-querying.\n"
logger.fdebug("no results, removing the year from the agenda and re-querying.")
sresults, explicit = mb.findComic(ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
if len(sresults) == 1:
sr = sresults[0]
implog = implog + "only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']) + "\n"
logger.fdebug("only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']))
resultset = 1
else:
resultset = 0
else:
implog = implog + "returning results to screen - more than one possibility.\n"
logger.fdebug("Returning results to Select option - more than one possibility, manual intervention required.")
resultset = 0
else:
implog = implog + "returning results to screen - more than one possibility.\n"
logger.fdebug("Returning results to Select option - more than one possibility, manual intervention required.")
resultset = 0
#generate random Search Results ID to allow for easier access for viewing logs / search results.
import random
SRID = str(random.randint(100000,999999))
#generate random Search Results ID to allow for easier access for viewing logs / search results.
import random
SRID = str(random.randint(100000,999999))
#write implog to db here.
ctrlVal = {"ComicName": ogcname} #{"ComicName": ComicName}
newVal = {"implog": implog,
"SRID": SRID}
myDB.upsert("importresults", newVal, ctrlVal)
#write implog to db here.
ctrlVal = {"ComicName": ogcname} #{"ComicName": ComicName}
newVal = {"implog": implog,
"SRID": SRID}
myDB.upsert("importresults", newVal, ctrlVal)
# store the search results for series that returned more than one result for user to select later / when they want.
# should probably assign some random numeric for an id to reference back at some point.
for sr in sresults:
cVal = {"SRID": SRID,
"comicid": sr['comicid']}
#should store ogcname in here somewhere to account for naming conversions above.
nVal = {"Series": ComicName,
"results": len(sresults),
"publisher": sr['publisher'],
"haveit": sr['haveit'],
"name": sr['name'],
"deck": sr['deck'],
"url": sr['url'],
"description": sr['description'],
"comicimage": sr['comicimage'],
"issues": sr['issues'],
"comicyear": sr['comicyear']}
myDB.upsert("searchresults", nVal, cVal)
# store the search results for series that returned more than one result for user to select later / when they want.
# should probably assign some random numeric for an id to reference back at some point.
for sr in sresults:
cVal = {"SRID": SRID,
"comicid": sr['comicid']}
#should store ogcname in here somewhere to account for naming conversions above.
nVal = {"Series": ComicName,
"results": len(sresults),
"publisher": sr['publisher'],
"haveit": sr['haveit'],
"name": sr['name'],
"deck": sr['deck'],
"url": sr['url'],
"description": sr['description'],
"comicimage": sr['comicimage'],
"issues": sr['issues'],
"ogcname": ogcname,
"comicyear": sr['comicyear']}
myDB.upsert("searchresults", nVal, cVal)
if resultset == 1:
self.addbyid(sr['comicid'], calledby=True, imported='yes', ogcname=ogcname)
#implog = implog + "ogcname -- " + str(ogcname) + "\n"
#cresults = self.addComic(comicid=sr['comicid'],comicname=sr['name'],comicyear=sr['comicyear'],comicpublisher=sr['publisher'],comicimage=sr['comicimage'],comicissues=sr['issues'],imported='yes',ogcname=ogcname) #imported=comicstoIMP,ogcname=ogcname)
#return serve_template(templatename="searchfix.html", title="Error Check", comicname=sr['name'], comicid=sr['comicid'], comicyear=sr['comicyear'], comicimage=sr['comicimage'], comicissues=sr['issues'], cresults=cresults, imported='yes', ogcname=str(ogcname))
#else:
# return serve_template(templatename="searchresults.html", title='Import Results for: "' + displaycomic + '"',searchresults=sresults, type=type, imported='yes', ogcname=ogcname, name=ogcname, explicit=explicit, serinfo=None) #imported=comicstoIMP, ogcname=ogcname)
#status update.
ctrlVal = {"ComicName": ComicName}
newVal = {"Status": 'Imported',
"SRID": SRID,
"ComicID": sr['comicid']}
myDB.upsert("importresults", newVal, ctrlVal)
if resultset == 1:
self.addbyid(sr['comicid'], calledby=True, imported='yes', ogcname=ogcname)
#implog = implog + "ogcname -- " + str(ogcname) + "\n"
#cresults = self.addComic(comicid=sr['comicid'],comicname=sr['name'],comicyear=sr['comicyear'],comicpublisher=sr['publisher'],comicimage=sr['comicimage'],comicissues=sr['issues'],imported='yes',ogcname=ogcname) #imported=comicstoIMP,ogcname=ogcname)
#return serve_template(templatename="searchfix.html", title="Error Check", comicname=sr['name'], comicid=sr['comicid'], comicyear=sr['comicyear'], comicimage=sr['comicimage'], comicissues=sr['issues'], cresults=cresults, imported='yes', ogcname=str(ogcname))
#else:
#return serve_template(templatename="searchresults.html", title='Import Results for: "' + displaycomic + '"',searchresults=sresults, type=type, imported='yes', ogcname=ogcname, name=ogcname, explicit=explicit, serinfo=None) #imported=comicstoIMP, ogcname=ogcname)
#status update.
ctrlVal = {"ComicName": ComicName}
newVal = {"Status": 'Imported',
"SRID": SRID,
"ComicID": sr['comicid']}
myDB.upsert("importresults", newVal, ctrlVal)
preSearchit.exposed = True
@@ -3420,18 +3438,23 @@ class WebInterface(object):
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % ComicID)
comic_config.exposed = True
def readlistOptions(self, send2read=0, tab_enable=0, tab_host=None, tab_user=None, tab_pass=None, tab_directory=None):
mylar.SEND2READ = int(send2read)
mylar.TAB_ENABLE = int(tab_enable)
mylar.TAB_HOST = tab_host
mylar.TAB_USER = tab_user
mylar.TAB_PASS = tab_pass
mylar.TAB_DIRECTORY = tab_directory
mylar.config_write()
raise cherrypy.HTTPRedirect("readlist")
readlistOptions.exposed = True
def readOptions(self, StoryArcID=None, StoryArcName=None, read2filename=0, storyarcdir=0, copy2arcdir=0):
print 'initial'
print mylar.READ2FILENAME
print mylar.STORYARCDIR
print mylar.COPY2ARCDIR
mylar.READ2FILENAME = int(read2filename)
mylar.STORYARCDIR = int(storyarcdir)
mylar.COPY2ARCDIR = int(copy2arcdir)
print 'after int'
print mylar.READ2FILENAME
print mylar.STORYARCDIR
print mylar.COPY2ARCDIR
mylar.config_write()
#force the check/creation of directory com_location here
@@ -3443,7 +3466,7 @@ class WebInterface(object):
logger.fdebug("Updated Directory doesn't exist! - attempting to create now.")
filechecker.validateAndCreateDirectory(arcdir, True)
if StoryArcID is not None:
raise cherrypy.HTTPRedirect("detailReadlist?StoryArcID=%s&StoryArcName=%s" % (StoryArcID, StoryArcName))
raise cherrypy.HTTPRedirect("detailStoryArc?StoryArcID=%s&StoryArcName=%s" % (StoryArcID, StoryArcName))
else:
raise cherrypy.HTTPRedirect("readlist")
readOptions.exposed = True
@@ -3910,7 +3933,6 @@ class WebInterface(object):
CreateFolders.exposed = True
def getPushbulletDevices(self, api=None):
logger.fdebug('here')
notifythis = notifiers.pushbullet
result = notifythis.get_devices(api)
if result:
@@ -3918,3 +3940,13 @@ class WebInterface(object):
else:
return 'Error sending Pushbullet notifications.'
getPushbulletDevices.exposed = True
def syncfiles(self):
#3 statuses exist for the readlist.
# Added (Not Read) - the issue has been added to the readlist and is waiting to be 'sent' to your reading client.
# Read - the issue has been read.
# Not Read - the issue has been downloaded to your reading client after a syncfiles has taken place.
read = readinglist.Readinglist()
threading.Thread(target=read.syncreading).start()
syncfiles.exposed = True
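Read together with the readlist counters earlier in this file, the comments above imply a simple forward-only lifecycle. A sketch of that flow, assuming the 'Added' / 'Downloaded' / 'Read' status strings used elsewhere in the diff (the transition table itself is illustrative; the real state changes live in the readinglist module):

READLIST_FLOW = {'Added': 'Downloaded',  # a successful sync/send advances the issue
                 'Downloaded': 'Read'}   # marked manually once actually read

def next_status(current):
    # statuses with no onward transition (e.g. 'Read') stay where they are
    return READLIST_FLOW.get(current, current)

print next_status('Added')  # Downloaded
print next_status('Read')   # Read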

View File (mylar/weeklypull.py)

@@ -29,7 +29,7 @@ import datetime
import shutil
import mylar
from mylar import db, updater, helpers, logger, newpull
from mylar import db, updater, helpers, logger, newpull, importer, mb
def pullit(forcecheck=None):
myDB = db.DBConnection()
@@ -451,6 +451,7 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
logger.info(u"Checking the Weekly Releases list for comics I'm watching...")
else:
logger.info('Checking the Future Releases list for upcoming comics I am watching for...')
myDB = db.DBConnection()
not_t = ['TP',
@@ -482,36 +483,45 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
b_list = []
comicid = []
mylardb = os.path.join(mylar.DATA_DIR, "mylar.db")
con = sqlite3.connect(str(mylardb))
with con:
cur = con.cursor()
# if it's a one-off check (during an add series), load the comicname here and ignore below.
if comic1off_name:
logger.fdebug("This is a one-off for " + comic1off_name + '[ latest issue: ' + str(issue) + ' ]')
lines.append(comic1off_name.strip())
unlines.append(comic1off_name.strip())
comicid.append(comic1off_id)
latestissue.append(issue)
w = 1
# if it's a one-off check (during an add series), load the comicname here and ignore below.
if comic1off_name:
logger.fdebug("This is a one-off for " + comic1off_name + '[ latest issue: ' + str(issue) + ' ]')
lines.append(comic1off_name.strip())
unlines.append(comic1off_name.strip())
comicid.append(comic1off_id)
latestissue.append(issue)
w = 1
else:
#let's read in the comic.watchlist from the db here
#cur.execute("SELECT ComicID, ComicName_Filesafe, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing, AlternateSearch, LatestIssue from comics WHERE Status = 'Active'")
weeklylist = []
comiclist = myDB.select("SELECT * FROM comics WHERE Status='Active'")
if comiclist is None:
pass
else:
#let's read in the comic.watchlist from the db here
cur.execute("SELECT ComicID, ComicName_Filesafe, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing, AlternateSearch, LatestIssue from comics WHERE Status = 'Active'")
while True:
watchd = cur.fetchone()
#print ("watchd: " + str(watchd))
if watchd is None:
break
if 'Present' in watchd[4] or (helpers.now()[:4] in watchd[4]) or watchd[6] == 1:
# this gets buggered up when series are named the same, and one ends in the current
# year, and the new series starts in the same year - ie. Avengers
# lets' grab the latest issue date and see how far it is from current
# anything > 45 days we'll assume it's a false match ;)
logger.fdebug("ComicName: " + watchd[1])
latestdate = watchd[5]
for weekly in comiclist:
#assign it.
weeklylist.append({"ComicName": weekly['ComicName'],
"ComicID": weekly['ComicID'],
"ComicName_Filesafe": weekly['ComicName_Filesafe'],
"ComicYear": weekly['ComicYear'],
"ComicPublisher": weekly['ComicPublisher'],
"ComicPublished": weekly['ComicPublished'],
"LatestDate": weekly['LatestDate'],
"LatestIssue": weekly['LatestIssue'],
"ForceContinuing": weekly['ForceContinuing'],
"AlternateSearch": weekly['AlternateSearch']})
if len(weeklylist) > 0:
for week in weeklylist:
if 'Present' in week['ComicPublished'] or (helpers.now()[:4] in week['ComicPublished']) or week['ForceContinuing'] == 1:
# this gets buggered up when series are named the same, and one ends in the current
# year, and the new series starts in the same year - ie. Avengers
# lets' grab the latest issue date and see how far it is from current
# anything > 45 days we'll assume it's a false match ;)
logger.fdebug("ComicName: " + week['ComicName'])
latestdate = week['LatestDate']
logger.fdebug("latestdate: " + str(latestdate))
if latestdate[8:] == '':
logger.fdebug("invalid date " + str(latestdate) + " appending 01 for day for continuation.")
@@ -523,25 +533,25 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
logger.fdebug("c_date : " + str(c_date) + " ... n_date : " + str(n_date))
recentchk = (n_date - c_date).days
logger.fdebug("recentchk: " + str(recentchk) + " days")
chklimit = helpers.checkthepub(watchd[0])
chklimit = helpers.checkthepub(week['ComicID'])
logger.fdebug("Check date limit set to : " + str(chklimit))
logger.fdebug(" ----- ")
if recentchk < int(chklimit) or watchd[6] == 1:
if watchd[6] == 1:
if recentchk < int(chklimit) or week['ForceContinuing'] == 1:
if week['ForceContinuing'] == 1:
logger.fdebug('Forcing Continuing Series enabled for series...')
# let's not even bother with comics that are not in the Present.
a_list.append(watchd[1])
b_list.append(watchd[2])
comicid.append(watchd[0])
pubdate.append(watchd[4])
latestissue.append(watchd[8])
a_list.append(week['ComicName_Filesafe'])
b_list.append(week['ComicYear'])
comicid.append(week['ComicID'])
pubdate.append(week['ComicPublished'])
latestissue.append(week['LatestIssue'])
lines.append(a_list[w].strip())
unlines.append(a_list[w].strip())
w+=1 # we need to increment the count here, so we don't count the same comics twice (albeit with alternate names)
#here we load in the alternate search names for a series and assign them the comicid and
#alternate names
Altload = helpers.LoadAlternateSearchNames(watchd[7], watchd[0])
Altload = helpers.LoadAlternateSearchNames(week['AlternateSearch'], week['ComicID'])
if Altload == 'no results':
pass
else:
@@ -556,10 +566,10 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
break
cleanedname = altval['AlternateName']
a_list.append(altval['AlternateName'])
b_list.append(watchd[2])
b_list.append(week['ComicYear'])
comicid.append(alt_cid)
pubdate.append(watchd[4])
latestissue.append(watchd[8])
pubdate.append(week['ComicPublished'])
latestissue.append(week['LatestIssue'])
lines.append(a_list[w+wc].strip())
unlines.append(a_list[w+wc].strip())
logger.fdebug('loading in Alternate name for ' + str(cleanedname))
@@ -567,17 +577,6 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
wc+=1
w+=wc
#-- to be removed -
#print ( "Comic:" + str(a_list[w]) + " Year: " + str(b_list[w]) )
#if "WOLVERINE AND THE X-MEN" in str(a_list[w]): a_list[w] = "WOLVERINE AND X-MEN"
#lines.append(a_list[w].strip())
#unlines.append(a_list[w].strip())
#llen.append(a_list[w].splitlines())
#ccname.append(a_list[w].strip())
#tmpwords = a_list[w].split(None)
#ltmpwords = len(tmpwords)
#ltmp = 1
#-- end to be removed
else:
logger.fdebug("Determined to not be a Continuing series at this time.")
cnt = int(w-1)
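The hunk above swaps weeklypull's private sqlite3 connection for the shared db module, so every query funnels through one serialized access point instead of a second competing connection. Side by side (the before block is reconstructed for illustration):

# before: an independent sqlite3 connection competing with the rest of Mylar
# import sqlite3
# con = sqlite3.connect(os.path.join(mylar.DATA_DIR, 'mylar.db'))
# cur = con.cursor()
# cur.execute("SELECT * FROM comics WHERE Status='Active'")

# after: reuse the serialized wrapper like every other module
from mylar import db
myDB = db.DBConnection()
comiclist = myDB.select("SELECT * FROM comics WHERE Status='Active'")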
@@ -900,7 +899,7 @@ def checkthis(datecheck,datestatus,usedate):
return valid_check
def weekly_singlecopy(comicid, issuenum, file, path, module=None):
def weekly_singlecopy(comicid, issuenum, file, path, module=None, issueid=None):
if module is None:
module = ''
module += '[WEEKLY-PULL]'
@@ -946,5 +945,152 @@ def weekly_singlecopy(comicid, issuenum, file, path, module=None):
return
logger.info(module + ' Successfully copied to ' + desfile.encode('utf-8').strip())
if mylar.SEND2READ:
logger.info(module + " Send to Reading List enabled for new pulls. Adding to your readlist in the status of 'Added'")
if issueid is None:
chkthis = myDB.selectone('SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?',[comicid, helpers.issuedigits(issuenum)]).fetchone()
annchk = myDB.selectone('SELECT * FROM annuals WHERE ComicID=? AND Int_IssueNumber=?',[comicid, helpers.issuedigits(issuenum)]).fetchone()
if chkthis is None and annchk is None:
logger.warn(module + ' Unable to locate issue within your series watchlist.')
return
if chkthis is None:
issueid = annchk['IssueID']
elif annchk is None:
issueid = chkthis['IssueID']
else:
#if issue number exists in issues and annuals for given series, break down by year.
#get pulldate.
pullcomp = pulldate[:4]
isscomp = chkthis['ReleaseDate'][:4]
anncomp = annchk['ReleaseDate'][:4]
logger.info(module + ' Comparing :' + str(pullcomp) + ' to issdate: ' + str(isscomp) + ' to annyear: ' + str(anncomp))
if int(pullcomp) == int(isscomp) and int(pullcomp) != int(anncomp):
issueid = chkthis['IssueID']
elif int(pullcomp) == int(anncomp) and int(pullcomp) != int(isscomp):
issueid = annchk['IssueID']
else:
if 'annual' in file.lower():
issueid = annchk['IssueID']
else:
logger.info(module + ' Unsure as to the exact issue this is. Not adding to the Reading list at this time.')
return
read = mylar.readinglist.Readinglist(issueid)
read.addtoreadlist()
return
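When the same issue number exists as both a regular issue and an annual, the SEND2READ block above disambiguates by comparing publication years against the pull date. Condensed (a sketch; the arguments are illustrative 'YYYY-MM-DD' strings and ids):

def pick_issueid(pulldate, iss_release, ann_release, iss_id, ann_id, filename):
    pullcomp, isscomp, anncomp = int(pulldate[:4]), int(iss_release[:4]), int(ann_release[:4])
    if pullcomp == isscomp and pullcomp != anncomp:
        return iss_id   # pull year matches only the regular issue
    if pullcomp == anncomp and pullcomp != isscomp:
        return ann_id   # pull year matches only the annual
    if 'annual' in filename.lower():
        return ann_id   # both or neither matched, so trust the filename
    return None         # still ambiguous; the caller skips the readlist add

print pick_issueid('2014-07-09', '2014-07-09', '2013-08-28', 'iss123', 'ann456', 'Series 005 (2014).cbz')  # iss123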
def future_check():
# this is the function that will check the futureupcoming table
# for series that have yet to be released and have no CV data associated with them
# ie. #1 issues would fall into this as there is no series data to poll against until it's released.
# Mylar will look for #1 issues, and in finding any will do the following:
# - check comicvine to see if the series data has been released and / or issue data
# - will automatically import the series (Add A Series) upon finding match
# - will then proceed to mark the issue as Wanted, then remove from the futureupcoming table
# - will then attempt to download the issue(s) in question.
# future to-do
# specify whether you want to 'add a series (Watch For)' or 'mark an issue as a one-off download'.
# currently the 'add series' option in the futurepulllist will attempt to add a series as per normal.
myDB = db.DBConnection()
chkfuture = myDB.select("SELECT * FROM futureupcoming WHERE IssueNumber='1' OR IssueNumber='0'") #is not NULL")
if chkfuture is None:
logger.info("There are not any series on your future-list that I consider to be a NEW series")
return
cflist = []
#load the values in entry-by-entry so that we can cleanly re-query the db.
for cf in chkfuture:
cflist.append({"ComicName": cf['ComicName'],
"IssueDate": cf['IssueDate'],
"IssueNumber": cf['IssueNumber'], #this should be all #1's as the sql above limits the hits.
"Publisher": cf['Publisher'],
"Status": cf['Status']})
logger.fdebug('cflist: ' + str(cflist))
#now we load in
if len(cflist) == 0:
logger.info('No series have been marked as being on auto-watch.')
return
logger.info('I will be looking to see if any information has been released for ' + str(len(cflist)) + ' series that are NEW series')
#limit the search to just the 'current year' since if it's anything but a #1, it should have associated data already.
#limittheyear = []
#limittheyear.append(cf['IssueDate'][-4:])
for ser in cflist:
matched = False
theissdate = ser['IssueDate'][-4:]
if not theissdate.startswith('20'):
theissdate = ser['IssueDate'][:4]
logger.info('looking for new data for ' + ser['ComicName'] + '[#' + str(ser['IssueNumber']) + '] (' + str(theissdate) + ')')
searchresults, explicit = mb.findComic(ser['ComicName'], mode='pullseries', issue=ser['IssueNumber'], limityear=theissdate, explicit='all')
#logger.info('[' + ser['ComicName'] + '] searchresults: ' + str(searchresults))
if len(searchresults) > 1:
logger.info('publisher: ' + str(ser['Publisher']))
logger.info('More than one result returned - this may have to be a manual add')
matches = []
for sr in searchresults:
tmpsername = re.sub('[\'\*\^\%\$\#\@\!\-\/\,\.\:\(\)]','', ser['ComicName']).strip()
tmpsrname = re.sub('[\'\*\^\%\$\#\@\!\-\/\,\.\:\(\)]','', sr['name']).strip()
if tmpsername.lower() == tmpsrname.lower() and len(tmpsername) <= len(tmpsrname):
logger.info('name & lengths matched : ' + sr['name'])
if str(sr['comicyear']) == str(theissdate):
logger.info('matched to : ' + str(theissdate))
matches.append(sr)
if len(matches) == 1:
logger.info('Narrowed down to one series as a direct match: ' + matches[0]['name'] + '[' + str(matches[0]['comicid']) + ']')
cid = matches[0]['comicid']
matched = True
elif len(searchresults) == 1:
matched = True
cid = searchresults[0]['comicid']
else:
logger.info('No series information available as of yet for ' + ser['ComicName'] + '[#' + str(ser['IssueNumber']) + '] (' + str(theissdate) + ')')
continue
if matched:
#we should probably load all additional issues for the series on the futureupcoming list that are marked as Wanted and then
#throw them to the importer as a tuple, and once imported the import can run the additional search against them.
#now we scan for additional issues of the same series on the upcoming list and mark them accordingly.
chkthewanted = []
chkwant = myDB.select("SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber != '1' AND Status='Wanted'", [ser['ComicName']])
if chkwant is None:
logger.info('No extra issues to mark at this time for ' + ser['ComicName'])
else:
for chk in chkwant:
chkthewanted.append({"ComicName": chk['ComicName'],
"IssueDate": chk['IssueDate'],
"IssueNumber": chk['IssueNumber'], #this should be all #1's as the sql above limits the hits.
"Publisher": chk['Publisher'],
"Status": chk['Status']})
logger.info('Marking ' + str(len(chkthewanted)) + ' additional issues as Wanted from ' + ser['ComicName'] + ' series as requested.')
future_check_add(cid, ser, chkthewanted, theissdate)
logger.info('Finished attempting to auto-add new series.')
return
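The candidate-filtering step above (strip punctuation, compare names case-insensitively, require the candidate name to be at least as long, then confirm the year) can be isolated for testing. A minimal sketch, assuming search results are dicts carrying 'name' and 'comicyear' keys as returned by mb.findComic; the helper name series_matches is hypothetical:

    import re

    _PUNCT = re.compile(r"['\*\^\%\$\#\@\!\-/,\.:\(\)]")

    def series_matches(wanted_name, wanted_year, result):
        #same normalization and checks used in future_check() above.
        a = _PUNCT.sub('', wanted_name).strip()
        b = _PUNCT.sub('', result['name']).strip()
        return (a.lower() == b.lower()
                and len(a) <= len(b)
                and str(result['comicyear']) == str(wanted_year))

    #e.g. series_matches('Spider-Verse', '2014', {'name': 'Spider-Verse', 'comicyear': '2014'}) returns True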
def future_check_add(comicid, serinfo, chkthewanted=None, theissdate=None):
    #In order not to error out when adding a series with absolutely NO issue data, we need to 'fake up' some values:
    #latestdate = the 'On sale' date from the futurepull-list, OR the shipping date if that is not available.
    #latestiss = the IssueNumber of the first issue (this should always be #1, but might change at some point).
    ser = serinfo
    if theissdate is None:
        theissdate = ser['IssueDate'][-4:]
        if not theissdate.startswith('20'):
            theissdate = ser['IssueDate'][:4]
    latestissueinfo = []
    latestissueinfo.append({"latestdate": ser['IssueDate'],
                            "latestiss": ser['IssueNumber']})
    logger.fdebug('sending latestissueinfo from future as : ' + str(latestissueinfo))
    chktheadd = importer.addComictoDB(comicid, "no", chkwant=chkthewanted, latestissueinfo=latestissueinfo, calledfrom="futurecheck")
    if chktheadd != 'Exists':
        logger.info('Successfully imported ' + ser['ComicName'] + ' (' + str(theissdate) + ')')
        myDB = db.DBConnection()
        myDB.action('DELETE from futureupcoming WHERE ComicName=?', [ser['ComicName']])
        logger.info('Removed ' + ser['ComicName'] + ' (' + str(theissdate) + ') from the future upcoming list as it is now added.')
    return
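The 'faked' seed handed to the importer is just a one-element list holding the pull-list date and issue number, which lets a brand-new series be added before ComicVine carries any issue data for it. For example (values illustrative only):

    latestissueinfo = [{"latestdate": "2015-01-07",   #'On sale' (or shipping) date from the future pull-list
                        "latestiss": "1"}]            #first issue number - normally #1
    importer.addComictoDB(comicid, "no", chkwant=chkthewanted,
                          latestissueinfo=latestissueinfo, calledfrom="futurecheck")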


@@ -28,4 +28,5 @@ class Weekly():
     def run(self):
         logger.info('[WEEKLY] Checking Weekly Pull-list for new releases/updates')
         mylar.weeklypull.pullit()
+        mylar.weeklypull.future_check()
         return
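With the hunk above, the weekly scheduler's run() now performs the auto-watch pass on every cycle: pullit() refreshes/recreates the pull-list, and future_check() then re-queries ComicVine for any watched new series that have since received data, importing them automatically. In sketch form (Mylar's scheduler drives the Weekly class; the direct call here is for illustration only):

    job = Weekly()
    job.run()   #refresh the pull-list, then auto-add any newly-available watched series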