mirror of https://github.com/evilhero/mylar
IMP: New scheduler tab (Manage / Activity) where you can see each job's current run status, next run time, and previous run times, as well as force/pause a job
FIX: Disabling torrents will now properly hide torrent information
IMP: Specified the Deluge daemon port as an on-screen tip for more detail
IMP: Added 100, 200, and ALL as viewable watchlist views
FIX: When viewing the pull-list with annual integration enabled, a present annual would incorrectly link to an invalid annual series instead of the actual series itself
IMP: Added more detailed error messages for metatagging errors and better handling of stranded files during cleanup
IMP: Improved some handling for weekly pull-list one-offs and refactored the nzb/one-off post-processing into a separate function for future callables
IMP: Moved all the main URL locations for public torrent sites to the init module so they can be cascaded down for use in other modules as globals
IMP: Added a 'deep_search_32p' variable in config.ini for specific usage with 32P: when there is more than one result, it will dig deeper into each result to try to figure out whether there are series matches, as opposed to the default, which only uses the ref32p table if available, or just takes the first hit in a multiple-series search result and ignores the remainder
FIX: Fixed some unknown characters appearing in the pull-list due to unicode-related conversion problems
FIX: Fixed some special cases of file-parsing errors due to the Volume label being named differently than expected
FIX: Added a 3s pause between experimental searches to try not to hit their frequency limitation
IMP: Weekly pull-list one-offs will now show a status of Snatched/Downloaded as required
FIX: Fixed some Deluge parameter problems when using the auto-snatch torrent script/option
IMP: Changed the download location in the auto-snatch option to an environment variable instead of passing it directly, to avoid unicode-related problems
FIX: Fixed some magnet-related issues for torrents when using a watchdir + TPSE
FIX: Added a more verbose error message for rtorrent connection issues
FIX: Could not connect to the rtorrent client if no username/password were provided
IMP: Set the db updater to run every 5 minutes against the watchlist, automatically refreshing the oldest-updated series that is more than 5 hours old each time (a forced db update from the Activity/Job Schedulers page will run the db updater against the entire watchlist in sequence)
IMP: Attempt to handle long paths on Windows (i.e. > 256 characters) by prepending the unicode Windows API marker to the import-a-directory path (Windows only)
IMP: When manually metatagging a series, the series is now updated after all the metatagging has completed, as opposed to after each issue
IMP: Will now display available inkdrops on the Config / Search Providers tab when using 32P (a future version will utilize/indicate an inkdrop threshold when downloading)
This commit is contained in:
parent d2b8ffebad
commit 21eee17344

Mylar.py (2 changed lines)
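The 'deep_search_32p' toggle above is just a config.ini entry. A minimal sketch of how enabling it might look (the [Torrents] section name and the enable_32p key are assumptions for illustration; only deep_search_32p itself is named by this commit):

    [Torrents]
    enable_32p = 1       # assumed key: 32P must already be enabled for the option to matter
    deep_search_32p = 1  # named above: dig into each result of a multi-series hit

When left at the default, a multi-series result falls back to the ref32p table if available, or simply takes the first hit and ignores the remainder.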
@@ -63,7 +63,7 @@ def main():
     mylar.SYS_ENCODING = 'UTF-8'
 
     # Set up and gather command line arguments
-    parser = argparse.ArgumentParser(description='Comic Book add-on for SABnzbd+')
+    parser = argparse.ArgumentParser(description='Automated Comic Book Downloader')
 
     parser.add_argument('-v', '--verbose', action='store_true', help='Increase console logging verbosity')
     parser.add_argument('-q', '--quiet', action='store_true', help='Turn off console logging')

@@ -24,7 +24,7 @@
 %if mylar.RENAME_FILES:
 <a id="menu_link_refresh" onclick="doAjaxCall('manualRename?comicid=${comic['ComicID']}', $(this),'table')" data-success="Renaming files.">Rename Files</a>
 %endif
-<a id="menu_link_refresh" onclick="doAjaxCall('forceRescan?ComicID=${comic['ComicID']}', $(this),'table')" data-success="${comic['ComicName']} is being rescanned">Recheck Files</a>
+<a id="menu_link_refresh" onclick="doAjaxCall('forceRescan?ComicID=${comic['ComicID']}', $(this),true);return true;" data-success="${comic['ComicName']} is being rescanned">Recheck Files</a>
 %if mylar.ENABLE_META:
 <a id="menu_link_refresh" onclick="doAjaxCall('group_metatag?ComicID=${comic['ComicID']}&dirName=${comic['ComicLocation'] |u}', $(this),'table')" data-success="(re)tagging every issue present for '${comic['ComicName']}'">Manual MetaTagging</a>
 %endif

@@ -299,7 +299,7 @@
 
 <form action="markissues" method="get" id="markissues">
 <div id="markissue">Mark selected issues as
-<select name="action" onChange="doAjaxCall('markissues',$(this),'table',true);" data-success="selected issues marked">
+<select name="action" onChange="doAjaxCall('markissues',$(this),'table',true)" data-success="selected issues marked">
 <option disabled="disabled" selected="selected">Choose...</option>
 <option value="Wanted">Wanted</option>
 <option value="Skipped">Skipped</option>

@@ -414,7 +414,7 @@
 <a href="#" title="Add to Reading List" onclick="doAjaxCall('addtoreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="${comic['ComicName']} #${issue['Issue_Number']} added to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" class="highqual" /></a>
 %else:
 <a href="#" title="Retry the same download again" onclick="doAjaxCall('queueit?ComicID=${issue['ComicID']}&IssueID=${issue['IssueID']}&ComicIssue=${issue['Issue_Number']}&mode=want', $(this),'table')" data-success="Retrying the same version of '${issue['ComicName']}' '${issue['Issue_Number']}'"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
-<a href="#" title="Mark issue as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${issue['IssueID']}&ComicID=${issue['ComicID']}',$(this),'table')" data-success="'${issue['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>
+<a href="#" title="Mark issue as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${issue['IssueID']}&ComicID=${issue['ComicID']}',$(this),'table',true);" data-success="'${issue['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>
 %endif
 <!--
 <a href="#" onclick="doAjaxCall('archiveissue?IssueID=${issue['IssueID']}&comicid=${comic['ComicID']}',$(this),'table')"><img src="interfaces/default/images/archive_icon.png" height="25" width="25" title="Mark issue as Archived" class="highqual" /></a>

@@ -429,7 +429,6 @@
 
-
 <div class="table_wrapper">
 
 %if annuals:
 <h1>Annuals</h1>
 %for aninfo in annualinfo:

@@ -813,7 +812,8 @@
 }
 
 $(document).ready(function() {
-    $('table.display').DataTable();
+    $("#issue_table").dataTable();
+    $("#annual_table").dataTable();
     initThisPage();
 });
 </script>

@@ -398,7 +398,7 @@
 <input id="enable_torrents" type="checkbox" onclick="initConfigCheckbox($(this));" name="enable_torrents" value=1 ${config['enable_torrents']} /><label>Use Torrents</label>
 </div>
 <div class="config">
-<div class="row">
+<div class="row">
 <label>Minimum # of seeders</label>
 <input type="text" name="minseeds" value="${config['minseeds']}" size="10">
 </div>

@@ -410,8 +410,7 @@
 <input type="radio" name="torrent_downloader" id="torrent_downloader_deluge" value="4" ${config['torrent_downloader_deluge']}> Deluge
 <input type="radio" name="torrent_downloader" id="torrent_downloader_qbittorrent" value="5" ${config['torrent_downloader_qbittorrent']}> qBittorrent
 </center>
 </div>
 </fieldset>
 
 <fieldset id="watchlist_options">
 <div class="row checkbox left clearfix">
 <input id="local_watchdir" type="checkbox" name="torrent_local" value="1" ${config['torrent_local']} /><label>Local Watch dir</label>

@@ -547,6 +546,7 @@
 <div class="row">
 <label>Deluge Host:Port </label>
 <input type="text" name="deluge_host" value="${config['deluge_host']}" size="30">
+<small>port uses the deluge daemon port (remote connection to daemon has to be enabled)</small>
 </div>
 <div class="row">
 <label>Deluge Username</label>

@@ -591,7 +591,7 @@
 <small>Automatically start torrent on successful loading within qBittorrent client</small>
 </div>
 </fieldset>
 
 </div>
 </td>
 </tr>
 

@@ -609,7 +609,7 @@
 <label>RSS Interval Feed Check</label>
 <input type="text" name="rss_checkinterval" value="${config['rss_checkinterval']}" size="6" /><small>(Mins)</small>
 <a href="#" style="float:right" type="button" onclick="doAjaxCall('force_rss',$(this))" data-success="RSS Force now running" data-error="Error trying to retrieve RSS Feeds"><span class="ui-icon ui-icon-extlink"></span>Force RSS</a>
-</br><small><% rss_last=mylar.RSS_LASTRUN %>last run: ${rss_last}</small>
+</br><small>last run: ${config['rss_last']}</small>
 </div>
 </fieldset>
 <fieldset>

@@ -709,6 +709,15 @@
 <input type="button" value="Test Connection" id="test_32p" style="float:center" /></br>
 <input type="text" name="status32p" style="text-align:center; font-size:11px;" id="status32p" size="50" DISABLED />
 </div>
+<div name="inkdrops32p" id="inkdrops32p" style="font-size:11px;" align="center">
+    <%
+        if mylar.INKDROPS_32P is None:
+            inkdrops = ''
+        else:
+            inkdrops = 'Inkdrops Available: ' + '{:06,}'.format(float(mylar.INKDROPS_32P))
+    %>
+    <span>${inkdrops}</span>
+</div>
 </fieldset>
 </div>
 <div class="row checkbox left clearfix">

@@ -1696,6 +1705,10 @@
     });
 });
+
+function numberWithCommas(x) {
+    return x.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
+};
 
 $("#test_32p").click(function(){
     $.get('test_32p',
         function(data){

@@ -1703,7 +1716,11 @@
             alert(data.error);
             return;
         }
-        $('#status32p').val(data);
+        var obj = JSON.parse(data);
+        var inkd = parseInt(obj['inkdrops']);
+        inkd = numberWithCommas(inkd);
+        $('#status32p').val(obj['status']);
+        $('#inkdrops32p span').text('Inkdrops Available: '+inkd);
     });
 });
 

@@ -920,6 +920,46 @@ div#artistheader h2 a {
     font-weight: bold;
     font-family: "Trebuchet MS", Helvetica, Arial, sans-serif;
 }
+#scheduler_detail th#job {
+    width: 50px;
+    text-align: center;
+}
+#scheduler_detail th#nextrun {
+    width: 100px;
+    text-align: center;
+}
+#scheduler_detail th#prevrun {
+    width: 100px;
+    text-align: center;
+}
+#scheduler_detail th#options {
+    width: 100px;
+    text-align: center;
+}
+#scheduler_detail td#job {
+    width: 50px;
+    text-align: left;
+    vertical-align: middle;
+    font-size: 12px;
+}
+#scheduler_detail td#nextrun {
+    width: 100px;
+    text-align: center;
+    vertical-align: middle;
+    font-size: 12px;
+}
+#scheduler_detail td#prevrun {
+    width: 100px;
+    text-align: center;
+    vertical-align: middle;
+    font-size: 12px;
+}
+#scheduler_detail td#options {
+    width: 100px;
+    text-align: center;
+    vertical-align: middle;
+    font-size: 12px;
+}
 #read_detail th#options {
     min-width: 150px;
     text-align: left;

@@ -121,7 +121,7 @@
     { "orderData": 10, "targets": 9 },
     { "order": [[7, 'asc'],[1, 'asc']] }
 ],
-"lengthMenu": [[10, 15, 25, 50, -1], [10, 15, 25, 50, 'All' ]],
+"lengthMenu": [[10, 15, 25, 50, 100, 200, -1], [10, 15, 25, 50, 100, 200, 'All' ]],
 "language": {
     "lengthMenu":"Show _MENU_ results per page",
     "emptyTable": "No results",

@@ -22,6 +22,7 @@
 <li><a href="#tabs-1">Scan Comic Library</a></li>
 <li><a href="#tabs-2">Manual Post-Processing</a></li>
 <li><a href="#tabs-3">Advanced Options</a></li>
+<li><a href="#tabs-4" title="jobs">Activity / Jobs</a></li>
 </ul>
 <div id="tabs-1" class="configtable">
 <div style="float:right;position:absolute;right:0;top;0;margin-right:50px;">

@@ -139,19 +140,8 @@
 
 <div id="tabs-3">
 <table summary="Advanced Options" class="configtable">
-
-<tr>
-<td>
-<fieldset>
-<div class="links">
-<legend>Force Options</legend>
-<a href="#" onclick="doAjaxCall('forceSearch',$(this))" data-success="Checking for wanted issues successful" data-error="Error checking wanted issues"><span class="ui-icon ui-icon-search"></span>Force Check for Wanted Issues</a>
-<a href="#" onclick="doAjaxCall('forceUpdate',$(this))" data-success="Update active comics successful" data-error="Error forcing update Comics"><span class="ui-icon ui-icon-heart"></span>Force Update Active Comics</a>
-<a href="#" onclick="doAjaxCall('checkGithub',$(this))" data-success="Checking for update successful" data-error="Error checking for update"><span class="ui-icon ui-icon-refresh"></span>Check for Mylar Updates</a>
-</div>
-</fieldset>
-</td>
-<td>
+<tr>
+<td>
 <fieldset>
 <legend>Export</legend>
 <div class="links">

@@ -159,25 +149,100 @@
 <a href="#" onclick="doAjaxCall('wanted_Export?mode=Downloaded',$(this))" data-sucess="Exported to Downloaded list." data-error="Failed to export. Check logs"><span class="ui-icon ui-icon-refresh"></span>Export Downloaded to CSV</a>
 </div>
 </fieldset>
 </br>
 </td>
 <td>
 <fieldset>
 <legend>Additional Options</legend>
-<div classs="links">
+<div class="links">
 <a href="readlist">Reading List Management</a><br/>
 <a href="storyarc_main">Story Arc Management</a><br/>
 <a href="importResults">Import Results Management</a>
 </div>
 </fieldset>
 </td>
 </tr>
-</td>
-</tr>
 </table>
 </div>
 
+<div id="tabs-4">
+<div id="curtime" style="float:right;"></div>
+<table summary="Activity / Jobs" width="100%" cellpadding="6px" cellspacing="2px">
+<legend><center><h1>Schedulers<h1><center></legend>
+<br />
+<thead>
+<tr border="1">
+    <th id="job" style="width: 50px;text-align: center;">Name</th>
+    <th id="nextrun" style="width: 90px;text-align: center;">Next</th>
+    <th id="interval" style="width: 20px;text-align: center;">Interval</th>
+    <th id="prevrun" style="width: 90px;text-align: center;">Prev</th>
+    <th id="status" style="width: 50px;text-align: center;">Status</th>
+    <th id="options" style="width: 50px;;text-align: center;">Options</th>
+</tr>
+</thead>
+<tbody>
+%for j in jobs:
+    <%
+        if j['status'] == 'Paused':
+            grade = '#D9150F'
+        elif j['status'] == 'Waiting':
+            grade = '#0F49D9'
+        elif j['status'] == 'Running':
+            grade = '#55D90F'
+    %>
+    <tr>
+        <td id="job" style="width: 50px;text-align: center;">${j['jobname']}</td>
+        <td id="nextrun" style="width: 90px;text-align: center;">${j['next_run_datetime']}</td>
+        <td id="interval" style="width: 20px;text-align: center;">${j['interval']}</td>
+        <td id="prevrun" style="width: 90px;text-align: center;">${j['prev_run_datetime']}</td>
+        <td id="status" style="width: 50px;text-align: center;color: ${grade};">${j['status']}</td>
+        <td id="options" style="width: 50px;text-align: center;">
+        %if any([j['status'] == 'Running', j['status'] == 'Waiting']):
+            <a href="#" onclick="doAjaxCall('jobmanage?job=${j['jobname']}&mode=pause',$(this),'tabs',true);return true;" data-success="Successfully paused ${j['jobname']}" data-error="Error Pausing ${j['jobname']}."><span class="ui-icon ui-icon-stop"></span>Pause</a>
+        %elif j['status'] == 'Paused':
+            <a href="#" onclick="doAjaxCall('jobmanage?job=${j['jobname']}&mode=resume',$(this),'tabs',true);return true;" data-success="Successfully resumed ${j['jobname']}" data-error="Error Pausing ${j['jobname']}."><span class="ui-icon ui-icon-play"></span>Resume</a>
+        %endif
+        %if j['jobname'] == 'Auto-Search':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=search',$(this),'tabs',true);return true;" data-success="Force Search successfully submitted" data-error="Error checking wanted issues"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %elif j['jobname'] == 'DB Updater':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=updater',$(this),'tabs',true);return true;" data-success="Update active series now running" data-error="Error forcing update Comics"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %elif j['jobname'] == 'Weekly Pullist':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=weekly',$(this),'tabs',true);return true;" data-success="Now updating Weekly Pullist" data-error="Error forcing update to Weekly Pullist"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %elif j['jobname'] == 'Folder Monitor':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=monitor',$(this),'tabs',true);return true;" data-success="Folder Monitor now running" data-error="Error forcing Folder Monitor to run"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %elif j['jobname'] == 'RSS Feeds':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=rss',$(this),'tabs',true);return true;" data-success="Force RSS Check now running" data-error="Error forcing update Comics"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %elif j['jobname'] == 'Check Version':
+            <a href="#" onclick="doAjaxCall('schedulerForceCheck?jobid=version',$(this),'tabs',true);return true;" data-success="Version check successfully submitted" data-error="Error checking for update"><span class="ui-icon ui-icon-star"></span>Force</a>
+        %endif
+        </td>
+    </tr>
+%endfor
+</tbody>
+</table>
+</br><small><center>There could be up to a 60s delay in a given scheduler running due to other processes currently running</center></small>
+</div>
+
 </div>
 </%def>
 <%def name="javascriptIncludes()">
 <script>
+var CheckEnabled = true;
+function startTime() {
+    var today = new Date();
+    var h = today.getHours();
+    var m = today.getMinutes();
+    var s = today.getSeconds();
+    h = checkTime(h);
+    m = checkTime(m);
+    s = checkTime(s);
+    document.getElementById('curtime').innerHTML = h + ":" + m + ":" + s;
+    var t = setTimeout(startTime, 500);
+};
+function checkTime(i) {
+    if (i < 10) {i = "0" + i}; // add zero in front of numbers if < 10
+    return i;
+};
 function addScanAction() {
     $('#autoadd').append('<input type="hidden" name="scan" value=1 />');
     CheckEnabled = true;

@@ -216,7 +281,7 @@
 jQuery( "#tabs" ).tabs();
 initActions();
 initConfigCheckbox("#imp_move");
-
+startTime();
 };
 $(document).ready(function() {
 initThisPage();

@@ -10,6 +10,9 @@
 <div id="subhead_menu">
 <a href="#" id="menu_link_refresh" onclick="doAjaxCall('pullist?week=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Refresh submitted.">Refresh Pull-list</a>
 <a id="menu_link_retry" href="pullrecreate">Recreate Pull-list</a>
+<!--
 <a href="#" id="menu_link_retry" onclick="doAjaxCall('create_readlist?weeknumber=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Submitted request for reading-list generation for this week">Generate Reading-List</a>
+-->
+<a id="menu_link_scan" class="button">Download</a>
 <a href="#" id="menu_link_refresh" onclick="doAjaxCall('pullSearch?week=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Submitted background search request for new pull issues">Manually check for issues</a>
 </div>

@@ -90,16 +93,14 @@
 %if pullfilter is True:
 <td class="publisher">${weekly['PUBLISHER']}</td>
 <td class="comicname">
-%if weekly['HAVEIT'] == 'No':
-%if weekly['COMICID'] != '' and weekly['COMICID'] is not None:
+%if any([weekly['HAVEIT'] == 'No', weekly['HAVEIT'] == 'OneOff']):
+%if any([weekly['COMICID'] != '', weekly['COMICID'] is not None]):
 <a href="${weekly['LINK']}" target="_blank">${weekly['COMIC']}</a>
 %else:
 ${weekly['COMIC']}
 %endif
-%elif weekly['HAVEIT'] == 'OneOff':
-<a href="#">${weekly['COMIC']}</a>
 %else:
-<a href="comicDetails?ComicID=${weekly['COMICID']}">${weekly['COMIC']}</a>
+<a href="comicDetails?ComicID=${weekly['HAVEIT']}">${weekly['COMIC']}</a>
 %endif
 </td>
 <td class="comicnumber">${weekly['ISSUE']}</td>

@@ -1,3 +1,8 @@
-version_info = (2, 0, 0)
-version = '.'.join(str(n) for n in version_info[:3])
-release = version + ''.join(str(n) for n in version_info[3:])
+# These will be removed in APScheduler 4.0.
+#release = __import__('pkg_resources').get_distribution('APScheduler').version.split('-')[0]
+#version_info = tuple(int(x) if x.isdigit() else x for x in release.split('.'))
+#version = __version__ = '.'.join(str(x) for x in version_info[:3])
+
+version_info = (3, 3, 1)
+release = '3.3.1'
+version = __version__ = '3.3.1'

@@ -1,63 +1,93 @@
-__all__ = ('EVENT_SCHEDULER_START', 'EVENT_SCHEDULER_SHUTDOWN',
-           'EVENT_JOBSTORE_ADDED', 'EVENT_JOBSTORE_REMOVED',
-           'EVENT_JOBSTORE_JOB_ADDED', 'EVENT_JOBSTORE_JOB_REMOVED',
-           'EVENT_JOB_EXECUTED', 'EVENT_JOB_ERROR', 'EVENT_JOB_MISSED',
-           'EVENT_ALL', 'SchedulerEvent', 'JobStoreEvent', 'JobEvent')
+__all__ = ('EVENT_SCHEDULER_STARTED', 'EVENT_SCHEDULER_SHUTDOWN', 'EVENT_SCHEDULER_PAUSED',
+           'EVENT_SCHEDULER_RESUMED', 'EVENT_EXECUTOR_ADDED', 'EVENT_EXECUTOR_REMOVED',
+           'EVENT_JOBSTORE_ADDED', 'EVENT_JOBSTORE_REMOVED', 'EVENT_ALL_JOBS_REMOVED',
+           'EVENT_JOB_ADDED', 'EVENT_JOB_REMOVED', 'EVENT_JOB_MODIFIED', 'EVENT_JOB_EXECUTED',
+           'EVENT_JOB_ERROR', 'EVENT_JOB_MISSED', 'EVENT_JOB_SUBMITTED', 'EVENT_JOB_MAX_INSTANCES',
+           'SchedulerEvent', 'JobEvent', 'JobExecutionEvent')
 
 
-EVENT_SCHEDULER_START = 1        # The scheduler was started
-EVENT_SCHEDULER_SHUTDOWN = 2     # The scheduler was shut down
-EVENT_JOBSTORE_ADDED = 4         # A job store was added to the scheduler
-EVENT_JOBSTORE_REMOVED = 8       # A job store was removed from the scheduler
-EVENT_JOBSTORE_JOB_ADDED = 16    # A job was added to a job store
-EVENT_JOBSTORE_JOB_REMOVED = 32  # A job was removed from a job store
-EVENT_JOB_EXECUTED = 64          # A job was executed successfully
-EVENT_JOB_ERROR = 128            # A job raised an exception during execution
-EVENT_JOB_MISSED = 256           # A job's execution was missed
-EVENT_ALL = (EVENT_SCHEDULER_START | EVENT_SCHEDULER_SHUTDOWN |
-             EVENT_JOBSTORE_ADDED | EVENT_JOBSTORE_REMOVED |
-             EVENT_JOBSTORE_JOB_ADDED | EVENT_JOBSTORE_JOB_REMOVED |
-             EVENT_JOB_EXECUTED | EVENT_JOB_ERROR | EVENT_JOB_MISSED)
+EVENT_SCHEDULER_STARTED = EVENT_SCHEDULER_START = 2 ** 0
+EVENT_SCHEDULER_SHUTDOWN = 2 ** 1
+EVENT_SCHEDULER_PAUSED = 2 ** 2
+EVENT_SCHEDULER_RESUMED = 2 ** 3
+EVENT_EXECUTOR_ADDED = 2 ** 4
+EVENT_EXECUTOR_REMOVED = 2 ** 5
+EVENT_JOBSTORE_ADDED = 2 ** 6
+EVENT_JOBSTORE_REMOVED = 2 ** 7
+EVENT_ALL_JOBS_REMOVED = 2 ** 8
+EVENT_JOB_ADDED = 2 ** 9
+EVENT_JOB_REMOVED = 2 ** 10
+EVENT_JOB_MODIFIED = 2 ** 11
+EVENT_JOB_EXECUTED = 2 ** 12
+EVENT_JOB_ERROR = 2 ** 13
+EVENT_JOB_MISSED = 2 ** 14
+EVENT_JOB_SUBMITTED = 2 ** 15
+EVENT_JOB_MAX_INSTANCES = 2 ** 16
+EVENT_ALL = (EVENT_SCHEDULER_STARTED | EVENT_SCHEDULER_SHUTDOWN | EVENT_SCHEDULER_PAUSED |
+             EVENT_SCHEDULER_RESUMED | EVENT_EXECUTOR_ADDED | EVENT_EXECUTOR_REMOVED |
+             EVENT_JOBSTORE_ADDED | EVENT_JOBSTORE_REMOVED | EVENT_ALL_JOBS_REMOVED |
+             EVENT_JOB_ADDED | EVENT_JOB_REMOVED | EVENT_JOB_MODIFIED | EVENT_JOB_EXECUTED |
+             EVENT_JOB_ERROR | EVENT_JOB_MISSED | EVENT_JOB_SUBMITTED | EVENT_JOB_MAX_INSTANCES)
 
 
 class SchedulerEvent(object):
     """
     An event that concerns the scheduler itself.
 
-    :var code: the type code of this event
+    :ivar code: the type code of this event
+    :ivar alias: alias of the job store or executor that was added or removed (if applicable)
     """
-    def __init__(self, code):
+
+    def __init__(self, code, alias=None):
+        super(SchedulerEvent, self).__init__()
         self.code = code
 
 
-class JobStoreEvent(SchedulerEvent):
-    """
-    An event that concerns job stores.
-
-    :var alias: the alias of the job store involved
-    :var job: the new job if a job was added
-    """
-    def __init__(self, code, alias, job=None):
-        SchedulerEvent.__init__(self, code)
-        self.alias = alias
-        if job:
-            self.job = job
-
     def __repr__(self):
         return '<%s (code=%d)>' % (self.__class__.__name__, self.code)
 
 
 class JobEvent(SchedulerEvent):
     """
-    An event that concerns the execution of individual jobs.
+    An event that concerns a job.
 
-    :var job: the job instance in question
-    :var scheduled_run_time: the time when the job was scheduled to be run
-    :var retval: the return value of the successfully executed job
-    :var exception: the exception raised by the job
-    :var traceback: the traceback object associated with the exception
+    :ivar code: the type code of this event
+    :ivar job_id: identifier of the job in question
+    :ivar jobstore: alias of the job store containing the job in question
     """
-    def __init__(self, code, job, scheduled_run_time, retval=None,
-                 exception=None, traceback=None):
-        SchedulerEvent.__init__(self, code)
-        self.job = job
+
+    def __init__(self, code, job_id, jobstore):
+        super(JobEvent, self).__init__(code)
+        self.code = code
+        self.job_id = job_id
+        self.jobstore = jobstore
+
+
+class JobSubmissionEvent(JobEvent):
+    """
+    An event that concerns the submission of a job to its executor.
+
+    :ivar scheduled_run_times: a list of datetimes when the job was intended to run
+    """
+
+    def __init__(self, code, job_id, jobstore, scheduled_run_times):
+        super(JobSubmissionEvent, self).__init__(code, job_id, jobstore)
+        self.scheduled_run_times = scheduled_run_times
+
+
+class JobExecutionEvent(JobEvent):
+    """
+    An event that concerns the running of a job within its executor.
+
+    :ivar scheduled_run_time: the time when the job was scheduled to be run
+    :ivar retval: the return value of the successfully executed job
+    :ivar exception: the exception raised by the job
+    :ivar traceback: a formatted traceback for the exception
+    """
+
+    def __init__(self, code, job_id, jobstore, scheduled_run_time, retval=None, exception=None,
+                 traceback=None):
+        super(JobExecutionEvent, self).__init__(code, job_id, jobstore)
+        self.scheduled_run_time = scheduled_run_time
+        self.retval = retval
+        self.exception = exception

@@ -0,0 +1,49 @@
+from __future__ import absolute_import
+
+import sys
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+try:
+    from asyncio import iscoroutinefunction
+    from apscheduler.executors.base_py3 import run_coroutine_job
+except ImportError:
+    from trollius import iscoroutinefunction
+    run_coroutine_job = None
+
+
+class AsyncIOExecutor(BaseExecutor):
+    """
+    Runs jobs in the default executor of the event loop.
+
+    If the job function is a native coroutine function, it is scheduled to be run directly in the
+    event loop as soon as possible. All other functions are run in the event loop's default
+    executor which is usually a thread pool.
+
+    Plugin alias: ``asyncio``
+    """
+
+    def start(self, scheduler, alias):
+        super(AsyncIOExecutor, self).start(scheduler, alias)
+        self._eventloop = scheduler._eventloop
+
+    def _do_submit_job(self, job, run_times):
+        def callback(f):
+            try:
+                events = f.result()
+            except:
+                self._run_job_error(job.id, *sys.exc_info()[1:])
+            else:
+                self._run_job_success(job.id, events)
+
+        if iscoroutinefunction(job.func):
+            if run_coroutine_job is not None:
+                coro = run_coroutine_job(job, job._jobstore_alias, run_times, self._logger.name)
+                f = self._eventloop.create_task(coro)
+            else:
+                raise Exception('Executing coroutine based jobs is not supported with Trollius')
+        else:
+            f = self._eventloop.run_in_executor(None, run_job, job, job._jobstore_alias, run_times,
+                                                self._logger.name)
+
+        f.add_done_callback(callback)

@@ -0,0 +1,137 @@
+from abc import ABCMeta, abstractmethod
+from collections import defaultdict
+from datetime import datetime, timedelta
+from traceback import format_tb
+import logging
+import sys
+
+from pytz import utc
+import six
+
+from apscheduler.events import (
+    JobExecutionEvent, EVENT_JOB_MISSED, EVENT_JOB_ERROR, EVENT_JOB_EXECUTED)
+
+
+class MaxInstancesReachedError(Exception):
+    def __init__(self, job):
+        super(MaxInstancesReachedError, self).__init__(
+            'Job "%s" has already reached its maximum number of instances (%d)' %
+            (job.id, job.max_instances))
+
+
+class BaseExecutor(six.with_metaclass(ABCMeta, object)):
+    """Abstract base class that defines the interface that every executor must implement."""
+
+    _scheduler = None
+    _lock = None
+    _logger = logging.getLogger('apscheduler.executors')
+
+    def __init__(self):
+        super(BaseExecutor, self).__init__()
+        self._instances = defaultdict(lambda: 0)
+
+    def start(self, scheduler, alias):
+        """
+        Called by the scheduler when the scheduler is being started or when the executor is being
+        added to an already running scheduler.
+
+        :param apscheduler.schedulers.base.BaseScheduler scheduler: the scheduler that is starting
+            this executor
+        :param str|unicode alias: alias of this executor as it was assigned to the scheduler
+
+        """
+        self._scheduler = scheduler
+        self._lock = scheduler._create_lock()
+        self._logger = logging.getLogger('apscheduler.executors.%s' % alias)
+
+    def shutdown(self, wait=True):
+        """
+        Shuts down this executor.
+
+        :param bool wait: ``True`` to wait until all submitted jobs
+            have been executed
+        """
+
+    def submit_job(self, job, run_times):
+        """
+        Submits job for execution.
+
+        :param Job job: job to execute
+        :param list[datetime] run_times: list of datetimes specifying
+            when the job should have been run
+        :raises MaxInstancesReachedError: if the maximum number of
+            allowed instances for this job has been reached
+
+        """
+        assert self._lock is not None, 'This executor has not been started yet'
+        with self._lock:
+            if self._instances[job.id] >= job.max_instances:
+                raise MaxInstancesReachedError(job)
+
+            self._do_submit_job(job, run_times)
+            self._instances[job.id] += 1
+
+    @abstractmethod
+    def _do_submit_job(self, job, run_times):
+        """Performs the actual task of scheduling `run_job` to be called."""
+
+    def _run_job_success(self, job_id, events):
+        """
+        Called by the executor with the list of generated events when :func:`run_job` has been
+        successfully called.
+
+        """
+        with self._lock:
+            self._instances[job_id] -= 1
+            if self._instances[job_id] == 0:
+                del self._instances[job_id]
+
+        for event in events:
+            self._scheduler._dispatch_event(event)
+
+    def _run_job_error(self, job_id, exc, traceback=None):
+        """Called by the executor with the exception if there is an error calling `run_job`."""
+        with self._lock:
+            self._instances[job_id] -= 1
+            if self._instances[job_id] == 0:
+                del self._instances[job_id]
+
+        exc_info = (exc.__class__, exc, traceback)
+        self._logger.error('Error running job %s', job_id, exc_info=exc_info)
+
+
+def run_job(job, jobstore_alias, run_times, logger_name):
+    """
+    Called by executors to run the job. Returns a list of scheduler events to be dispatched by the
+    scheduler.
+
+    """
+    events = []
+    logger = logging.getLogger(logger_name)
+    for run_time in run_times:
+        # See if the job missed its run time window, and handle
+        # possible misfires accordingly
+        if job.misfire_grace_time is not None:
+            difference = datetime.now(utc) - run_time
+            grace_time = timedelta(seconds=job.misfire_grace_time)
+            if difference > grace_time:
+                events.append(JobExecutionEvent(EVENT_JOB_MISSED, job.id, jobstore_alias,
+                                                run_time))
+                logger.warning('Run time of job "%s" was missed by %s', job, difference)
+                continue
+
+        logger.info('Running job "%s" (scheduled at %s)', job, run_time)
+        try:
+            retval = job.func(*job.args, **job.kwargs)
+        except:
+            exc, tb = sys.exc_info()[1:]
+            formatted_tb = ''.join(format_tb(tb))
+            events.append(JobExecutionEvent(EVENT_JOB_ERROR, job.id, jobstore_alias, run_time,
+                                            exception=exc, traceback=formatted_tb))
+            logger.exception('Job "%s" raised an exception', job)
+        else:
+            events.append(JobExecutionEvent(EVENT_JOB_EXECUTED, job.id, jobstore_alias, run_time,
+                                            retval=retval))
+            logger.info('Job "%s" executed successfully', job)
+
+    return events

@@ -0,0 +1,41 @@
+import logging
+import sys
+from datetime import datetime, timedelta
+from traceback import format_tb
+
+from pytz import utc
+
+from apscheduler.events import (
+    JobExecutionEvent, EVENT_JOB_MISSED, EVENT_JOB_ERROR, EVENT_JOB_EXECUTED)
+
+
+async def run_coroutine_job(job, jobstore_alias, run_times, logger_name):
+    """Coroutine version of run_job()."""
+    events = []
+    logger = logging.getLogger(logger_name)
+    for run_time in run_times:
+        # See if the job missed its run time window, and handle possible misfires accordingly
+        if job.misfire_grace_time is not None:
+            difference = datetime.now(utc) - run_time
+            grace_time = timedelta(seconds=job.misfire_grace_time)
+            if difference > grace_time:
+                events.append(JobExecutionEvent(EVENT_JOB_MISSED, job.id, jobstore_alias,
+                                                run_time))
+                logger.warning('Run time of job "%s" was missed by %s', job, difference)
+                continue
+
+        logger.info('Running job "%s" (scheduled at %s)', job, run_time)
+        try:
+            retval = await job.func(*job.args, **job.kwargs)
+        except:
+            exc, tb = sys.exc_info()[1:]
+            formatted_tb = ''.join(format_tb(tb))
+            events.append(JobExecutionEvent(EVENT_JOB_ERROR, job.id, jobstore_alias, run_time,
+                                            exception=exc, traceback=formatted_tb))
+            logger.exception('Job "%s" raised an exception', job)
+        else:
+            events.append(JobExecutionEvent(EVENT_JOB_EXECUTED, job.id, jobstore_alias, run_time,
+                                            retval=retval))
+            logger.info('Job "%s" executed successfully', job)
+
+    return events

@@ -0,0 +1,20 @@
+import sys
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+
+class DebugExecutor(BaseExecutor):
+    """
+    A special executor that executes the target callable directly instead of deferring it to a
+    thread or process.
+
+    Plugin alias: ``debug``
+    """
+
+    def _do_submit_job(self, job, run_times):
+        try:
+            events = run_job(job, job._jobstore_alias, run_times, self._logger.name)
+        except:
+            self._run_job_error(job.id, *sys.exc_info()[1:])
+        else:
+            self._run_job_success(job.id, events)

@@ -0,0 +1,30 @@
+from __future__ import absolute_import
+import sys
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+
+try:
+    import gevent
+except ImportError:  # pragma: nocover
+    raise ImportError('GeventExecutor requires gevent installed')
+
+
+class GeventExecutor(BaseExecutor):
+    """
+    Runs jobs as greenlets.
+
+    Plugin alias: ``gevent``
+    """
+
+    def _do_submit_job(self, job, run_times):
+        def callback(greenlet):
+            try:
+                events = greenlet.get()
+            except:
+                self._run_job_error(job.id, *sys.exc_info()[1:])
+            else:
+                self._run_job_success(job.id, events)
+
+        gevent.spawn(run_job, job, job._jobstore_alias, run_times, self._logger.name).\
+            link(callback)

@@ -0,0 +1,54 @@
+from abc import abstractmethod
+import concurrent.futures
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+
+class BasePoolExecutor(BaseExecutor):
+    @abstractmethod
+    def __init__(self, pool):
+        super(BasePoolExecutor, self).__init__()
+        self._pool = pool
+
+    def _do_submit_job(self, job, run_times):
+        def callback(f):
+            exc, tb = (f.exception_info() if hasattr(f, 'exception_info') else
+                       (f.exception(), getattr(f.exception(), '__traceback__', None)))
+            if exc:
+                self._run_job_error(job.id, exc, tb)
+            else:
+                self._run_job_success(job.id, f.result())
+
+        f = self._pool.submit(run_job, job, job._jobstore_alias, run_times, self._logger.name)
+        f.add_done_callback(callback)
+
+    def shutdown(self, wait=True):
+        self._pool.shutdown(wait)
+
+
+class ThreadPoolExecutor(BasePoolExecutor):
+    """
+    An executor that runs jobs in a concurrent.futures thread pool.
+
+    Plugin alias: ``threadpool``
+
+    :param max_workers: the maximum number of spawned threads.
+    """
+
+    def __init__(self, max_workers=10):
+        pool = concurrent.futures.ThreadPoolExecutor(int(max_workers))
+        super(ThreadPoolExecutor, self).__init__(pool)
+
+
+class ProcessPoolExecutor(BasePoolExecutor):
+    """
+    An executor that runs jobs in a concurrent.futures process pool.
+
+    Plugin alias: ``processpool``
+
+    :param max_workers: the maximum number of spawned processes.
+    """
+
+    def __init__(self, max_workers=10):
+        pool = concurrent.futures.ProcessPoolExecutor(int(max_workers))
+        super(ProcessPoolExecutor, self).__init__(pool)

@@ -0,0 +1,54 @@
+from __future__ import absolute_import
+
+import sys
+from concurrent.futures import ThreadPoolExecutor
+
+from tornado.gen import convert_yielded
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+try:
+    from inspect import iscoroutinefunction
+    from apscheduler.executors.base_py3 import run_coroutine_job
+except ImportError:
+    def iscoroutinefunction(func):
+        return False
+
+
+class TornadoExecutor(BaseExecutor):
+    """
+    Runs jobs either in a thread pool or directly on the I/O loop.
+
+    If the job function is a native coroutine function, it is scheduled to be run directly in the
+    I/O loop as soon as possible. All other functions are run in a thread pool.
+
+    Plugin alias: ``tornado``
+
+    :param int max_workers: maximum number of worker threads in the thread pool
+    """
+
+    def __init__(self, max_workers=10):
+        super(TornadoExecutor, self).__init__()
+        self.executor = ThreadPoolExecutor(max_workers)
+
+    def start(self, scheduler, alias):
+        super(TornadoExecutor, self).start(scheduler, alias)
+        self._ioloop = scheduler._ioloop
+
+    def _do_submit_job(self, job, run_times):
+        def callback(f):
+            try:
+                events = f.result()
+            except:
+                self._run_job_error(job.id, *sys.exc_info()[1:])
+            else:
+                self._run_job_success(job.id, events)
+
+        if iscoroutinefunction(job.func):
+            f = run_coroutine_job(job, job._jobstore_alias, run_times, self._logger.name)
+        else:
+            f = self.executor.submit(run_job, job, job._jobstore_alias, run_times,
+                                     self._logger.name)
+
+        f = convert_yielded(f)
+        f.add_done_callback(callback)

@@ -0,0 +1,25 @@
+from __future__ import absolute_import
+
+from apscheduler.executors.base import BaseExecutor, run_job
+
+
+class TwistedExecutor(BaseExecutor):
+    """
+    Runs jobs in the reactor's thread pool.
+
+    Plugin alias: ``twisted``
+    """
+
+    def start(self, scheduler, alias):
+        super(TwistedExecutor, self).start(scheduler, alias)
+        self._reactor = scheduler._reactor
+
+    def _do_submit_job(self, job, run_times):
+        def callback(success, result):
+            if success:
+                self._run_job_success(job.id, result)
+            else:
+                self._run_job_error(job.id, result.value, result.tb)
+
+        self._reactor.getThreadPool().callInThreadWithCallback(
+            callback, run_job, job, job._jobstore_alias, run_times, self._logger.name)

@ -1,134 +1,289 @@
|
|||
"""
|
||||
Jobs represent scheduled tasks.
|
||||
"""
|
||||
from collections import Iterable, Mapping
|
||||
from uuid import uuid4
|
||||
|
||||
from threading import Lock
|
||||
from datetime import timedelta
|
||||
import six
|
||||
|
||||
from apscheduler.util import to_unicode, ref_to_obj, get_callable_name,\
|
||||
obj_to_ref
|
||||
|
||||
|
||||
class MaxInstancesReachedError(Exception):
|
||||
pass
|
||||
from apscheduler.triggers.base import BaseTrigger
|
||||
from apscheduler.util import (
|
||||
ref_to_obj, obj_to_ref, datetime_repr, repr_escape, get_callable_name, check_callable_args,
|
||||
convert_to_datetime)
|
||||
|
||||
|
||||
class Job(object):
|
||||
"""
|
||||
Encapsulates the actual Job along with its metadata. Job instances
|
||||
are created by the scheduler when adding jobs, and it should not be
|
||||
directly instantiated.
|
||||
Contains the options given when scheduling callables and its current schedule and other state.
|
||||
This class should never be instantiated by the user.
|
||||
|
||||
:param trigger: trigger that determines the execution times
|
||||
:param func: callable to call when the trigger is triggered
|
||||
:param args: list of positional arguments to call func with
|
||||
:param kwargs: dict of keyword arguments to call func with
|
||||
:param name: name of the job (optional)
|
||||
:param misfire_grace_time: seconds after the designated run time that
|
||||
the job is still allowed to be run
|
||||
:param coalesce: run once instead of many times if the scheduler determines
|
||||
that the job should be run more than once in succession
|
||||
:param max_runs: maximum number of times this job is allowed to be
|
||||
triggered
|
||||
:param max_instances: maximum number of concurrently running
|
||||
instances allowed for this job
|
||||
:var str id: the unique identifier of this job
|
||||
:var str name: the description of this job
|
||||
:var func: the callable to execute
|
||||
:var tuple|list args: positional arguments to the callable
|
||||
:var dict kwargs: keyword arguments to the callable
|
||||
:var bool coalesce: whether to only run the job once when several run times are due
|
||||
:var trigger: the trigger object that controls the schedule of this job
|
||||
:var str executor: the name of the executor that will run this job
|
||||
:var int misfire_grace_time: the time (in seconds) how much this job's execution is allowed to
|
||||
be late
|
||||
:var int max_instances: the maximum number of concurrently executing instances allowed for this
|
||||
job
|
||||
:var datetime.datetime next_run_time: the next scheduled run time of this job
|
||||
|
||||
.. note::
|
||||
The ``misfire_grace_time`` has some non-obvious effects on job execution. See the
|
||||
:ref:`missed-job-executions` section in the documentation for an in-depth explanation.
|
||||
"""
|
||||
id = None
|
||||
next_run_time = None
|
||||
|
||||
def __init__(self, trigger, func, args, kwargs, misfire_grace_time,
|
||||
coalesce, name=None, max_runs=None, max_instances=1):
|
||||
if not trigger:
|
||||
raise ValueError('The trigger must not be None')
|
||||
if not hasattr(func, '__call__'):
|
||||
raise TypeError('func must be callable')
|
||||
if not hasattr(args, '__getitem__'):
|
||||
raise TypeError('args must be a list-like object')
|
||||
if not hasattr(kwargs, '__getitem__'):
|
||||
raise TypeError('kwargs must be a dict-like object')
|
||||
if misfire_grace_time <= 0:
|
||||
raise ValueError('misfire_grace_time must be a positive value')
|
||||
if max_runs is not None and max_runs <= 0:
|
||||
raise ValueError('max_runs must be a positive value')
|
||||
if max_instances <= 0:
|
||||
raise ValueError('max_instances must be a positive value')
|
||||
__slots__ = ('_scheduler', '_jobstore_alias', 'id', 'trigger', 'executor', 'func', 'func_ref',
|
||||
'args', 'kwargs', 'name', 'misfire_grace_time', 'coalesce', 'max_instances',
|
||||
'next_run_time')
|
||||
|
||||
self._lock = Lock()
|
||||
def __init__(self, scheduler, id=None, **kwargs):
|
||||
super(Job, self).__init__()
|
||||
self._scheduler = scheduler
|
||||
self._jobstore_alias = None
|
||||
self._modify(id=id or uuid4().hex, **kwargs)
|
||||
|
||||
self.trigger = trigger
|
||||
self.func = func
|
||||
self.args = args
|
||||
self.kwargs = kwargs
|
||||
self.name = to_unicode(name or get_callable_name(func))
|
||||
self.misfire_grace_time = misfire_grace_time
|
||||
self.coalesce = coalesce
|
||||
self.max_runs = max_runs
|
||||
self.max_instances = max_instances
|
||||
self.runs = 0
|
||||
self.instances = 0
|
||||
|
||||
def compute_next_run_time(self, now):
|
||||
if self.runs == self.max_runs:
|
||||
self.next_run_time = None
|
||||
else:
|
||||
self.next_run_time = self.trigger.get_next_fire_time(now)
|
||||
|
||||
return self.next_run_time
|
||||
|
||||
def get_run_times(self, now):
|
||||
def modify(self, **changes):
|
||||
"""
|
||||
Computes the scheduled run times between ``next_run_time`` and ``now``.
|
||||
Makes the given changes to this job and saves it in the associated job store.
|
||||
|
||||
Accepted keyword arguments are the same as the variables on this class.
|
||||
|
||||
.. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.modify_job`
|
||||
|
||||
:return Job: this job instance
|
||||
|
||||
"""
|
||||
self._scheduler.modify_job(self.id, self._jobstore_alias, **changes)
|
||||
return self
|
||||
|
||||
def reschedule(self, trigger, **trigger_args):
|
||||
"""
|
||||
Shortcut for switching the trigger on this job.
|
||||
|
||||
.. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.reschedule_job`
|
||||
|
||||
:return Job: this job instance
|
||||
|
||||
"""
|
||||
self._scheduler.reschedule_job(self.id, self._jobstore_alias, trigger, **trigger_args)
|
||||
return self
|
||||
|
||||
def pause(self):
|
||||
"""
|
||||
Temporarily suspend the execution of this job.
|
||||
|
||||
.. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.pause_job`
|
||||
|
||||
:return Job: this job instance
|
||||
|
||||
"""
|
||||
self._scheduler.pause_job(self.id, self._jobstore_alias)
|
||||
return self
|
||||
|
||||
def resume(self):
|
||||
"""
|
||||
Resume the schedule of this job if previously paused.
|
||||
|
||||
.. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.resume_job`
|
||||
|
||||
:return Job: this job instance
|
||||
|
||||
"""
|
||||
self._scheduler.resume_job(self.id, self._jobstore_alias)
|
||||
return self
|
||||
|
||||
def remove(self):
|
||||
"""
|
||||
Unschedules this job and removes it from its associated job store.
|
||||
|
||||
.. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job`
|
||||
|
||||
"""
|
||||
self._scheduler.remove_job(self.id, self._jobstore_alias)
|
||||
|
||||
@property
|
||||
def pending(self):
|
||||
"""
|
||||
Returns ``True`` if the referenced job is still waiting to be added to its designated job
|
||||
store.
|
||||
|
||||
"""
|
||||
return self._jobstore_alias is None
|
||||
|
||||
#
|
||||
# Private API
|
||||
#
|
||||
|
||||
def _get_run_times(self, now):
|
||||
"""
|
||||
Computes the scheduled run times between ``next_run_time`` and ``now`` (inclusive).
|
||||
|
||||
:type now: datetime.datetime
|
||||
:rtype: list[datetime.datetime]
|
||||
|
||||
"""
|
||||
run_times = []
|
||||
run_time = self.next_run_time
|
||||
increment = timedelta(microseconds=1)
|
||||
while ((not self.max_runs or self.runs < self.max_runs) and
|
||||
run_time and run_time <= now):
|
||||
run_times.append(run_time)
|
||||
run_time = self.trigger.get_next_fire_time(run_time + increment)
|
||||
next_run_time = self.next_run_time
|
||||
while next_run_time and next_run_time <= now:
|
||||
run_times.append(next_run_time)
|
||||
next_run_time = self.trigger.get_next_fire_time(next_run_time, now)
|
||||
|
||||
return run_times
|
||||
|
||||
def add_instance(self):
|
||||
self._lock.acquire()
|
||||
try:
|
||||
if self.instances == self.max_instances:
|
||||
raise MaxInstancesReachedError
|
||||
self.instances += 1
|
||||
finally:
|
||||
self._lock.release()
|
||||
def _modify(self, **changes):
|
||||
"""
|
||||
Validates the changes to the Job and makes the modifications if and only if all of them
|
||||
validate.
|
||||
|
||||
def remove_instance(self):
|
||||
self._lock.acquire()
|
||||
try:
|
||||
assert self.instances > 0, 'Already at 0 instances'
|
||||
self.instances -= 1
|
||||
finally:
|
||||
self._lock.release()
|
||||
"""
|
||||
approved = {}
|
||||
|
||||
if 'id' in changes:
|
||||
value = changes.pop('id')
|
||||
if not isinstance(value, six.string_types):
|
||||
raise TypeError("id must be a nonempty string")
|
||||
if hasattr(self, 'id'):
|
||||
raise ValueError('The job ID may not be changed')
|
||||
approved['id'] = value
|
||||
|
||||
if 'func' in changes or 'args' in changes or 'kwargs' in changes:
|
||||
func = changes.pop('func') if 'func' in changes else self.func
|
||||
args = changes.pop('args') if 'args' in changes else self.args
|
||||
kwargs = changes.pop('kwargs') if 'kwargs' in changes else self.kwargs
|
||||
|
||||
if isinstance(func, six.string_types):
|
||||
func_ref = func
|
||||
func = ref_to_obj(func)
|
||||
elif callable(func):
|
||||
try:
|
||||
func_ref = obj_to_ref(func)
|
||||
except ValueError:
|
||||
# If this happens, this Job won't be serializable
|
||||
func_ref = None
|
||||
else:
|
||||
raise TypeError('func must be a callable or a textual reference to one')
|
||||
|
||||
if not hasattr(self, 'name') and changes.get('name', None) is None:
|
||||
changes['name'] = get_callable_name(func)
|
||||
|
||||
if isinstance(args, six.string_types) or not isinstance(args, Iterable):
|
||||
raise TypeError('args must be a non-string iterable')
|
||||
if isinstance(kwargs, six.string_types) or not isinstance(kwargs, Mapping):
|
||||
raise TypeError('kwargs must be a dict-like object')
|
||||
|
||||
check_callable_args(func, args, kwargs)
|
||||
|
||||
approved['func'] = func
|
||||
approved['func_ref'] = func_ref
|
||||
approved['args'] = args
|
||||
approved['kwargs'] = kwargs
|
||||
|
||||
if 'name' in changes:
|
||||
value = changes.pop('name')
|
||||
if not value or not isinstance(value, six.string_types):
|
||||
raise TypeError("name must be a nonempty string")
|
||||
approved['name'] = value
|
||||
|
||||
if 'misfire_grace_time' in changes:
|
||||
value = changes.pop('misfire_grace_time')
|
||||
if value is not None and (not isinstance(value, six.integer_types) or value <= 0):
|
||||
raise TypeError('misfire_grace_time must be either None or a positive integer')
|
||||
approved['misfire_grace_time'] = value
|
||||
|
||||
if 'coalesce' in changes:
|
||||
value = bool(changes.pop('coalesce'))
|
||||
            approved['coalesce'] = value

        if 'max_instances' in changes:
            value = changes.pop('max_instances')
            if not isinstance(value, six.integer_types) or value <= 0:
                raise TypeError('max_instances must be a positive integer')
            approved['max_instances'] = value

        if 'trigger' in changes:
            trigger = changes.pop('trigger')
            if not isinstance(trigger, BaseTrigger):
                raise TypeError('Expected a trigger instance, got %s instead' %
                                trigger.__class__.__name__)

            approved['trigger'] = trigger

        if 'executor' in changes:
            value = changes.pop('executor')
            if not isinstance(value, six.string_types):
                raise TypeError('executor must be a string')
            approved['executor'] = value

        if 'next_run_time' in changes:
            value = changes.pop('next_run_time')
            approved['next_run_time'] = convert_to_datetime(value, self._scheduler.timezone,
                                                            'next_run_time')

        if changes:
            raise AttributeError('The following are not modifiable attributes of Job: %s' %
                                 ', '.join(changes))

        for key, value in six.iteritems(approved):
            setattr(self, key, value)

    def __getstate__(self):
        # Prevents the unwanted pickling of transient or unpicklable variables
        state = self.__dict__.copy()
        state.pop('instances', None)
        state.pop('func', None)
        state.pop('_lock', None)
        state['func_ref'] = obj_to_ref(self.func)
        return state
        # Don't allow this Job to be serialized if the function reference could not be determined
        if not self.func_ref:
            raise ValueError(
                'This Job cannot be serialized since the reference to its callable (%r) could not '
                'be determined. Consider giving a textual reference (module:function name) '
                'instead.' % (self.func,))

        return {
            'version': 1,
            'id': self.id,
            'func': self.func_ref,
            'trigger': self.trigger,
            'executor': self.executor,
            'args': self.args,
            'kwargs': self.kwargs,
            'name': self.name,
            'misfire_grace_time': self.misfire_grace_time,
            'coalesce': self.coalesce,
            'max_instances': self.max_instances,
            'next_run_time': self.next_run_time
        }

    def __setstate__(self, state):
        state['instances'] = 0
        state['func'] = ref_to_obj(state.pop('func_ref'))
        state['_lock'] = Lock()
        self.__dict__ = state
        if state.get('version', 1) > 1:
            raise ValueError('Job has version %s, but only version 1 can be handled' %
                             state['version'])

        self.id = state['id']
        self.func_ref = state['func']
        self.func = ref_to_obj(self.func_ref)
        self.trigger = state['trigger']
        self.executor = state['executor']
        self.args = state['args']
        self.kwargs = state['kwargs']
        self.name = state['name']
        self.misfire_grace_time = state['misfire_grace_time']
        self.coalesce = state['coalesce']
        self.max_instances = state['max_instances']
        self.next_run_time = state['next_run_time']

    def __eq__(self, other):
        if isinstance(other, Job):
            return self.id is not None and other.id == self.id or self is other
            return self.id == other.id
        return NotImplemented

    def __repr__(self):
        return '<Job (name=%s, trigger=%s)>' % (self.name, repr(self.trigger))
        return '<Job (id=%s name=%s)>' % (repr_escape(self.id), repr_escape(self.name))

    def __str__(self):
        return '%s (trigger: %s, next run at: %s)' % (self.name,
                                                      str(self.trigger), str(self.next_run_time))
        return repr_escape(self.__unicode__())

    def __unicode__(self):
        if hasattr(self, 'next_run_time'):
            status = ('next run at: ' + datetime_repr(self.next_run_time) if
                      self.next_run_time else 'paused')
        else:
            status = 'pending'

        return u'%s (trigger: %s, %s)' % (self.name, self.trigger, status)
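As a rough usage sketch (an editorial illustration, not part of this changeset), the attribute validation above is what a scheduler's modify_job() call ultimately runs every change through. The job id 'nightly_backup' and the callable do_backup below are hypothetical:

from apscheduler.schedulers.background import BackgroundScheduler

def do_backup():
    print('backing up')

scheduler = BackgroundScheduler()
scheduler.add_job(do_backup, 'interval', hours=24, id='nightly_backup')
scheduler.start()

# Each keyword below passes through the validation shown above;
# an unrecognized attribute raises AttributeError.
scheduler.modify_job('nightly_backup', max_instances=2, coalesce=True)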
@@ -1,25 +1,143 @@
"""
Abstract base class that provides the interface needed by all job stores.
Job store methods are also documented here.
"""
from abc import ABCMeta, abstractmethod
import logging

import six


class JobStore(object):
    def add_job(self, job):
        """Adds the given job to this store."""
        raise NotImplementedError
class JobLookupError(KeyError):
    """Raised when the job store cannot find a job for update or removal."""

    def update_job(self, job):
        """Persists the running state of the given job."""
        raise NotImplementedError
    def __init__(self, job_id):
        super(JobLookupError, self).__init__(u'No job by the id of %s was found' % job_id)

    def remove_job(self, job):
        """Removes the given job from this store."""
        raise NotImplementedError

    def load_jobs(self):
        """Loads jobs from this store into memory."""
        raise NotImplementedError
class ConflictingIdError(KeyError):
    """Raised when the uniqueness of job IDs is being violated."""

    def close(self):
    def __init__(self, job_id):
        super(ConflictingIdError, self).__init__(
            u'Job identifier (%s) conflicts with an existing job' % job_id)


class TransientJobError(ValueError):
    """
    Raised when an attempt to add transient (with no func_ref) job to a persistent job store is
    detected.
    """

    def __init__(self, job_id):
        super(TransientJobError, self).__init__(
            u'Job (%s) cannot be added to this job store because a reference to the callable '
            u'could not be determined.' % job_id)


class BaseJobStore(six.with_metaclass(ABCMeta)):
    """Abstract base class that defines the interface that every job store must implement."""

    _scheduler = None
    _alias = None
    _logger = logging.getLogger('apscheduler.jobstores')

    def start(self, scheduler, alias):
        """
        Called by the scheduler when the scheduler is being started or when the job store is being
        added to an already running scheduler.

        :param apscheduler.schedulers.base.BaseScheduler scheduler: the scheduler that is starting
            this job store
        :param str|unicode alias: alias of this job store as it was assigned to the scheduler
        """

        self._scheduler = scheduler
        self._alias = alias
        self._logger = logging.getLogger('apscheduler.jobstores.%s' % alias)

    def shutdown(self):
        """Frees any resources still bound to this job store."""

    def _fix_paused_jobs_sorting(self, jobs):
        for i, job in enumerate(jobs):
            if job.next_run_time is not None:
                if i > 0:
                    paused_jobs = jobs[:i]
                    del jobs[:i]
                    jobs.extend(paused_jobs)
                break

    @abstractmethod
    def lookup_job(self, job_id):
        """
        Returns a specific job, or ``None`` if it isn't found.

        The job store is responsible for setting the ``scheduler`` and ``jobstore`` attributes of
        the returned job to point to the scheduler and itself, respectively.

        :param str|unicode job_id: identifier of the job
        :rtype: Job
        """

    @abstractmethod
    def get_due_jobs(self, now):
        """
        Returns the list of jobs that have ``next_run_time`` earlier or equal to ``now``.
        The returned jobs must be sorted by next run time (ascending).

        :param datetime.datetime now: the current (timezone aware) datetime
        :rtype: list[Job]
        """

    @abstractmethod
    def get_next_run_time(self):
        """
        Returns the earliest run time of all the jobs stored in this job store, or ``None`` if
        there are no active jobs.

        :rtype: datetime.datetime
        """

    @abstractmethod
    def get_all_jobs(self):
        """
        Returns a list of all jobs in this job store.
        The returned jobs should be sorted by next run time (ascending).
        Paused jobs (next_run_time == None) should be sorted last.

        The job store is responsible for setting the ``scheduler`` and ``jobstore`` attributes of
        the returned jobs to point to the scheduler and itself, respectively.

        :rtype: list[Job]
        """

    @abstractmethod
    def add_job(self, job):
        """
        Adds the given job to this store.

        :param Job job: the job to add
        :raises ConflictingIdError: if there is another job in this store with the same ID
        """

    @abstractmethod
    def update_job(self, job):
        """
        Replaces the job in the store with the given newer version.

        :param Job job: the job to update
        :raises JobLookupError: if the job does not exist
        """

    @abstractmethod
    def remove_job(self, job_id):
        """
        Removes the given job from this store.

        :param str|unicode job_id: identifier of the job
        :raises JobLookupError: if the job does not exist
        """

    @abstractmethod
    def remove_all_jobs(self):
        """Removes all jobs from this store."""

    def __repr__(self):
        return '<%s>' % self.__class__.__name__
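The _fix_paused_jobs_sorting() helper above rotates paused jobs (those whose next_run_time is None) from the front of a backend's result list to the end. A minimal self-contained sketch of that rotation (the FakeJob class is illustrative only):

class FakeJob(object):
    def __init__(self, name, next_run_time):
        self.name = name
        self.next_run_time = next_run_time

jobs = [FakeJob('paused', None), FakeJob('due', 1)]
# inline copy of the helper's logic:
for i, job in enumerate(jobs):
    if job.next_run_time is not None:
        if i > 0:
            paused = jobs[:i]
            del jobs[:i]
            jobs.extend(paused)
        break
print([j.name for j in jobs])  # ['due', 'paused']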
@@ -0,0 +1,108 @@
from __future__ import absolute_import

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import datetime_to_utc_timestamp


class MemoryJobStore(BaseJobStore):
    """
    Stores jobs in an array in RAM. Provides no persistence support.

    Plugin alias: ``memory``
    """

    def __init__(self):
        super(MemoryJobStore, self).__init__()
        # list of (job, timestamp), sorted by next_run_time and job id (ascending)
        self._jobs = []
        self._jobs_index = {}  # id -> (job, timestamp) lookup table

    def lookup_job(self, job_id):
        return self._jobs_index.get(job_id, (None, None))[0]

    def get_due_jobs(self, now):
        now_timestamp = datetime_to_utc_timestamp(now)
        pending = []
        for job, timestamp in self._jobs:
            if timestamp is None or timestamp > now_timestamp:
                break
            pending.append(job)

        return pending

    def get_next_run_time(self):
        return self._jobs[0][0].next_run_time if self._jobs else None

    def get_all_jobs(self):
        return [j[0] for j in self._jobs]

    def add_job(self, job):
        if job.id in self._jobs_index:
            raise ConflictingIdError(job.id)

        timestamp = datetime_to_utc_timestamp(job.next_run_time)
        index = self._get_job_index(timestamp, job.id)
        self._jobs.insert(index, (job, timestamp))
        self._jobs_index[job.id] = (job, timestamp)

    def update_job(self, job):
        old_job, old_timestamp = self._jobs_index.get(job.id, (None, None))
        if old_job is None:
            raise JobLookupError(job.id)

        # If the next run time has not changed, simply replace the job in its present index.
        # Otherwise, reinsert the job to the list to preserve the ordering.
        old_index = self._get_job_index(old_timestamp, old_job.id)
        new_timestamp = datetime_to_utc_timestamp(job.next_run_time)
        if old_timestamp == new_timestamp:
            self._jobs[old_index] = (job, new_timestamp)
        else:
            del self._jobs[old_index]
            new_index = self._get_job_index(new_timestamp, job.id)
            self._jobs.insert(new_index, (job, new_timestamp))

        self._jobs_index[old_job.id] = (job, new_timestamp)

    def remove_job(self, job_id):
        job, timestamp = self._jobs_index.get(job_id, (None, None))
        if job is None:
            raise JobLookupError(job_id)

        index = self._get_job_index(timestamp, job_id)
        del self._jobs[index]
        del self._jobs_index[job.id]

    def remove_all_jobs(self):
        self._jobs = []
        self._jobs_index = {}

    def shutdown(self):
        self.remove_all_jobs()

    def _get_job_index(self, timestamp, job_id):
        """
        Returns the index of the given job, or if it's not found, the index where the job should be
        inserted based on the given timestamp.

        :type timestamp: int
        :type job_id: str

        """
        lo, hi = 0, len(self._jobs)
        timestamp = float('inf') if timestamp is None else timestamp
        while lo < hi:
            mid = (lo + hi) // 2
            mid_job, mid_timestamp = self._jobs[mid]
            mid_timestamp = float('inf') if mid_timestamp is None else mid_timestamp
            if mid_timestamp > timestamp:
                hi = mid
            elif mid_timestamp < timestamp:
                lo = mid + 1
            elif mid_job.id > job_id:
                hi = mid
            elif mid_job.id < job_id:
                lo = mid + 1
            else:
                return mid

        return lo
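A brief usage sketch (illustrative, not part of the diff): the memory store is what a scheduler falls back to by default, but it can also be passed explicitly:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.memory import MemoryJobStore

# Jobs live only for the lifetime of the process; nothing is persisted.
scheduler = BackgroundScheduler(jobstores={'default': MemoryJobStore()})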
@@ -0,0 +1,141 @@
from __future__ import absolute_import
import warnings

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    from bson.binary import Binary
    from pymongo.errors import DuplicateKeyError
    from pymongo import MongoClient, ASCENDING
except ImportError:  # pragma: nocover
    raise ImportError('MongoDBJobStore requires PyMongo installed')


class MongoDBJobStore(BaseJobStore):
    """
    Stores jobs in a MongoDB database. Any leftover keyword arguments are directly passed to
    pymongo's `MongoClient
    <http://api.mongodb.org/python/current/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient>`_.

    Plugin alias: ``mongodb``

    :param str database: database to store jobs in
    :param str collection: collection to store jobs in
    :param client: a :class:`~pymongo.mongo_client.MongoClient` instance to use instead of
        providing connection arguments
    :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the
        highest available
    """

    def __init__(self, database='apscheduler', collection='jobs', client=None,
                 pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args):
        super(MongoDBJobStore, self).__init__()
        self.pickle_protocol = pickle_protocol

        if not database:
            raise ValueError('The "database" parameter must not be empty')
        if not collection:
            raise ValueError('The "collection" parameter must not be empty')

        if client:
            self.client = maybe_ref(client)
        else:
            connect_args.setdefault('w', 1)
            self.client = MongoClient(**connect_args)

        self.collection = self.client[database][collection]

    def start(self, scheduler, alias):
        super(MongoDBJobStore, self).start(scheduler, alias)
        self.collection.ensure_index('next_run_time', sparse=True)

    @property
    def connection(self):
        warnings.warn('The "connection" member is deprecated -- use "client" instead',
                      DeprecationWarning)
        return self.client

    def lookup_job(self, job_id):
        document = self.collection.find_one(job_id, ['job_state'])
        return self._reconstitute_job(document['job_state']) if document else None

    def get_due_jobs(self, now):
        timestamp = datetime_to_utc_timestamp(now)
        return self._get_jobs({'next_run_time': {'$lte': timestamp}})

    def get_next_run_time(self):
        document = self.collection.find_one({'next_run_time': {'$ne': None}},
                                            projection=['next_run_time'],
                                            sort=[('next_run_time', ASCENDING)])
        return utc_timestamp_to_datetime(document['next_run_time']) if document else None

    def get_all_jobs(self):
        jobs = self._get_jobs({})
        self._fix_paused_jobs_sorting(jobs)
        return jobs

    def add_job(self, job):
        try:
            self.collection.insert({
                '_id': job.id,
                'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
                'job_state': Binary(pickle.dumps(job.__getstate__(), self.pickle_protocol))
            })
        except DuplicateKeyError:
            raise ConflictingIdError(job.id)

    def update_job(self, job):
        changes = {
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': Binary(pickle.dumps(job.__getstate__(), self.pickle_protocol))
        }
        result = self.collection.update({'_id': job.id}, {'$set': changes})
        if result and result['n'] == 0:
            raise JobLookupError(job.id)

    def remove_job(self, job_id):
        result = self.collection.remove(job_id)
        if result and result['n'] == 0:
            raise JobLookupError(job_id)

    def remove_all_jobs(self):
        self.collection.remove()

    def shutdown(self):
        self.client.close()

    def _reconstitute_job(self, job_state):
        job_state = pickle.loads(job_state)
        job = Job.__new__(Job)
        job.__setstate__(job_state)
        job._scheduler = self._scheduler
        job._jobstore_alias = self._alias
        return job

    def _get_jobs(self, conditions):
        jobs = []
        failed_job_ids = []
        for document in self.collection.find(conditions, ['_id', 'job_state'],
                                             sort=[('next_run_time', ASCENDING)]):
            try:
                jobs.append(self._reconstitute_job(document['job_state']))
            except:
                self._logger.exception('Unable to restore job "%s" -- removing it',
                                       document['_id'])
                failed_job_ids.append(document['_id'])

        # Remove all the jobs we failed to restore
        if failed_job_ids:
            self.collection.remove({'_id': {'$in': failed_job_ids}})

        return jobs

    def __repr__(self):
        return '<%s (client=%s)>' % (self.__class__.__name__, self.client)
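A usage sketch (illustrative, not part of the diff; assumes a reachable mongod). Extra keyword arguments such as host and port fall through to MongoClient, as the docstring above notes:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.mongodb import MongoDBJobStore

jobstores = {'default': MongoDBJobStore(database='apscheduler', collection='jobs',
                                        host='localhost', port=27017)}
scheduler = BackgroundScheduler(jobstores=jobstores)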
@@ -1,84 +0,0 @@
"""
Stores jobs in a MongoDB database.
"""
import logging

from apscheduler.jobstores.base import JobStore
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    from bson.binary import Binary
    from pymongo.connection import Connection
except ImportError:  # pragma: nocover
    raise ImportError('MongoDBJobStore requires PyMongo installed')

logger = logging.getLogger(__name__)


class MongoDBJobStore(JobStore):
    def __init__(self, database='apscheduler', collection='jobs',
                 connection=None, pickle_protocol=pickle.HIGHEST_PROTOCOL,
                 **connect_args):
        self.jobs = []
        self.pickle_protocol = pickle_protocol

        if not database:
            raise ValueError('The "database" parameter must not be empty')
        if not collection:
            raise ValueError('The "collection" parameter must not be empty')

        if connection:
            self.connection = connection
        else:
            self.connection = Connection(**connect_args)

        self.collection = self.connection[database][collection]

    def add_job(self, job):
        job_dict = job.__getstate__()
        job_dict['trigger'] = Binary(pickle.dumps(job.trigger,
                                                  self.pickle_protocol))
        job_dict['args'] = Binary(pickle.dumps(job.args,
                                               self.pickle_protocol))
        job_dict['kwargs'] = Binary(pickle.dumps(job.kwargs,
                                                 self.pickle_protocol))
        job.id = self.collection.insert(job_dict)
        self.jobs.append(job)

    def remove_job(self, job):
        self.collection.remove(job.id)
        self.jobs.remove(job)

    def load_jobs(self):
        jobs = []
        for job_dict in self.collection.find():
            try:
                job = Job.__new__(Job)
                job_dict['id'] = job_dict.pop('_id')
                job_dict['trigger'] = pickle.loads(job_dict['trigger'])
                job_dict['args'] = pickle.loads(job_dict['args'])
                job_dict['kwargs'] = pickle.loads(job_dict['kwargs'])
                job.__setstate__(job_dict)
                jobs.append(job)
            except Exception:
                job_name = job_dict.get('name', '(unknown)')
                logger.exception('Unable to restore job "%s"', job_name)
        self.jobs = jobs

    def update_job(self, job):
        spec = {'_id': job.id}
        document = {'$set': {'next_run_time': job.next_run_time},
                    '$inc': {'runs': 1}}
        self.collection.update(spec, document)

    def close(self):
        self.connection.disconnect()

    def __repr__(self):
        connection = self.collection.database.connection
        return '<%s (connection=%s)>' % (self.__class__.__name__, connection)
@@ -1,25 +0,0 @@
"""
Stores jobs in an array in RAM. Provides no persistence support.
"""

from apscheduler.jobstores.base import JobStore


class RAMJobStore(JobStore):
    def __init__(self):
        self.jobs = []

    def add_job(self, job):
        self.jobs.append(job)

    def update_job(self, job):
        pass

    def remove_job(self, job):
        self.jobs.remove(job)

    def load_jobs(self):
        pass

    def __repr__(self):
        return '<%s>' % (self.__class__.__name__)
@@ -0,0 +1,146 @@
from __future__ import absolute_import
from datetime import datetime

from pytz import utc
import six

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import datetime_to_utc_timestamp, utc_timestamp_to_datetime
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    from redis import StrictRedis
except ImportError:  # pragma: nocover
    raise ImportError('RedisJobStore requires redis installed')


class RedisJobStore(BaseJobStore):
    """
    Stores jobs in a Redis database. Any leftover keyword arguments are directly passed to redis's
    :class:`~redis.StrictRedis`.

    Plugin alias: ``redis``

    :param int db: the database number to store jobs in
    :param str jobs_key: key to store jobs in
    :param str run_times_key: key to store the jobs' run times in
    :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the
        highest available
    """

    def __init__(self, db=0, jobs_key='apscheduler.jobs', run_times_key='apscheduler.run_times',
                 pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args):
        super(RedisJobStore, self).__init__()

        if db is None:
            raise ValueError('The "db" parameter must not be empty')
        if not jobs_key:
            raise ValueError('The "jobs_key" parameter must not be empty')
        if not run_times_key:
            raise ValueError('The "run_times_key" parameter must not be empty')

        self.pickle_protocol = pickle_protocol
        self.jobs_key = jobs_key
        self.run_times_key = run_times_key
        self.redis = StrictRedis(db=int(db), **connect_args)

    def lookup_job(self, job_id):
        job_state = self.redis.hget(self.jobs_key, job_id)
        return self._reconstitute_job(job_state) if job_state else None

    def get_due_jobs(self, now):
        timestamp = datetime_to_utc_timestamp(now)
        job_ids = self.redis.zrangebyscore(self.run_times_key, 0, timestamp)
        if job_ids:
            job_states = self.redis.hmget(self.jobs_key, *job_ids)
            return self._reconstitute_jobs(six.moves.zip(job_ids, job_states))
        return []

    def get_next_run_time(self):
        next_run_time = self.redis.zrange(self.run_times_key, 0, 0, withscores=True)
        if next_run_time:
            return utc_timestamp_to_datetime(next_run_time[0][1])

    def get_all_jobs(self):
        job_states = self.redis.hgetall(self.jobs_key)
        jobs = self._reconstitute_jobs(six.iteritems(job_states))
        paused_sort_key = datetime(9999, 12, 31, tzinfo=utc)
        return sorted(jobs, key=lambda job: job.next_run_time or paused_sort_key)

    def add_job(self, job):
        if self.redis.hexists(self.jobs_key, job.id):
            raise ConflictingIdError(job.id)

        with self.redis.pipeline() as pipe:
            pipe.multi()
            pipe.hset(self.jobs_key, job.id, pickle.dumps(job.__getstate__(),
                                                          self.pickle_protocol))
            if job.next_run_time:
                pipe.zadd(self.run_times_key, datetime_to_utc_timestamp(job.next_run_time), job.id)
            pipe.execute()

    def update_job(self, job):
        if not self.redis.hexists(self.jobs_key, job.id):
            raise JobLookupError(job.id)

        with self.redis.pipeline() as pipe:
            pipe.hset(self.jobs_key, job.id, pickle.dumps(job.__getstate__(),
                                                          self.pickle_protocol))
            if job.next_run_time:
                pipe.zadd(self.run_times_key, datetime_to_utc_timestamp(job.next_run_time), job.id)
            else:
                pipe.zrem(self.run_times_key, job.id)
            pipe.execute()

    def remove_job(self, job_id):
        if not self.redis.hexists(self.jobs_key, job_id):
            raise JobLookupError(job_id)

        with self.redis.pipeline() as pipe:
            pipe.hdel(self.jobs_key, job_id)
            pipe.zrem(self.run_times_key, job_id)
            pipe.execute()

    def remove_all_jobs(self):
        with self.redis.pipeline() as pipe:
            pipe.delete(self.jobs_key)
            pipe.delete(self.run_times_key)
            pipe.execute()

    def shutdown(self):
        self.redis.connection_pool.disconnect()

    def _reconstitute_job(self, job_state):
        job_state = pickle.loads(job_state)
        job = Job.__new__(Job)
        job.__setstate__(job_state)
        job._scheduler = self._scheduler
        job._jobstore_alias = self._alias
        return job

    def _reconstitute_jobs(self, job_states):
        jobs = []
        failed_job_ids = []
        for job_id, job_state in job_states:
            try:
                jobs.append(self._reconstitute_job(job_state))
            except:
                self._logger.exception('Unable to restore job "%s" -- removing it', job_id)
                failed_job_ids.append(job_id)

        # Remove all the jobs we failed to restore
        if failed_job_ids:
            with self.redis.pipeline() as pipe:
                pipe.hdel(self.jobs_key, *failed_job_ids)
                pipe.zrem(self.run_times_key, *failed_job_ids)
                pipe.execute()

        return jobs

    def __repr__(self):
        return '<%s>' % self.__class__.__name__
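A usage sketch (illustrative, not part of the diff; assumes a local Redis server). The db, jobs_key and run_times_key keywords map to the constructor above; anything else (here host) goes straight to StrictRedis:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.redis import RedisJobStore

scheduler = BackgroundScheduler(
    jobstores={'default': RedisJobStore(db=1, host='localhost')})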
@@ -0,0 +1,153 @@
from __future__ import absolute_import

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    import rethinkdb as r
except ImportError:  # pragma: nocover
    raise ImportError('RethinkDBJobStore requires rethinkdb installed')


class RethinkDBJobStore(BaseJobStore):
    """
    Stores jobs in a RethinkDB database. Any leftover keyword arguments are directly passed to
    rethinkdb's `RethinkdbClient <http://www.rethinkdb.com/api/#connect>`_.

    Plugin alias: ``rethinkdb``

    :param str database: database to store jobs in
    :param str table: table to store jobs in
    :param client: a :class:`rethinkdb.net.Connection` instance to use instead of providing
        connection arguments
    :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the
        highest available
    """

    def __init__(self, database='apscheduler', table='jobs', client=None,
                 pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args):
        super(RethinkDBJobStore, self).__init__()

        if not database:
            raise ValueError('The "database" parameter must not be empty')
        if not table:
            raise ValueError('The "table" parameter must not be empty')

        self.database = database
        self.table = table
        self.client = client
        self.pickle_protocol = pickle_protocol
        self.connect_args = connect_args
        self.conn = None

    def start(self, scheduler, alias):
        super(RethinkDBJobStore, self).start(scheduler, alias)

        if self.client:
            self.conn = maybe_ref(self.client)
        else:
            self.conn = r.connect(db=self.database, **self.connect_args)

        if self.database not in r.db_list().run(self.conn):
            r.db_create(self.database).run(self.conn)

        if self.table not in r.table_list().run(self.conn):
            r.table_create(self.table).run(self.conn)

        if 'next_run_time' not in r.table(self.table).index_list().run(self.conn):
            r.table(self.table).index_create('next_run_time').run(self.conn)

        self.table = r.db(self.database).table(self.table)

    def lookup_job(self, job_id):
        results = list(self.table.get_all(job_id).pluck('job_state').run(self.conn))
        return self._reconstitute_job(results[0]['job_state']) if results else None

    def get_due_jobs(self, now):
        return self._get_jobs(r.row['next_run_time'] <= datetime_to_utc_timestamp(now))

    def get_next_run_time(self):
        results = list(
            self.table
            .filter(r.row['next_run_time'] != None)  # flake8: noqa
            .order_by(r.asc('next_run_time'))
            .map(lambda x: x['next_run_time'])
            .limit(1)
            .run(self.conn)
        )
        return utc_timestamp_to_datetime(results[0]) if results else None

    def get_all_jobs(self):
        jobs = self._get_jobs()
        self._fix_paused_jobs_sorting(jobs)
        return jobs

    def add_job(self, job):
        job_dict = {
            'id': job.id,
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': r.binary(pickle.dumps(job.__getstate__(), self.pickle_protocol))
        }
        results = self.table.insert(job_dict).run(self.conn)
        if results['errors'] > 0:
            raise ConflictingIdError(job.id)

    def update_job(self, job):
        changes = {
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': r.binary(pickle.dumps(job.__getstate__(), self.pickle_protocol))
        }
        results = self.table.get_all(job.id).update(changes).run(self.conn)
        skipped = False in map(lambda x: results[x] == 0, results.keys())
        if results['skipped'] > 0 or results['errors'] > 0 or not skipped:
            raise JobLookupError(job.id)

    def remove_job(self, job_id):
        results = self.table.get_all(job_id).delete().run(self.conn)
        if results['deleted'] + results['skipped'] != 1:
            raise JobLookupError(job_id)

    def remove_all_jobs(self):
        self.table.delete().run(self.conn)

    def shutdown(self):
        self.conn.close()

    def _reconstitute_job(self, job_state):
        job_state = pickle.loads(job_state)
        job = Job.__new__(Job)
        job.__setstate__(job_state)
        job._scheduler = self._scheduler
        job._jobstore_alias = self._alias
        return job

    def _get_jobs(self, predicate=None):
        jobs = []
        failed_job_ids = []
        query = (self.table.filter(r.row['next_run_time'] != None).filter(predicate) if
                 predicate else self.table)
        query = query.order_by('next_run_time', 'id').pluck('id', 'job_state')

        for document in query.run(self.conn):
            try:
                jobs.append(self._reconstitute_job(document['job_state']))
            except:
                self._logger.exception('Unable to restore job "%s" -- removing it', document['id'])
                failed_job_ids.append(document['id'])

        # Remove all the jobs we failed to restore
        if failed_job_ids:
            r.expr(failed_job_ids).for_each(
                lambda job_id: self.table.get_all(job_id).delete()).run(self.conn)

        return jobs

    def __repr__(self):
        connection = self.conn
        return '<%s (connection=%s)>' % (self.__class__.__name__, connection)
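A usage sketch (illustrative, not part of the diff; assumes a reachable RethinkDB server). Leftover keywords such as host are forwarded to r.connect():

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.rethinkdb import RethinkDBJobStore

scheduler = BackgroundScheduler(
    jobstores={'default': RethinkDBJobStore(database='apscheduler', table='jobs',
                                            host='localhost')})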
@@ -1,65 +0,0 @@
"""
Stores jobs in a file governed by the :mod:`shelve` module.
"""

import shelve
import pickle
import random
import logging

from apscheduler.jobstores.base import JobStore
from apscheduler.job import Job
from apscheduler.util import itervalues

logger = logging.getLogger(__name__)


class ShelveJobStore(JobStore):
    MAX_ID = 1000000

    def __init__(self, path, pickle_protocol=pickle.HIGHEST_PROTOCOL):
        self.jobs = []
        self.path = path
        self.pickle_protocol = pickle_protocol
        self.store = shelve.open(path, 'c', self.pickle_protocol)

    def _generate_id(self):
        id = None
        while not id:
            id = str(random.randint(1, self.MAX_ID))
            if not id in self.store:
                return id

    def add_job(self, job):
        job.id = self._generate_id()
        self.jobs.append(job)
        self.store[job.id] = job.__getstate__()

    def update_job(self, job):
        job_dict = self.store[job.id]
        job_dict['next_run_time'] = job.next_run_time
        job_dict['runs'] = job.runs
        self.store[job.id] = job_dict

    def remove_job(self, job):
        del self.store[job.id]
        self.jobs.remove(job)

    def load_jobs(self):
        jobs = []
        for job_dict in itervalues(self.store):
            try:
                job = Job.__new__(Job)
                job.__setstate__(job_dict)
                jobs.append(job)
            except Exception:
                job_name = job_dict.get('name', '(unknown)')
                logger.exception('Unable to restore job "%s"', job_name)

        self.jobs = jobs

    def close(self):
        self.store.close()

    def __repr__(self):
        return '<%s (path=%s)>' % (self.__class__.__name__, self.path)
@@ -0,0 +1,148 @@
from __future__ import absolute_import

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    from sqlalchemy import (
        create_engine, Table, Column, MetaData, Unicode, Float, LargeBinary, select)
    from sqlalchemy.exc import IntegrityError
    from sqlalchemy.sql.expression import null
except ImportError:  # pragma: nocover
    raise ImportError('SQLAlchemyJobStore requires SQLAlchemy installed')


class SQLAlchemyJobStore(BaseJobStore):
    """
    Stores jobs in a database table using SQLAlchemy.
    The table will be created if it doesn't exist in the database.

    Plugin alias: ``sqlalchemy``

    :param str url: connection string (see `SQLAlchemy documentation
        <http://docs.sqlalchemy.org/en/latest/core/engines.html?highlight=create_engine#database-urls>`_
        on this)
    :param engine: an SQLAlchemy Engine to use instead of creating a new one based on ``url``
    :param str tablename: name of the table to store jobs in
    :param metadata: a :class:`~sqlalchemy.MetaData` instance to use instead of creating a new one
    :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the
        highest available
    """

    def __init__(self, url=None, engine=None, tablename='apscheduler_jobs', metadata=None,
                 pickle_protocol=pickle.HIGHEST_PROTOCOL):
        super(SQLAlchemyJobStore, self).__init__()
        self.pickle_protocol = pickle_protocol
        metadata = maybe_ref(metadata) or MetaData()

        if engine:
            self.engine = maybe_ref(engine)
        elif url:
            self.engine = create_engine(url)
        else:
            raise ValueError('Need either "engine" or "url" defined')

        # 191 = max key length in MySQL for InnoDB/utf8mb4 tables,
        # 25 = precision that translates to an 8-byte float
        self.jobs_t = Table(
            tablename, metadata,
            Column('id', Unicode(191, _warn_on_bytestring=False), primary_key=True),
            Column('next_run_time', Float(25), index=True),
            Column('job_state', LargeBinary, nullable=False)
        )

    def start(self, scheduler, alias):
        super(SQLAlchemyJobStore, self).start(scheduler, alias)
        self.jobs_t.create(self.engine, True)

    def lookup_job(self, job_id):
        selectable = select([self.jobs_t.c.job_state]).where(self.jobs_t.c.id == job_id)
        job_state = self.engine.execute(selectable).scalar()
        return self._reconstitute_job(job_state) if job_state else None

    def get_due_jobs(self, now):
        timestamp = datetime_to_utc_timestamp(now)
        return self._get_jobs(self.jobs_t.c.next_run_time <= timestamp)

    def get_next_run_time(self):
        selectable = select([self.jobs_t.c.next_run_time]).\
            where(self.jobs_t.c.next_run_time != null()).\
            order_by(self.jobs_t.c.next_run_time).limit(1)
        next_run_time = self.engine.execute(selectable).scalar()
        return utc_timestamp_to_datetime(next_run_time)

    def get_all_jobs(self):
        jobs = self._get_jobs()
        self._fix_paused_jobs_sorting(jobs)
        return jobs

    def add_job(self, job):
        insert = self.jobs_t.insert().values(**{
            'id': job.id,
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': pickle.dumps(job.__getstate__(), self.pickle_protocol)
        })
        try:
            self.engine.execute(insert)
        except IntegrityError:
            raise ConflictingIdError(job.id)

    def update_job(self, job):
        update = self.jobs_t.update().values(**{
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': pickle.dumps(job.__getstate__(), self.pickle_protocol)
        }).where(self.jobs_t.c.id == job.id)
        result = self.engine.execute(update)
        if result.rowcount == 0:
            raise JobLookupError(job.id)

    def remove_job(self, job_id):
        delete = self.jobs_t.delete().where(self.jobs_t.c.id == job_id)
        result = self.engine.execute(delete)
        if result.rowcount == 0:
            raise JobLookupError(job_id)

    def remove_all_jobs(self):
        delete = self.jobs_t.delete()
        self.engine.execute(delete)

    def shutdown(self):
        self.engine.dispose()

    def _reconstitute_job(self, job_state):
        job_state = pickle.loads(job_state)
        job_state['jobstore'] = self
        job = Job.__new__(Job)
        job.__setstate__(job_state)
        job._scheduler = self._scheduler
        job._jobstore_alias = self._alias
        return job

    def _get_jobs(self, *conditions):
        jobs = []
        selectable = select([self.jobs_t.c.id, self.jobs_t.c.job_state]).\
            order_by(self.jobs_t.c.next_run_time)
        selectable = selectable.where(*conditions) if conditions else selectable
        failed_job_ids = set()
        for row in self.engine.execute(selectable):
            try:
                jobs.append(self._reconstitute_job(row.job_state))
            except:
                self._logger.exception('Unable to restore job "%s" -- removing it', row.id)
                failed_job_ids.add(row.id)

        # Remove all the jobs we failed to restore
        if failed_job_ids:
            delete = self.jobs_t.delete().where(self.jobs_t.c.id.in_(failed_job_ids))
            self.engine.execute(delete)

        return jobs

    def __repr__(self):
        return '<%s (url=%s)>' % (self.__class__.__name__, self.engine.url)
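A usage sketch (illustrative, not part of the diff; the SQLite file name is arbitrary). Any SQLAlchemy URL works in place of the sqlite one:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

scheduler = BackgroundScheduler(
    jobstores={'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')})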
@@ -1,87 +0,0 @@
"""
Stores jobs in a database table using SQLAlchemy.
"""
import pickle
import logging

from apscheduler.jobstores.base import JobStore
from apscheduler.job import Job

try:
    from sqlalchemy import *
except ImportError:  # pragma: nocover
    raise ImportError('SQLAlchemyJobStore requires SQLAlchemy installed')

logger = logging.getLogger(__name__)


class SQLAlchemyJobStore(JobStore):
    def __init__(self, url=None, engine=None, tablename='apscheduler_jobs',
                 metadata=None, pickle_protocol=pickle.HIGHEST_PROTOCOL):
        self.jobs = []
        self.pickle_protocol = pickle_protocol

        if engine:
            self.engine = engine
        elif url:
            self.engine = create_engine(url)
        else:
            raise ValueError('Need either "engine" or "url" defined')

        self.jobs_t = Table(tablename, metadata or MetaData(),
                            Column('id', Integer,
                                   Sequence(tablename + '_id_seq', optional=True),
                                   primary_key=True),
                            Column('trigger', PickleType(pickle_protocol, mutable=False),
                                   nullable=False),
                            Column('func_ref', String(1024), nullable=False),
                            Column('args', PickleType(pickle_protocol, mutable=False),
                                   nullable=False),
                            Column('kwargs', PickleType(pickle_protocol, mutable=False),
                                   nullable=False),
                            Column('name', Unicode(1024), unique=True),
                            Column('misfire_grace_time', Integer, nullable=False),
                            Column('coalesce', Boolean, nullable=False),
                            Column('max_runs', Integer),
                            Column('max_instances', Integer),
                            Column('next_run_time', DateTime, nullable=False),
                            Column('runs', BigInteger))

        self.jobs_t.create(self.engine, True)

    def add_job(self, job):
        job_dict = job.__getstate__()
        result = self.engine.execute(self.jobs_t.insert().values(**job_dict))
        job.id = result.inserted_primary_key[0]
        self.jobs.append(job)

    def remove_job(self, job):
        delete = self.jobs_t.delete().where(self.jobs_t.c.id == job.id)
        self.engine.execute(delete)
        self.jobs.remove(job)

    def load_jobs(self):
        jobs = []
        for row in self.engine.execute(select([self.jobs_t])):
            try:
                job = Job.__new__(Job)
                job_dict = dict(row.items())
                job.__setstate__(job_dict)
                jobs.append(job)
            except Exception:
                job_name = job_dict.get('name', '(unknown)')
                logger.exception('Unable to restore job "%s"', job_name)
        self.jobs = jobs

    def update_job(self, job):
        job_dict = job.__getstate__()
        update = self.jobs_t.update().where(self.jobs_t.c.id == job.id).\
            values(next_run_time=job_dict['next_run_time'],
                   runs=job_dict['runs'])
        self.engine.execute(update)

    def close(self):
        self.engine.dispose()

    def __repr__(self):
        return '<%s (url=%s)>' % (self.__class__.__name__, self.engine.url)
@@ -0,0 +1,179 @@
from __future__ import absolute_import

import os
from datetime import datetime

from pytz import utc
from kazoo.exceptions import NoNodeError, NodeExistsError

from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError
from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime
from apscheduler.job import Job

try:
    import cPickle as pickle
except ImportError:  # pragma: nocover
    import pickle

try:
    from kazoo.client import KazooClient
except ImportError:  # pragma: nocover
    raise ImportError('ZooKeeperJobStore requires Kazoo installed')


class ZooKeeperJobStore(BaseJobStore):
    """
    Stores jobs in a ZooKeeper tree. Any leftover keyword arguments are directly passed to
    kazoo's `KazooClient
    <http://kazoo.readthedocs.io/en/latest/api/client.html>`_.

    Plugin alias: ``zookeeper``

    :param str path: path to store jobs in
    :param client: a :class:`~kazoo.client.KazooClient` instance to use instead of
        providing connection arguments
    :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the
        highest available
    """

    def __init__(self, path='/apscheduler', client=None, close_connection_on_exit=False,
                 pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args):
        super(ZooKeeperJobStore, self).__init__()
        self.pickle_protocol = pickle_protocol
        self.close_connection_on_exit = close_connection_on_exit

        if not path:
            raise ValueError('The "path" parameter must not be empty')

        self.path = path

        if client:
            self.client = maybe_ref(client)
        else:
            self.client = KazooClient(**connect_args)
        self._ensured_path = False

    def _ensure_paths(self):
        if not self._ensured_path:
            self.client.ensure_path(self.path)
            self._ensured_path = True

    def start(self, scheduler, alias):
        super(ZooKeeperJobStore, self).start(scheduler, alias)
        if not self.client.connected:
            self.client.start()

    def lookup_job(self, job_id):
        self._ensure_paths()
        node_path = os.path.join(self.path, job_id)
        try:
            content, _ = self.client.get(node_path)
            doc = pickle.loads(content)
            job = self._reconstitute_job(doc['job_state'])
            return job
        except:
            return None

    def get_due_jobs(self, now):
        timestamp = datetime_to_utc_timestamp(now)
        jobs = [job_def['job'] for job_def in self._get_jobs()
                if job_def['next_run_time'] is not None and job_def['next_run_time'] <= timestamp]
        return jobs

    def get_next_run_time(self):
        next_runs = [job_def['next_run_time'] for job_def in self._get_jobs()
                     if job_def['next_run_time'] is not None]
        return utc_timestamp_to_datetime(min(next_runs)) if len(next_runs) > 0 else None

    def get_all_jobs(self):
        jobs = [job_def['job'] for job_def in self._get_jobs()]
        self._fix_paused_jobs_sorting(jobs)
        return jobs

    def add_job(self, job):
        self._ensure_paths()
        node_path = os.path.join(self.path, str(job.id))
        value = {
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': job.__getstate__()
        }
        data = pickle.dumps(value, self.pickle_protocol)
        try:
            self.client.create(node_path, value=data)
        except NodeExistsError:
            raise ConflictingIdError(job.id)

    def update_job(self, job):
        self._ensure_paths()
        node_path = os.path.join(self.path, str(job.id))
        changes = {
            'next_run_time': datetime_to_utc_timestamp(job.next_run_time),
            'job_state': job.__getstate__()
        }
        data = pickle.dumps(changes, self.pickle_protocol)
        try:
            self.client.set(node_path, value=data)
        except NoNodeError:
            raise JobLookupError(job.id)

    def remove_job(self, job_id):
        self._ensure_paths()
        node_path = os.path.join(self.path, str(job_id))
        try:
            self.client.delete(node_path)
        except NoNodeError:
            raise JobLookupError(job_id)

    def remove_all_jobs(self):
        try:
            self.client.delete(self.path, recursive=True)
        except NoNodeError:
            pass
        self._ensured_path = False

    def shutdown(self):
        if self.close_connection_on_exit:
            self.client.stop()
            self.client.close()

    def _reconstitute_job(self, job_state):
        job_state = job_state
        job = Job.__new__(Job)
        job.__setstate__(job_state)
        job._scheduler = self._scheduler
        job._jobstore_alias = self._alias
        return job

    def _get_jobs(self):
        self._ensure_paths()
        jobs = []
        failed_job_ids = []
        all_ids = self.client.get_children(self.path)
        for node_name in all_ids:
            try:
                node_path = os.path.join(self.path, node_name)
                content, _ = self.client.get(node_path)
                doc = pickle.loads(content)
                job_def = {
                    'job_id': node_name,
                    'next_run_time': doc['next_run_time'] if doc['next_run_time'] else None,
                    'job_state': doc['job_state'],
                    'job': self._reconstitute_job(doc['job_state']),
                    'creation_time': _.ctime
                }
                jobs.append(job_def)
            except:
                self._logger.exception('Unable to restore job "%s" -- removing it' % node_name)
                failed_job_ids.append(node_name)

        # Remove all the jobs we failed to restore
        if failed_job_ids:
            for failed_id in failed_job_ids:
                self.remove_job(failed_id)
        paused_sort_key = datetime(9999, 12, 31, tzinfo=utc)
        return sorted(jobs, key=lambda job_def: (job_def['job'].next_run_time or paused_sort_key,
                                                 job_def['creation_time']))

    def __repr__(self):
        self._logger.exception('<%s (client=%s)>' % (self.__class__.__name__, self.client))
        return '<%s (client=%s)>' % (self.__class__.__name__, self.client)
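A usage sketch (illustrative, not part of the diff; the ensemble address is a placeholder). Leftover keywords such as hosts are forwarded to KazooClient:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.zookeeper import ZooKeeperJobStore

scheduler = BackgroundScheduler(
    jobstores={'default': ZooKeeperJobStore(path='/apscheduler',
                                            hosts='127.0.0.1:2181')})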
@@ -1,559 +0,0 @@
"""
This module is the main part of the library. It houses the Scheduler class
and related exceptions.
"""

from threading import Thread, Event, Lock
from datetime import datetime, timedelta
from logging import getLogger
import os
import sys

from apscheduler.util import *
from apscheduler.triggers import SimpleTrigger, IntervalTrigger, CronTrigger
from apscheduler.jobstores.ram_store import RAMJobStore
from apscheduler.job import Job, MaxInstancesReachedError
from apscheduler.events import *
from apscheduler.threadpool import ThreadPool

logger = getLogger(__name__)


class SchedulerAlreadyRunningError(Exception):
    """
    Raised when attempting to start or configure the scheduler when it's
    already running.
    """

    def __str__(self):
        return 'Scheduler is already running'


class Scheduler(object):
    """
    This class is responsible for scheduling jobs and triggering
    their execution.
    """

    _stopped = False
    _thread = None

    def __init__(self, gconfig={}, **options):
        self._wakeup = Event()
        self._jobstores = {}
        self._jobstores_lock = Lock()
        self._listeners = []
        self._listeners_lock = Lock()
        self._pending_jobs = []
        self.configure(gconfig, **options)

    def configure(self, gconfig={}, **options):
        """
        Reconfigures the scheduler with the given options. Can only be done
        when the scheduler isn't running.
        """
        if self.running:
            raise SchedulerAlreadyRunningError

        # Set general options
        config = combine_opts(gconfig, 'apscheduler.', options)
        self.misfire_grace_time = int(config.pop('misfire_grace_time', 1))
        self.coalesce = asbool(config.pop('coalesce', True))
        self.daemonic = asbool(config.pop('daemonic', True))

        # Configure the thread pool
        if 'threadpool' in config:
            self._threadpool = maybe_ref(config['threadpool'])
        else:
            threadpool_opts = combine_opts(config, 'threadpool.')
            self._threadpool = ThreadPool(**threadpool_opts)

        # Configure job stores
        jobstore_opts = combine_opts(config, 'jobstore.')
        jobstores = {}
        for key, value in jobstore_opts.items():
            store_name, option = key.split('.', 1)
            opts_dict = jobstores.setdefault(store_name, {})
            opts_dict[option] = value

        for alias, opts in jobstores.items():
            classname = opts.pop('class')
            cls = maybe_ref(classname)
            jobstore = cls(**opts)
            self.add_jobstore(jobstore, alias, True)

    def start(self):
        """
        Starts the scheduler in a new thread.
        """
        if self.running:
            raise SchedulerAlreadyRunningError

        # Create a RAMJobStore as the default if there is no default job store
        if not 'default' in self._jobstores:
            self.add_jobstore(RAMJobStore(), 'default', True)

        # Schedule all pending jobs
        for job, jobstore in self._pending_jobs:
            self._real_add_job(job, jobstore, False)
        del self._pending_jobs[:]

        self._stopped = False
        self._thread = Thread(target=self._main_loop, name='APScheduler')
        self._thread.setDaemon(self.daemonic)
        self._thread.start()

    def shutdown(self, wait=True, shutdown_threadpool=True):
        """
        Shuts down the scheduler and terminates the thread.
        Does not interrupt any currently running jobs.

        :param wait: ``True`` to wait until all currently executing jobs have
                     finished (if ``shutdown_threadpool`` is also ``True``)
        :param shutdown_threadpool: ``True`` to shut down the thread pool
        """
        if not self.running:
            return

        self._stopped = True
        self._wakeup.set()

        # Shut down the thread pool
        if shutdown_threadpool:
            self._threadpool.shutdown(wait)

        # Wait until the scheduler thread terminates
        self._thread.join()

    @property
    def running(self):
        return not self._stopped and self._thread and self._thread.isAlive()

    def add_jobstore(self, jobstore, alias, quiet=False):
        """
        Adds a job store to this scheduler.

        :param jobstore: job store to be added
        :param alias: alias for the job store
        :param quiet: True to suppress scheduler thread wakeup
        :type jobstore: instance of
            :class:`~apscheduler.jobstores.base.JobStore`
        :type alias: str
        """
        self._jobstores_lock.acquire()
        try:
            if alias in self._jobstores:
                raise KeyError('Alias "%s" is already in use' % alias)
            self._jobstores[alias] = jobstore
            jobstore.load_jobs()
        finally:
            self._jobstores_lock.release()

        # Notify listeners that a new job store has been added
        self._notify_listeners(JobStoreEvent(EVENT_JOBSTORE_ADDED, alias))

        # Notify the scheduler so it can scan the new job store for jobs
        if not quiet:
            self._wakeup.set()

    def remove_jobstore(self, alias):
        """
        Removes the job store by the given alias from this scheduler.

        :type alias: str
        """
        self._jobstores_lock.acquire()
        try:
            try:
                del self._jobstores[alias]
            except KeyError:
                raise KeyError('No such job store: %s' % alias)
        finally:
            self._jobstores_lock.release()

        # Notify listeners that a job store has been removed
        self._notify_listeners(JobStoreEvent(EVENT_JOBSTORE_REMOVED, alias))

    def add_listener(self, callback, mask=EVENT_ALL):
        """
        Adds a listener for scheduler events. When a matching event occurs,
        ``callback`` is executed with the event object as its sole argument.
        If the ``mask`` parameter is not provided, the callback will receive
        events of all types.

        :param callback: any callable that takes one argument
        :param mask: bitmask that indicates which events should be listened to
        """
        self._listeners_lock.acquire()
        try:
            self._listeners.append((callback, mask))
        finally:
            self._listeners_lock.release()

    def remove_listener(self, callback):
        """
        Removes a previously added event listener.
        """
        self._listeners_lock.acquire()
        try:
            for i, (cb, _) in enumerate(self._listeners):
                if callback == cb:
                    del self._listeners[i]
        finally:
            self._listeners_lock.release()

    def _notify_listeners(self, event):
        self._listeners_lock.acquire()
        try:
            listeners = tuple(self._listeners)
        finally:
            self._listeners_lock.release()

        for cb, mask in listeners:
            if event.code & mask:
                try:
                    cb(event)
                except:
                    logger.exception('Error notifying listener')

    def _real_add_job(self, job, jobstore, wakeup):
        job.compute_next_run_time(datetime.now())
        if not job.next_run_time:
            raise ValueError('Not adding job since it would never be run')

        self._jobstores_lock.acquire()
        try:
            try:
                store = self._jobstores[jobstore]
            except KeyError:
                raise KeyError('No such job store: %s' % jobstore)
            store.add_job(job)
        finally:
            self._jobstores_lock.release()

        # Notify listeners that a new job has been added
        event = JobStoreEvent(EVENT_JOBSTORE_JOB_ADDED, jobstore, job)
        self._notify_listeners(event)

        logger.info('Added job "%s" to job store "%s"', job, jobstore)

        # Notify the scheduler about the new job
        if wakeup:
            self._wakeup.set()

    def add_job(self, trigger, func, args, kwargs, jobstore='default',
                **options):
        """
        Adds the given job to the job list and notifies the scheduler thread.

        :param trigger: trigger that determines when ``func`` is called
        :param func: callable to run at the given time
        :param args: list of positional arguments to call func with
        :param kwargs: dict of keyword arguments to call func with
        :param jobstore: alias of the job store to store the job in
        :rtype: :class:`~apscheduler.job.Job`
        """
        job = Job(trigger, func, args or [], kwargs or {},
                  options.pop('misfire_grace_time', self.misfire_grace_time),
                  options.pop('coalesce', self.coalesce), **options)
        if not self.running:
            self._pending_jobs.append((job, jobstore))
            logger.info('Adding job tentatively -- it will be properly '
                        'scheduled when the scheduler starts')
        else:
            self._real_add_job(job, jobstore, True)
        return job

    def _remove_job(self, job, alias, jobstore):
        jobstore.remove_job(job)

        # Notify listeners that a job has been removed
        event = JobStoreEvent(EVENT_JOBSTORE_JOB_REMOVED, alias, job)
        self._notify_listeners(event)

        logger.info('Removed job "%s"', job)

    def add_date_job(self, func, date, args=None, kwargs=None, **options):
        """
        Schedules a job to be completed on a specific date and time.

        :param func: callable to run at the given time
        :param date: the date/time to run the job at
        :param name: name of the job
        :param jobstore: stores the job in the named (or given) job store
        :param misfire_grace_time: seconds after the designated run time that
            the job is still allowed to be run
        :type date: :class:`datetime.date`
        :rtype: :class:`~apscheduler.job.Job`
        """
        trigger = SimpleTrigger(date)
        return self.add_job(trigger, func, args, kwargs, **options)

    def add_interval_job(self, func, weeks=0, days=0, hours=0, minutes=0,
                         seconds=0, start_date=None, args=None, kwargs=None,
                         **options):
        """
        Schedules a job to be completed on specified intervals.

        :param func: callable to run
        :param weeks: number of weeks to wait
        :param days: number of days to wait
        :param hours: number of hours to wait
        :param minutes: number of minutes to wait
        :param seconds: number of seconds to wait
        :param start_date: when to first execute the job and start the
            counter (default is after the given interval)
        :param args: list of positional arguments to call func with
        :param kwargs: dict of keyword arguments to call func with
        :param name: name of the job
        :param jobstore: alias of the job store to add the job to
        :param misfire_grace_time: seconds after the designated run time that
            the job is still allowed to be run
        :rtype: :class:`~apscheduler.job.Job`
        """
        interval = timedelta(weeks=weeks, days=days, hours=hours,
                             minutes=minutes, seconds=seconds)
        trigger = IntervalTrigger(interval, start_date)
        return self.add_job(trigger, func, args, kwargs, **options)

    def add_cron_job(self, func, year='*', month='*', day='*', week='*',
                     day_of_week='*', hour='*', minute='*', second='*',
                     start_date=None, args=None, kwargs=None, **options):
        """
        Schedules a job to be completed on times that match the given
        expressions.

        :param func: callable to run
        :param year: year to run on
        :param month: month to run on (1 = January)
        :param day: day of month to run on
        :param week: week of the year to run on
        :param day_of_week: weekday to run on (0 = Monday)
        :param hour: hour to run on
        :param minute: minute to run on
        :param second: second to run on
        :param args: list of positional arguments to call func with
        :param kwargs: dict of keyword arguments to call func with
        :param name: name of the job
        :param jobstore: alias of the job store to add the job to
        :param misfire_grace_time: seconds after the designated run time that
            the job is still allowed to be run
        :return: the scheduled job
        :rtype: :class:`~apscheduler.job.Job`
        """
        trigger = CronTrigger(year=year, month=month, day=day, week=week,
                              day_of_week=day_of_week, hour=hour,
                              minute=minute, second=second,
                              start_date=start_date)
        return self.add_job(trigger, func, args, kwargs, **options)

    def cron_schedule(self, **options):
        """
        Decorator version of :meth:`add_cron_job`.
        This decorator does not wrap its host function.
        Unscheduling decorated functions is possible by passing the ``job``
        attribute of the scheduled function to :meth:`unschedule_job`.
        """
        def inner(func):
            func.job = self.add_cron_job(func, **options)
            return func
        return inner

    def interval_schedule(self, **options):
        """
        Decorator version of :meth:`add_interval_job`.
        This decorator does not wrap its host function.
        Unscheduling decorated functions is possible by passing the ``job``
        attribute of the scheduled function to :meth:`unschedule_job`.
        """
        def inner(func):
            func.job = self.add_interval_job(func, **options)
            return func
        return inner

    def get_jobs(self):
        """
        Returns a list of all scheduled jobs.

        :return: list of :class:`~apscheduler.job.Job` objects
|
||||
"""
|
||||
self._jobstores_lock.acquire()
|
||||
try:
|
||||
jobs = []
|
||||
for jobstore in itervalues(self._jobstores):
|
||||
jobs.extend(jobstore.jobs)
|
||||
return jobs
|
||||
finally:
|
||||
self._jobstores_lock.release()
|
||||
|
||||
def unschedule_job(self, job):
|
||||
"""
|
||||
Removes a job, preventing it from being run any more.
|
||||
"""
|
||||
self._jobstores_lock.acquire()
|
||||
try:
|
||||
for alias, jobstore in iteritems(self._jobstores):
|
||||
if job in list(jobstore.jobs):
|
||||
self._remove_job(job, alias, jobstore)
|
||||
return
|
||||
finally:
|
||||
self._jobstores_lock.release()
|
||||
|
||||
raise KeyError('Job "%s" is not scheduled in any job store' % job)
|
||||
|
||||
def unschedule_func(self, func):
|
||||
"""
|
||||
Removes all jobs that would execute the given function.
|
||||
"""
|
||||
found = False
|
||||
self._jobstores_lock.acquire()
|
||||
try:
|
||||
for alias, jobstore in iteritems(self._jobstores):
|
||||
for job in list(jobstore.jobs):
|
||||
if job.func == func:
|
||||
self._remove_job(job, alias, jobstore)
|
||||
found = True
|
||||
finally:
|
||||
self._jobstores_lock.release()
|
||||
|
||||
if not found:
|
||||
raise KeyError('The given function is not scheduled in this '
|
||||
'scheduler')
|
||||
|
||||
def print_jobs(self, out=None):
|
||||
"""
|
||||
Prints out a textual listing of all jobs currently scheduled on this
|
||||
scheduler.
|
||||
|
||||
:param out: a file-like object to print to (defaults to **sys.stdout**
|
||||
if nothing is given)
|
||||
"""
|
||||
out = out or sys.stdout
|
||||
job_strs = []
|
||||
self._jobstores_lock.acquire()
|
||||
try:
|
||||
for alias, jobstore in iteritems(self._jobstores):
|
||||
job_strs.append('Jobstore %s:' % alias)
|
||||
if jobstore.jobs:
|
||||
for job in jobstore.jobs:
|
||||
job_strs.append(' %s' % job)
|
||||
else:
|
||||
job_strs.append(' No scheduled jobs')
|
||||
finally:
|
||||
self._jobstores_lock.release()
|
||||
|
||||
out.write(os.linesep.join(job_strs))
|
||||
|
||||
def _run_job(self, job, run_times):
|
||||
"""
|
||||
Acts as a harness that runs the actual job code in a thread.
|
||||
"""
|
||||
for run_time in run_times:
|
||||
# See if the job missed its run time window, and handle possible
|
||||
# misfires accordingly
|
||||
difference = datetime.now() - run_time
|
||||
grace_time = timedelta(seconds=job.misfire_grace_time)
|
||||
if difference > grace_time:
|
||||
# Notify listeners about a missed run
|
||||
event = JobEvent(EVENT_JOB_MISSED, job, run_time)
|
||||
self._notify_listeners(event)
|
||||
logger.warning('Run time of job "%s" was missed by %s',
|
||||
job, difference)
|
||||
else:
|
||||
try:
|
||||
job.add_instance()
|
||||
except MaxInstancesReachedError:
|
||||
event = JobEvent(EVENT_JOB_MISSED, job, run_time)
|
||||
self._notify_listeners(event)
|
||||
logger.warning('Execution of job "%s" skipped: '
|
||||
'maximum number of running instances '
|
||||
'reached (%d)', job, job.max_instances)
|
||||
break
|
||||
|
||||
logger.info('Running job "%s" (scheduled at %s)', job,
|
||||
run_time)
|
||||
|
||||
try:
|
||||
retval = job.func(*job.args, **job.kwargs)
|
||||
except:
|
||||
# Notify listeners about the exception
|
||||
exc, tb = sys.exc_info()[1:]
|
||||
event = JobEvent(EVENT_JOB_ERROR, job, run_time,
|
||||
exception=exc, traceback=tb)
|
||||
self._notify_listeners(event)
|
||||
|
||||
logger.exception('Job "%s" raised an exception', job)
|
||||
else:
|
||||
# Notify listeners about successful execution
|
||||
event = JobEvent(EVENT_JOB_EXECUTED, job, run_time,
|
||||
retval=retval)
|
||||
self._notify_listeners(event)
|
||||
|
||||
logger.info('Job "%s" executed successfully', job)
|
||||
|
||||
job.remove_instance()
|
||||
|
||||
# If coalescing is enabled, don't attempt any further runs
|
||||
if job.coalesce:
|
||||
break
|
||||
|
||||
def _process_jobs(self, now):
|
||||
"""
|
||||
Iterates through jobs in every jobstore, starts pending jobs
|
||||
and figures out the next wakeup time.
|
||||
"""
|
||||
next_wakeup_time = None
|
||||
self._jobstores_lock.acquire()
|
||||
try:
|
||||
for alias, jobstore in iteritems(self._jobstores):
|
||||
for job in tuple(jobstore.jobs):
|
||||
run_times = job.get_run_times(now)
|
||||
if run_times:
|
||||
self._threadpool.submit(self._run_job, job, run_times)
|
||||
|
||||
# Increase the job's run count
|
||||
if job.coalesce:
|
||||
job.runs += 1
|
||||
else:
|
||||
job.runs += len(run_times)
|
||||
|
||||
# Update the job, but don't keep finished jobs around
|
||||
if job.compute_next_run_time(now + timedelta(microseconds=1)):
|
||||
jobstore.update_job(job)
|
||||
else:
|
||||
self._remove_job(job, alias, jobstore)
|
||||
|
||||
if not next_wakeup_time:
|
||||
next_wakeup_time = job.next_run_time
|
||||
elif job.next_run_time:
|
||||
next_wakeup_time = min(next_wakeup_time,
|
||||
job.next_run_time)
|
||||
return next_wakeup_time
|
||||
finally:
|
||||
self._jobstores_lock.release()
|
||||
|
||||
def _main_loop(self):
|
||||
"""Executes jobs on schedule."""
|
||||
|
||||
logger.info('Scheduler started')
|
||||
self._notify_listeners(SchedulerEvent(EVENT_SCHEDULER_START))
|
||||
|
||||
self._wakeup.clear()
|
||||
while not self._stopped:
|
||||
logger.debug('Looking for jobs to run')
|
||||
now = datetime.now()
|
||||
next_wakeup_time = self._process_jobs(now)
|
||||
|
||||
# Sleep until the next job is scheduled to be run,
|
||||
# a new job is added or the scheduler is stopped
|
||||
if next_wakeup_time is not None:
|
||||
wait_seconds = time_difference(next_wakeup_time, now)
|
||||
logger.debug('Next wakeup is due at %s (in %f seconds)',
|
||||
next_wakeup_time, wait_seconds)
|
||||
self._wakeup.wait(wait_seconds)
|
||||
else:
|
||||
logger.debug('No jobs; waiting until a job is added')
|
||||
self._wakeup.wait()
|
||||
self._wakeup.clear()
|
||||
|
||||
logger.info('Scheduler has been shut down')
|
||||
self._notify_listeners(SchedulerEvent(EVENT_SCHEDULER_SHUTDOWN))
|
|
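For orientation, a minimal sketch of how this legacy 2.x-style Scheduler API is driven (the import path assumes the old bundled package layout; the job functions and timings are illustrative):

from apscheduler.scheduler import Scheduler

def tick():
    print('Tick!')

sched = Scheduler()
sched.start()

# Explicit form: returns a Job object that can later be passed to unschedule_job()
job = sched.add_interval_job(tick, minutes=5)

# Decorator form: the Job is attached to the function as its `job` attribute
@sched.cron_schedule(hour=3, minute=0)
def nightly():
    print('Nightly run')

sched.unschedule_job(nightly.job)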
@@ -0,0 +1,12 @@
class SchedulerAlreadyRunningError(Exception):
    """Raised when attempting to start or configure the scheduler when it's already running."""

    def __str__(self):
        return 'Scheduler is already running'


class SchedulerNotRunningError(Exception):
    """Raised when attempting to shut down the scheduler when it's not running."""

    def __str__(self):
        return 'Scheduler is not running'
@@ -0,0 +1,67 @@
from __future__ import absolute_import
from functools import wraps

from apscheduler.schedulers.base import BaseScheduler
from apscheduler.util import maybe_ref

try:
    import asyncio
except ImportError:  # pragma: nocover
    try:
        import trollius as asyncio
    except ImportError:
        raise ImportError(
            'AsyncIOScheduler requires either Python 3.4 or the asyncio package installed')


def run_in_event_loop(func):
    @wraps(func)
    def wrapper(self, *args):
        self._eventloop.call_soon_threadsafe(func, self, *args)
    return wrapper


class AsyncIOScheduler(BaseScheduler):
    """
    A scheduler that runs on an asyncio (:pep:`3156`) event loop.

    The default executor can run jobs based on native coroutines (``async def``).

    Extra options:

    ============== =============================================================
    ``event_loop`` AsyncIO event loop to use (defaults to the global event loop)
    ============== =============================================================
    """

    _eventloop = None
    _timeout = None

    @run_in_event_loop
    def shutdown(self, wait=True):
        super(AsyncIOScheduler, self).shutdown(wait)
        self._stop_timer()

    def _configure(self, config):
        self._eventloop = maybe_ref(config.pop('event_loop', None)) or asyncio.get_event_loop()
        super(AsyncIOScheduler, self)._configure(config)

    def _start_timer(self, wait_seconds):
        self._stop_timer()
        if wait_seconds is not None:
            self._timeout = self._eventloop.call_later(wait_seconds, self.wakeup)

    def _stop_timer(self):
        if self._timeout:
            self._timeout.cancel()
            del self._timeout

    @run_in_event_loop
    def wakeup(self):
        self._stop_timer()
        wait_seconds = self._process_jobs()
        self._start_timer(wait_seconds)

    def _create_default_executor(self):
        from apscheduler.executors.asyncio import AsyncIOExecutor
        return AsyncIOExecutor()
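A minimal usage sketch for this scheduler, assuming the standard APScheduler 3.x ``add_job`` API from the (suppressed) ``BaseScheduler`` diff; the job and timings are illustrative:

import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler

def tick():
    print('Tick!')

scheduler = AsyncIOScheduler()
scheduler.add_job(tick, 'interval', seconds=5)  # add_job is inherited from BaseScheduler
scheduler.start()  # schedules work on the event loop; does not block by itself

try:
    asyncio.get_event_loop().run_forever()
except (KeyboardInterrupt, SystemExit):
    pass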
@@ -0,0 +1,41 @@
from __future__ import absolute_import

from threading import Thread, Event

from apscheduler.schedulers.base import BaseScheduler
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.util import asbool


class BackgroundScheduler(BlockingScheduler):
    """
    A scheduler that runs in the background using a separate thread
    (:meth:`~apscheduler.schedulers.base.BaseScheduler.start` will return immediately).

    Extra options:

    ========== =============================================================================
    ``daemon`` Set the ``daemon`` option in the background thread (defaults to ``True``, see
               `the documentation
               <https://docs.python.org/3.4/library/threading.html#thread-objects>`_
               for further details)
    ========== =============================================================================
    """

    _thread = None

    def _configure(self, config):
        self._daemon = asbool(config.pop('daemon', True))
        super(BackgroundScheduler, self)._configure(config)

    def start(self, *args, **kwargs):
        self._event = Event()
        BaseScheduler.start(self, *args, **kwargs)
        self._thread = Thread(target=self._main_loop, name='APScheduler')
        self._thread.daemon = self._daemon
        self._thread.start()

    def shutdown(self, *args, **kwargs):
        super(BackgroundScheduler, self).shutdown(*args, **kwargs)
        self._thread.join()
        del self._thread
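A usage sketch under the same assumption (``add_job`` provided by the suppressed ``BaseScheduler`` diff); note that ``start()`` returns immediately, so the main thread must be kept alive for the daemonic worker:

import time

from apscheduler.schedulers.background import BackgroundScheduler

def tick():
    print('Tick!')

scheduler = BackgroundScheduler(daemon=True)  # 'daemon' is the extra option documented above
scheduler.add_job(tick, 'interval', seconds=10)
scheduler.start()  # returns immediately; _main_loop runs in the 'APScheduler' thread

try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()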
File diff suppressed because it is too large
@@ -0,0 +1,33 @@
from __future__ import absolute_import

from threading import Event

from apscheduler.schedulers.base import BaseScheduler, STATE_STOPPED
from apscheduler.util import TIMEOUT_MAX


class BlockingScheduler(BaseScheduler):
    """
    A scheduler that runs in the foreground
    (:meth:`~apscheduler.schedulers.base.BaseScheduler.start` will block).
    """
    _event = None

    def start(self, *args, **kwargs):
        self._event = Event()
        super(BlockingScheduler, self).start(*args, **kwargs)
        self._main_loop()

    def shutdown(self, wait=True):
        super(BlockingScheduler, self).shutdown(wait)
        self._event.set()

    def _main_loop(self):
        wait_seconds = TIMEOUT_MAX
        while self.state != STATE_STOPPED:
            self._event.wait(wait_seconds)
            self._event.clear()
            wait_seconds = self._process_jobs()

    def wakeup(self):
        self._event.set()
@@ -0,0 +1,35 @@
from __future__ import absolute_import

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.base import BaseScheduler

try:
    from gevent.event import Event
    from gevent.lock import RLock
    import gevent
except ImportError:  # pragma: nocover
    raise ImportError('GeventScheduler requires gevent installed')


class GeventScheduler(BlockingScheduler):
    """A scheduler that runs as a Gevent greenlet."""

    _greenlet = None

    def start(self, *args, **kwargs):
        self._event = Event()
        BaseScheduler.start(self, *args, **kwargs)
        self._greenlet = gevent.spawn(self._main_loop)
        return self._greenlet

    def shutdown(self, *args, **kwargs):
        super(GeventScheduler, self).shutdown(*args, **kwargs)
        self._greenlet.join()
        del self._greenlet

    def _create_lock(self):
        return RLock()

    def _create_default_executor(self):
        from apscheduler.executors.gevent import GeventExecutor
        return GeventExecutor()
@@ -0,0 +1,42 @@
from __future__ import absolute_import

from apscheduler.schedulers.base import BaseScheduler

try:
    from PyQt5.QtCore import QObject, QTimer
except ImportError:  # pragma: nocover
    try:
        from PyQt4.QtCore import QObject, QTimer
    except ImportError:
        try:
            from PySide.QtCore import QObject, QTimer  # flake8: noqa
        except ImportError:
            raise ImportError('QtScheduler requires either PyQt5, PyQt4 or PySide installed')


class QtScheduler(BaseScheduler):
    """A scheduler that runs in a Qt event loop."""

    _timer = None

    def shutdown(self, *args, **kwargs):
        super(QtScheduler, self).shutdown(*args, **kwargs)
        self._stop_timer()

    def _start_timer(self, wait_seconds):
        self._stop_timer()
        if wait_seconds is not None:
            self._timer = QTimer.singleShot(wait_seconds * 1000, self._process_jobs)

    def _stop_timer(self):
        if self._timer:
            if self._timer.isActive():
                self._timer.stop()
            del self._timer

    def wakeup(self):
        self._start_timer(0)

    def _process_jobs(self):
        wait_seconds = super(QtScheduler, self)._process_jobs()
        self._start_timer(wait_seconds)
@@ -0,0 +1,63 @@
from __future__ import absolute_import

from datetime import timedelta
from functools import wraps

from apscheduler.schedulers.base import BaseScheduler
from apscheduler.util import maybe_ref

try:
    from tornado.ioloop import IOLoop
except ImportError:  # pragma: nocover
    raise ImportError('TornadoScheduler requires tornado installed')


def run_in_ioloop(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        self._ioloop.add_callback(func, self, *args, **kwargs)
    return wrapper


class TornadoScheduler(BaseScheduler):
    """
    A scheduler that runs on a Tornado IOLoop.

    The default executor can run jobs based on native coroutines (``async def``).

    =========== ===============================================================
    ``io_loop`` Tornado IOLoop instance to use (defaults to the global IO loop)
    =========== ===============================================================
    """

    _ioloop = None
    _timeout = None

    @run_in_ioloop
    def shutdown(self, wait=True):
        super(TornadoScheduler, self).shutdown(wait)
        self._stop_timer()

    def _configure(self, config):
        self._ioloop = maybe_ref(config.pop('io_loop', None)) or IOLoop.current()
        super(TornadoScheduler, self)._configure(config)

    def _start_timer(self, wait_seconds):
        self._stop_timer()
        if wait_seconds is not None:
            self._timeout = self._ioloop.add_timeout(timedelta(seconds=wait_seconds), self.wakeup)

    def _stop_timer(self):
        if self._timeout:
            self._ioloop.remove_timeout(self._timeout)
            del self._timeout

    def _create_default_executor(self):
        from apscheduler.executors.tornado import TornadoExecutor
        return TornadoExecutor()

    @run_in_ioloop
    def wakeup(self):
        self._stop_timer()
        wait_seconds = self._process_jobs()
        self._start_timer(wait_seconds)
@@ -0,0 +1,62 @@
from __future__ import absolute_import

from functools import wraps

from apscheduler.schedulers.base import BaseScheduler
from apscheduler.util import maybe_ref

try:
    from twisted.internet import reactor as default_reactor
except ImportError:  # pragma: nocover
    raise ImportError('TwistedScheduler requires Twisted installed')


def run_in_reactor(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        self._reactor.callFromThread(func, self, *args, **kwargs)
    return wrapper


class TwistedScheduler(BaseScheduler):
    """
    A scheduler that runs on a Twisted reactor.

    Extra options:

    =========== ========================================================
    ``reactor`` Reactor instance to use (defaults to the global reactor)
    =========== ========================================================
    """

    _reactor = None
    _delayedcall = None

    def _configure(self, config):
        self._reactor = maybe_ref(config.pop('reactor', default_reactor))
        super(TwistedScheduler, self)._configure(config)

    @run_in_reactor
    def shutdown(self, wait=True):
        super(TwistedScheduler, self).shutdown(wait)
        self._stop_timer()

    def _start_timer(self, wait_seconds):
        self._stop_timer()
        if wait_seconds is not None:
            self._delayedcall = self._reactor.callLater(wait_seconds, self.wakeup)

    def _stop_timer(self):
        if self._delayedcall and self._delayedcall.active():
            self._delayedcall.cancel()
            del self._delayedcall

    @run_in_reactor
    def wakeup(self):
        self._stop_timer()
        wait_seconds = self._process_jobs()
        self._start_timer(wait_seconds)

    def _create_default_executor(self):
        from apscheduler.executors.twisted import TwistedExecutor
        return TwistedExecutor()
@@ -1,133 +0,0 @@
"""
Generic thread pool class. Modeled after Java's ThreadPoolExecutor.
Please note that this ThreadPool does *not* fully implement the PEP 3148
ThreadPool!
"""

from threading import Thread, Lock, currentThread
from weakref import ref
import logging
import atexit

try:
    from queue import Queue, Empty
except ImportError:
    from Queue import Queue, Empty

logger = logging.getLogger(__name__)
_threadpools = set()


# Worker threads are daemonic in order to let the interpreter exit without
# an explicit shutdown of the thread pool. The following trick is necessary
# to allow worker threads to finish cleanly.
def _shutdown_all():
    for pool_ref in tuple(_threadpools):
        pool = pool_ref()
        if pool:
            pool.shutdown()

atexit.register(_shutdown_all)


class ThreadPool(object):
    def __init__(self, core_threads=0, max_threads=20, keepalive=1):
        """
        :param core_threads: maximum number of persistent threads in the pool
        :param max_threads: maximum number of total threads in the pool
        :param thread_class: callable that creates a Thread object
        :param keepalive: seconds to keep non-core worker threads waiting
            for new tasks
        """
        self.core_threads = core_threads
        self.max_threads = max(max_threads, core_threads, 1)
        self.keepalive = keepalive
        self._queue = Queue()
        self._threads_lock = Lock()
        self._threads = set()
        self._shutdown = False

        _threadpools.add(ref(self))
        logger.info('Started thread pool with %d core threads and %s maximum '
                    'threads', core_threads, max_threads or 'unlimited')

    def _adjust_threadcount(self):
        self._threads_lock.acquire()
        try:
            if self.num_threads < self.max_threads:
                self._add_thread(self.num_threads < self.core_threads)
        finally:
            self._threads_lock.release()

    def _add_thread(self, core):
        t = Thread(target=self._run_jobs, args=(core,))
        t.setDaemon(True)
        t.start()
        self._threads.add(t)

    def _run_jobs(self, core):
        logger.debug('Started worker thread')
        block = True
        timeout = None
        if not core:
            block = self.keepalive > 0
            timeout = self.keepalive

        while True:
            try:
                func, args, kwargs = self._queue.get(block, timeout)
            except Empty:
                break

            if self._shutdown:
                break

            try:
                func(*args, **kwargs)
            except:
                logger.exception('Error in worker thread')

        self._threads_lock.acquire()
        self._threads.remove(currentThread())
        self._threads_lock.release()

        logger.debug('Exiting worker thread')

    @property
    def num_threads(self):
        return len(self._threads)

    def submit(self, func, *args, **kwargs):
        if self._shutdown:
            raise RuntimeError('Cannot schedule new tasks after shutdown')

        self._queue.put((func, args, kwargs))
        self._adjust_threadcount()

    def shutdown(self, wait=True):
        if self._shutdown:
            return

        logging.info('Shutting down thread pool')
        self._shutdown = True
        _threadpools.remove(ref(self))

        self._threads_lock.acquire()
        for _ in range(self.num_threads):
            self._queue.put((None, None, None))
        self._threads_lock.release()

        if wait:
            self._threads_lock.acquire()
            threads = tuple(self._threads)
            self._threads_lock.release()
            for thread in threads:
                thread.join()

    def __repr__(self):
        if self.max_threads:
            threadcount = '%d/%d' % (self.num_threads, self.max_threads)
        else:
            threadcount = '%d' % self.num_threads

        return '<ThreadPool at %x; threads=%s>' % (id(self), threadcount)
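For reference, the removed pool was driven like this (a sketch against the old bundled module; the worker function is illustrative):

from apscheduler.threadpool import ThreadPool

def work(n):
    print('Processing %d' % n)

pool = ThreadPool(core_threads=2, max_threads=5)
for i in range(10):
    pool.submit(work, i)  # queues the task and spawns workers up to max_threads
pool.shutdown(wait=True)  # wakes each worker with a (None, None, None) sentinel, then joins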
@@ -1,3 +0,0 @@
from apscheduler.triggers.cron import CronTrigger
from apscheduler.triggers.interval import IntervalTrigger
from apscheduler.triggers.simple import SimpleTrigger
@@ -0,0 +1,19 @@
from abc import ABCMeta, abstractmethod

import six


class BaseTrigger(six.with_metaclass(ABCMeta)):
    """Abstract base class that defines the interface that every trigger must implement."""

    __slots__ = ()

    @abstractmethod
    def get_next_fire_time(self, previous_fire_time, now):
        """
        Returns the next datetime to fire on. If no such datetime can be calculated, returns
        ``None``.

        :param datetime.datetime previous_fire_time: the previous time the trigger was fired
        :param datetime.datetime now: current datetime
        """
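Any object implementing this interface can drive a job. A hypothetical subclass, purely to illustrate the contract (not part of the library):

from datetime import timedelta

from apscheduler.triggers.base import BaseTrigger

class EveryFullHourTrigger(BaseTrigger):
    """Hypothetical trigger that fires at the top of every hour."""

    def get_next_fire_time(self, previous_fire_time, now):
        # Round down to the hour, then step one hour forward; returning
        # None instead would mean the trigger never fires again.
        base = (previous_fire_time or now).replace(minute=0, second=0, microsecond=0)
        return base + timedelta(hours=1)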
@@ -1,32 +1,73 @@
from datetime import date, datetime
from datetime import datetime, timedelta

from apscheduler.triggers.cron.fields import *
from apscheduler.util import datetime_ceil, convert_to_datetime
from tzlocal import get_localzone
import six

from apscheduler.triggers.base import BaseTrigger
from apscheduler.triggers.cron.fields import (
    BaseField, WeekField, DayOfMonthField, DayOfWeekField, DEFAULT_VALUES)
from apscheduler.util import datetime_ceil, convert_to_datetime, datetime_repr, astimezone


class CronTrigger(object):
    FIELD_NAMES = ('year', 'month', 'day', 'week', 'day_of_week', 'hour',
                   'minute', 'second')
    FIELDS_MAP = {'year': BaseField,
                  'month': BaseField,
                  'week': WeekField,
                  'day': DayOfMonthField,
                  'day_of_week': DayOfWeekField,
                  'hour': BaseField,
                  'minute': BaseField,
                  'second': BaseField}
class CronTrigger(BaseTrigger):
    """
    Triggers when current time matches all specified time constraints,
    similarly to how the UNIX cron scheduler works.

    def __init__(self, **values):
        self.start_date = values.pop('start_date', None)
        if self.start_date:
            self.start_date = convert_to_datetime(self.start_date)
    :param int|str year: 4-digit year
    :param int|str month: month (1-12)
    :param int|str day: day of the month (1-31)
    :param int|str week: ISO week (1-53)
    :param int|str day_of_week: number or name of weekday (0-6 or mon,tue,wed,thu,fri,sat,sun)
    :param int|str hour: hour (0-23)
    :param int|str minute: minute (0-59)
    :param int|str second: second (0-59)
    :param datetime|str start_date: earliest possible date/time to trigger on (inclusive)
    :param datetime|str end_date: latest possible date/time to trigger on (inclusive)
    :param datetime.tzinfo|str timezone: time zone to use for the date/time calculations (defaults
        to scheduler timezone)

    .. note:: The first weekday is always **monday**.
    """

    FIELD_NAMES = ('year', 'month', 'day', 'week', 'day_of_week', 'hour', 'minute', 'second')
    FIELDS_MAP = {
        'year': BaseField,
        'month': BaseField,
        'week': WeekField,
        'day': DayOfMonthField,
        'day_of_week': DayOfWeekField,
        'hour': BaseField,
        'minute': BaseField,
        'second': BaseField
    }

    __slots__ = 'timezone', 'start_date', 'end_date', 'fields'

    def __init__(self, year=None, month=None, day=None, week=None, day_of_week=None, hour=None,
                 minute=None, second=None, start_date=None, end_date=None, timezone=None):
        if timezone:
            self.timezone = astimezone(timezone)
        elif isinstance(start_date, datetime) and start_date.tzinfo:
            self.timezone = start_date.tzinfo
        elif isinstance(end_date, datetime) and end_date.tzinfo:
            self.timezone = end_date.tzinfo
        else:
            self.timezone = get_localzone()

        self.start_date = convert_to_datetime(start_date, self.timezone, 'start_date')
        self.end_date = convert_to_datetime(end_date, self.timezone, 'end_date')

        values = dict((key, value) for (key, value) in six.iteritems(locals())
                      if key in self.FIELD_NAMES and value is not None)
        self.fields = []
        assign_defaults = False
        for field_name in self.FIELD_NAMES:
            if field_name in values:
                exprs = values.pop(field_name)
                is_default = False
            elif not values:
                assign_defaults = not values
            elif assign_defaults:
                exprs = DEFAULT_VALUES[field_name]
                is_default = True
            else:

@@ -39,18 +80,18 @@ class CronTrigger(object):

    def _increment_field_value(self, dateval, fieldnum):
        """
        Increments the designated field and resets all less significant fields
        to their minimum values.
        Increments the designated field and resets all less significant fields to their minimum
        values.

        :type dateval: datetime
        :type fieldnum: int
        :type amount: int
        :return: a tuple containing the new date, and the number of the field that was actually
            incremented
        :rtype: tuple
        :return: a tuple containing the new date, and the number of the field
            that was actually incremented
        """
        i = 0

        values = {}
        i = 0
        while i < len(self.fields):
            field = self.fields[i]
            if not field.REAL:

@@ -77,7 +118,8 @@ class CronTrigger(object):
                values[field.name] = value + 1
            i += 1

        return datetime(**values), fieldnum
        difference = datetime(**values) - dateval.replace(tzinfo=None)
        return self.timezone.normalize(dateval + difference), fieldnum

    def _set_field_value(self, dateval, fieldnum, new_value):
        values = {}

@@ -90,13 +132,18 @@ class CronTrigger(object):
            else:
                values[field.name] = new_value

        return datetime(**values)
        return self.timezone.localize(datetime(**values))

    def get_next_fire_time(self, previous_fire_time, now):
        if previous_fire_time:
            start_date = min(now, previous_fire_time + timedelta(microseconds=1))
            if start_date == previous_fire_time:
                start_date += timedelta(microseconds=1)
        else:
            start_date = max(now, self.start_date) if self.start_date else now

    def get_next_fire_time(self, start_date):
        if self.start_date:
            start_date = max(start_date, self.start_date)
        next_date = datetime_ceil(start_date)
        fieldnum = 0
        next_date = datetime_ceil(start_date).astimezone(self.timezone)
        while 0 <= fieldnum < len(self.fields):
            field = self.fields[fieldnum]
            curr_value = field.get_value(next_date)

@@ -104,32 +151,56 @@ class CronTrigger(object):

            if next_value is None:
                # No valid value was found
                next_date, fieldnum = self._increment_field_value(next_date,
                                                                  fieldnum - 1)
                next_date, fieldnum = self._increment_field_value(next_date, fieldnum - 1)
            elif next_value > curr_value:
                # A valid, but higher than the starting value, was found
                if field.REAL:
                    next_date = self._set_field_value(next_date, fieldnum,
                                                      next_value)
                    next_date = self._set_field_value(next_date, fieldnum, next_value)
                    fieldnum += 1
                else:
                    next_date, fieldnum = self._increment_field_value(next_date,
                                                                      fieldnum)
                    next_date, fieldnum = self._increment_field_value(next_date, fieldnum)
            else:
                # A valid value was found, no changes necessary
                fieldnum += 1

            # Return if the date has rolled past the end date
            if self.end_date and next_date > self.end_date:
                return None

        if fieldnum >= 0:
            return next_date

    def __getstate__(self):
        return {
            'version': 1,
            'timezone': self.timezone,
            'start_date': self.start_date,
            'end_date': self.end_date,
            'fields': self.fields
        }

    def __setstate__(self, state):
        # This is for compatibility with APScheduler 3.0.x
        if isinstance(state, tuple):
            state = state[1]

        if state.get('version', 1) > 1:
            raise ValueError(
                'Got serialized data for version %s of %s, but only version 1 can be handled' %
                (state['version'], self.__class__.__name__))

        self.timezone = state['timezone']
        self.start_date = state['start_date']
        self.end_date = state['end_date']
        self.fields = state['fields']

    def __str__(self):
        options = ["%s='%s'" % (f.name, str(f)) for f in self.fields
                   if not f.is_default]
        options = ["%s='%s'" % (f.name, f) for f in self.fields if not f.is_default]
        return 'cron[%s]' % (', '.join(options))

    def __repr__(self):
        options = ["%s='%s'" % (f.name, str(f)) for f in self.fields
                   if not f.is_default]
        options = ["%s='%s'" % (f.name, f) for f in self.fields if not f.is_default]
        if self.start_date:
            options.append("start_date='%s'" % self.start_date.isoformat(' '))
        return '<%s (%s)>' % (self.__class__.__name__, ', '.join(options))
            options.append("start_date='%s'" % datetime_repr(self.start_date))
        return "<%s (%s, timezone='%s')>" % (
            self.__class__.__name__, ', '.join(options), self.timezone)
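A short sketch of how the new trigger is queried (the schedule values are illustrative; the two-argument ``get_next_fire_time`` form matches the code above):

from datetime import datetime

from tzlocal import get_localzone
from apscheduler.triggers.cron import CronTrigger

tz = get_localzone()
trigger = CronTrigger(day_of_week='mon-fri', hour=3, minute=30, timezone=tz)

now = datetime.now(tz)
first = trigger.get_next_fire_time(None, now)    # next weekday 03:30 after `now`
second = trigger.get_next_fire_time(first, now)  # search resumes just past `first`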
@@ -1,6 +1,4 @@
"""
This module contains the expressions applicable for CronTrigger's fields.
"""
"""This module contains the expressions applicable for CronTrigger's fields."""

from calendar import monthrange
import re

@@ -8,7 +6,7 @@ import re
from apscheduler.util import asint

__all__ = ('AllExpression', 'RangeExpression', 'WeekdayRangeExpression',
           'WeekdayPositionExpression')
           'WeekdayPositionExpression', 'LastDayOfMonthExpression')


WEEKDAYS = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']

@@ -37,6 +35,9 @@ class AllExpression(object):
        if next <= maxval:
            return next

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.step == other.step

    def __str__(self):
        if self.step:
            return '*/%d' % self.step

@@ -57,30 +58,30 @@ class RangeExpression(AllExpression):
        if last is None and step is None:
            last = first
        if last is not None and first > last:
            raise ValueError('The minimum value in a range must not be '
                             'higher than the maximum')
            raise ValueError('The minimum value in a range must not be higher than the maximum')
        self.first = first
        self.last = last

    def get_next_value(self, date, field):
        start = field.get_value(date)
        startval = field.get_value(date)
        minval = field.get_min(date)
        maxval = field.get_max(date)

        # Apply range limits
        minval = max(minval, self.first)
        if self.last is not None:
            maxval = min(maxval, self.last)
        start = max(start, minval)
        maxval = min(maxval, self.last) if self.last is not None else maxval
        nextval = max(minval, startval)

        if not self.step:
            next = start
        else:
            distance_to_next = (self.step - (start - minval)) % self.step
            next = start + distance_to_next
        # Apply the step if defined
        if self.step:
            distance_to_next = (self.step - (nextval - minval)) % self.step
            nextval += distance_to_next

        if next <= maxval:
            return next
        return nextval if nextval <= maxval else None

    def __eq__(self, other):
        return (isinstance(other, self.__class__) and self.first == other.first and
                self.last == other.last)

    def __str__(self):
        if self.last != self.first and self.last is not None:

@@ -102,8 +103,7 @@ class RangeExpression(AllExpression):


class WeekdayRangeExpression(RangeExpression):
    value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?',
                          re.IGNORECASE)
    value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?', re.IGNORECASE)

    def __init__(self, first, last=None):
        try:

@@ -135,8 +135,8 @@ class WeekdayRangeExpression(RangeExpression):

class WeekdayPositionExpression(AllExpression):
    options = ['1st', '2nd', '3rd', '4th', '5th', 'last']
    value_re = re.compile(r'(?P<option_name>%s) +(?P<weekday_name>(?:\d+|\w+))'
                          % '|'.join(options), re.IGNORECASE)
    value_re = re.compile(r'(?P<option_name>%s) +(?P<weekday_name>(?:\d+|\w+))' %
                          '|'.join(options), re.IGNORECASE)

    def __init__(self, option_name, weekday_name):
        try:

@@ -150,8 +150,7 @@ class WeekdayPositionExpression(AllExpression):
            raise ValueError('Invalid weekday name "%s"' % weekday_name)

    def get_next_value(self, date, field):
        # Figure out the weekday of the month's first day and the number
        # of days in that month
        # Figure out the weekday of the month's first day and the number of days in that month
        first_day_wday, last_day = monthrange(date.year, date.month)

        # Calculate which day of the month is the first of the target weekdays

@@ -163,16 +162,34 @@ class WeekdayPositionExpression(AllExpression):
        if self.option_num < 5:
            target_day = first_hit_day + self.option_num * 7
        else:
            target_day = first_hit_day + ((last_day - first_hit_day) / 7) * 7
            target_day = first_hit_day + ((last_day - first_hit_day) // 7) * 7

        if target_day <= last_day and target_day >= date.day:
            return target_day

    def __eq__(self, other):
        return (super(WeekdayPositionExpression, self).__eq__(other) and
                self.option_num == other.option_num and self.weekday == other.weekday)

    def __str__(self):
        return '%s %s' % (self.options[self.option_num],
                          WEEKDAYS[self.weekday])
        return '%s %s' % (self.options[self.option_num], WEEKDAYS[self.weekday])

    def __repr__(self):
        return "%s('%s', '%s')" % (self.__class__.__name__,
                                   self.options[self.option_num],
        return "%s('%s', '%s')" % (self.__class__.__name__, self.options[self.option_num],
                                   WEEKDAYS[self.weekday])


class LastDayOfMonthExpression(AllExpression):
    value_re = re.compile(r'last', re.IGNORECASE)

    def __init__(self):
        pass

    def get_next_value(self, date, field):
        return monthrange(date.year, date.month)[1]

    def __str__(self):
        return 'last'

    def __repr__(self):
        return "%s()" % self.__class__.__name__
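A worked example of the step arithmetic in ``RangeExpression.get_next_value`` (numbers chosen for illustration, standing in for a ``minute='10-50/15'`` expression):

step, minval, maxval = 15, 10, 50
startval = 27                                            # current minute value
distance_to_next = (step - (startval - minval)) % step   # (15 - 17) % 15 == 13
nextval = startval + distance_to_next                    # 40: next match, still <= maxval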
@@ -1,22 +1,22 @@
"""
Fields represent CronTrigger options which map to :class:`~datetime.datetime`
fields.
"""
"""Fields represent CronTrigger options which map to :class:`~datetime.datetime` fields."""

from calendar import monthrange

from apscheduler.triggers.cron.expressions import *

__all__ = ('MIN_VALUES', 'MAX_VALUES', 'DEFAULT_VALUES', 'BaseField',
           'WeekField', 'DayOfMonthField', 'DayOfWeekField')
from apscheduler.triggers.cron.expressions import (
    AllExpression, RangeExpression, WeekdayPositionExpression, LastDayOfMonthExpression,
    WeekdayRangeExpression)


MIN_VALUES = {'year': 1970, 'month': 1, 'day': 1, 'week': 1,
              'day_of_week': 0, 'hour': 0, 'minute': 0, 'second': 0}
MAX_VALUES = {'year': 2 ** 63, 'month': 12, 'day': 31, 'week': 53,
              'day_of_week': 6, 'hour': 23, 'minute': 59, 'second': 59}
DEFAULT_VALUES = {'year': '*', 'month': 1, 'day': 1, 'week': '*',
                  'day_of_week': '*', 'hour': 0, 'minute': 0, 'second': 0}
__all__ = ('MIN_VALUES', 'MAX_VALUES', 'DEFAULT_VALUES', 'BaseField', 'WeekField',
           'DayOfMonthField', 'DayOfWeekField')


MIN_VALUES = {'year': 1970, 'month': 1, 'day': 1, 'week': 1, 'day_of_week': 0, 'hour': 0,
              'minute': 0, 'second': 0}
MAX_VALUES = {'year': 2 ** 63, 'month': 12, 'day': 31, 'week': 53, 'day_of_week': 6, 'hour': 23,
              'minute': 59, 'second': 59}
DEFAULT_VALUES = {'year': '*', 'month': 1, 'day': 1, 'week': '*', 'day_of_week': '*', 'hour': 0,
                  'minute': 0, 'second': 0}


class BaseField(object):

@@ -65,16 +65,17 @@ class BaseField(object):
                self.expressions.append(compiled_expr)
                return

        raise ValueError('Unrecognized expression "%s" for field "%s"' %
                         (expr, self.name))
        raise ValueError('Unrecognized expression "%s" for field "%s"' % (expr, self.name))

    def __eq__(self, other):
        return isinstance(self, self.__class__) and self.expressions == other.expressions

    def __str__(self):
        expr_strings = (str(e) for e in self.expressions)
        return ','.join(expr_strings)

    def __repr__(self):
        return "%s('%s', '%s')" % (self.__class__.__name__, self.name,
                                   str(self))
        return "%s('%s', '%s')" % (self.__class__.__name__, self.name, self)


class WeekField(BaseField):

@@ -85,7 +86,7 @@ class WeekField(BaseField):


class DayOfMonthField(BaseField):
    COMPILERS = BaseField.COMPILERS + [WeekdayPositionExpression]
    COMPILERS = BaseField.COMPILERS + [WeekdayPositionExpression, LastDayOfMonthExpression]

    def get_max(self, dateval):
        return monthrange(dateval.year, dateval.month)[1]
@@ -0,0 +1,51 @@
from datetime import datetime

from tzlocal import get_localzone

from apscheduler.triggers.base import BaseTrigger
from apscheduler.util import convert_to_datetime, datetime_repr, astimezone


class DateTrigger(BaseTrigger):
    """
    Triggers once on the given datetime. If ``run_date`` is left empty, current time is used.

    :param datetime|str run_date: the date/time to run the job at
    :param datetime.tzinfo|str timezone: time zone for ``run_date`` if it doesn't have one already
    """

    __slots__ = 'run_date'

    def __init__(self, run_date=None, timezone=None):
        timezone = astimezone(timezone) or get_localzone()
        if run_date is not None:
            self.run_date = convert_to_datetime(run_date, timezone, 'run_date')
        else:
            self.run_date = datetime.now(timezone)

    def get_next_fire_time(self, previous_fire_time, now):
        return self.run_date if previous_fire_time is None else None

    def __getstate__(self):
        return {
            'version': 1,
            'run_date': self.run_date
        }

    def __setstate__(self, state):
        # This is for compatibility with APScheduler 3.0.x
        if isinstance(state, tuple):
            state = state[1]

        if state.get('version', 1) > 1:
            raise ValueError(
                'Got serialized data for version %s of %s, but only version 1 can be handled' %
                (state['version'], self.__class__.__name__))

        self.run_date = state['run_date']

    def __str__(self):
        return 'date[%s]' % datetime_repr(self.run_date)

    def __repr__(self):
        return "<%s (run_date='%s')>" % (self.__class__.__name__, datetime_repr(self.run_date))
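A sketch of the one-shot semantics (the dates are illustrative):

from datetime import datetime

from pytz import utc

from apscheduler.triggers.date import DateTrigger

trigger = DateTrigger(run_date='2017-06-01 03:00:00', timezone='UTC')
now = datetime.now(utc)
print(trigger.get_next_fire_time(None, now))              # the run_date itself
print(trigger.get_next_fire_time(trigger.run_date, now))  # None -- it fires only once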
@@ -1,39 +1,92 @@
from datetime import datetime, timedelta
from datetime import timedelta, datetime
from math import ceil

from apscheduler.util import convert_to_datetime, timedelta_seconds
from tzlocal import get_localzone

from apscheduler.triggers.base import BaseTrigger
from apscheduler.util import convert_to_datetime, timedelta_seconds, datetime_repr, astimezone


class IntervalTrigger(object):
    def __init__(self, interval, start_date=None):
        if not isinstance(interval, timedelta):
            raise TypeError('interval must be a timedelta')
        if start_date:
            start_date = convert_to_datetime(start_date)
class IntervalTrigger(BaseTrigger):
    """
    Triggers on specified intervals, starting on ``start_date`` if specified, ``datetime.now()`` +
    interval otherwise.

        self.interval = interval
    :param int weeks: number of weeks to wait
    :param int days: number of days to wait
    :param int hours: number of hours to wait
    :param int minutes: number of minutes to wait
    :param int seconds: number of seconds to wait
    :param datetime|str start_date: starting point for the interval calculation
    :param datetime|str end_date: latest possible date/time to trigger on
    :param datetime.tzinfo|str timezone: time zone to use for the date/time calculations
    """

    __slots__ = 'timezone', 'start_date', 'end_date', 'interval', 'interval_length'

    def __init__(self, weeks=0, days=0, hours=0, minutes=0, seconds=0, start_date=None,
                 end_date=None, timezone=None):
        self.interval = timedelta(weeks=weeks, days=days, hours=hours, minutes=minutes,
                                  seconds=seconds)
        self.interval_length = timedelta_seconds(self.interval)
        if self.interval_length == 0:
            self.interval = timedelta(seconds=1)
            self.interval_length = 1

        if start_date is None:
            self.start_date = datetime.now() + self.interval
        if timezone:
            self.timezone = astimezone(timezone)
        elif isinstance(start_date, datetime) and start_date.tzinfo:
            self.timezone = start_date.tzinfo
        elif isinstance(end_date, datetime) and end_date.tzinfo:
            self.timezone = end_date.tzinfo
        else:
            self.start_date = convert_to_datetime(start_date)
            self.timezone = get_localzone()

    def get_next_fire_time(self, start_date):
        if start_date < self.start_date:
            return self.start_date
        start_date = start_date or (datetime.now(self.timezone) + self.interval)
        self.start_date = convert_to_datetime(start_date, self.timezone, 'start_date')
        self.end_date = convert_to_datetime(end_date, self.timezone, 'end_date')

        timediff_seconds = timedelta_seconds(start_date - self.start_date)
        next_interval_num = int(ceil(timediff_seconds / self.interval_length))
        return self.start_date + self.interval * next_interval_num
    def get_next_fire_time(self, previous_fire_time, now):
        if previous_fire_time:
            next_fire_time = previous_fire_time + self.interval
        elif self.start_date > now:
            next_fire_time = self.start_date
        else:
            timediff_seconds = timedelta_seconds(now - self.start_date)
            next_interval_num = int(ceil(timediff_seconds / self.interval_length))
            next_fire_time = self.start_date + self.interval * next_interval_num

        if not self.end_date or next_fire_time <= self.end_date:
            return self.timezone.normalize(next_fire_time)

    def __getstate__(self):
        return {
            'version': 1,
            'timezone': self.timezone,
            'start_date': self.start_date,
            'end_date': self.end_date,
            'interval': self.interval
        }

    def __setstate__(self, state):
        # This is for compatibility with APScheduler 3.0.x
        if isinstance(state, tuple):
            state = state[1]

        if state.get('version', 1) > 1:
            raise ValueError(
                'Got serialized data for version %s of %s, but only version 1 can be handled' %
                (state['version'], self.__class__.__name__))

        self.timezone = state['timezone']
        self.start_date = state['start_date']
        self.end_date = state['end_date']
        self.interval = state['interval']
        self.interval_length = timedelta_seconds(self.interval)

    def __str__(self):
        return 'interval[%s]' % str(self.interval)

    def __repr__(self):
        return "<%s (interval=%s, start_date=%s)>" % (
            self.__class__.__name__, repr(self.interval),
            repr(self.start_date))
        return "<%s (interval=%r, start_date='%s', timezone='%s')>" % (
            self.__class__.__name__, self.interval, datetime_repr(self.start_date), self.timezone)
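A worked example of the next-fire computation in the branch above (values are illustrative):

from datetime import datetime, timedelta
from math import ceil

start_date = datetime(2017, 6, 1, 12, 0, 0)
interval = timedelta(minutes=5)                       # interval_length == 300 seconds
now = datetime(2017, 6, 1, 12, 7, 42)

timediff_seconds = (now - start_date).total_seconds()       # 462.0
next_interval_num = int(ceil(timediff_seconds / 300.0))     # 2
next_fire_time = start_date + interval * next_interval_num  # 12:10:00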
|
|
|
@ -1,17 +0,0 @@
|
|||
from apscheduler.util import convert_to_datetime
|
||||
|
||||
|
||||
class SimpleTrigger(object):
|
||||
def __init__(self, run_date):
|
||||
self.run_date = convert_to_datetime(run_date)
|
||||
|
||||
def get_next_fire_time(self, start_date):
|
||||
if self.run_date >= start_date:
|
||||
return self.run_date
|
||||
|
||||
def __str__(self):
|
||||
return 'date[%s]' % str(self.run_date)
|
||||
|
||||
def __repr__(self):
|
||||
return '<%s (run_date=%s)>' % (
|
||||
self.__class__.__name__, repr(self.run_date))
|
|
@ -1,26 +1,50 @@
|
|||
"""
|
||||
This module contains several handy functions primarily meant for internal use.
|
||||
"""
|
||||
"""This module contains several handy functions primarily meant for internal use."""
|
||||
|
||||
from datetime import date, datetime, timedelta
|
||||
from time import mktime
|
||||
from __future__ import division
|
||||
from datetime import date, datetime, time, timedelta, tzinfo
|
||||
from calendar import timegm
|
||||
import re
|
||||
import sys
|
||||
from types import MethodType
|
||||
from functools import partial
|
||||
|
||||
__all__ = ('asint', 'asbool', 'convert_to_datetime', 'timedelta_seconds',
|
||||
'time_difference', 'datetime_ceil', 'combine_opts',
|
||||
'get_callable_name', 'obj_to_ref', 'ref_to_obj', 'maybe_ref',
|
||||
'to_unicode', 'iteritems', 'itervalues', 'xrange')
|
||||
from pytz import timezone, utc
|
||||
import six
|
||||
|
||||
try:
|
||||
from inspect import signature
|
||||
except ImportError: # pragma: nocover
|
||||
from funcsigs import signature
|
||||
|
||||
try:
|
||||
from threading import TIMEOUT_MAX
|
||||
except ImportError:
|
||||
TIMEOUT_MAX = 4294967 # Maximum value accepted by Event.wait() on Windows
|
||||
|
||||
__all__ = ('asint', 'asbool', 'astimezone', 'convert_to_datetime', 'datetime_to_utc_timestamp',
|
||||
'utc_timestamp_to_datetime', 'timedelta_seconds', 'datetime_ceil', 'get_callable_name',
|
||||
'obj_to_ref', 'ref_to_obj', 'maybe_ref', 'repr_escape', 'check_callable_args')
|
||||
|
||||
|
||||
class _Undefined(object):
|
||||
def __nonzero__(self):
|
||||
return False
|
||||
|
||||
def __bool__(self):
|
||||
return False
|
||||
|
||||
def __repr__(self):
|
||||
return '<undefined>'
|
||||
|
||||
|
||||
undefined = _Undefined() #: a unique object that only signifies that no value is defined
|
||||
|
||||
|
||||
def asint(text):
|
||||
"""
|
||||
Safely converts a string to an integer, returning None if the string
|
||||
is None.
|
||||
Safely converts a string to an integer, returning ``None`` if the string is ``None``.
|
||||
|
||||
:type text: str
|
||||
:rtype: int
|
||||
|
||||
"""
|
||||
if text is not None:
|
||||
return int(text)
|
||||
|
@ -31,6 +55,7 @@ def asbool(obj):
|
|||
Interprets an object as a boolean value.
|
||||
|
||||
:rtype: bool
|
||||
|
||||
"""
|
||||
if isinstance(obj, str):
|
||||
obj = obj.strip().lower()
|
||||
|
@ -42,36 +67,105 @@ def asbool(obj):
|
|||
return bool(obj)
|
||||
|
||||
|
||||
def astimezone(obj):
|
||||
"""
|
||||
Interprets an object as a timezone.
|
||||
|
||||
:rtype: tzinfo
|
||||
|
||||
"""
|
||||
if isinstance(obj, six.string_types):
|
||||
return timezone(obj)
|
||||
if isinstance(obj, tzinfo):
|
||||
if not hasattr(obj, 'localize') or not hasattr(obj, 'normalize'):
|
||||
raise TypeError('Only timezones from the pytz library are supported')
|
||||
if obj.zone == 'local':
|
||||
raise ValueError(
|
||||
'Unable to determine the name of the local timezone -- you must explicitly '
|
||||
'specify the name of the local timezone. Please refrain from using timezones like '
|
||||
'EST to prevent problems with daylight saving time. Instead, use a locale based '
|
||||
'timezone name (such as Europe/Helsinki).')
|
||||
return obj
|
||||
if obj is not None:
|
||||
raise TypeError('Expected tzinfo, got %s instead' % obj.__class__.__name__)
|
||||
|
||||
|
||||
_DATE_REGEX = re.compile(
|
||||
r'(?P<year>\d{4})-(?P<month>\d{1,2})-(?P<day>\d{1,2})'
|
||||
r'(?: (?P<hour>\d{1,2}):(?P<minute>\d{1,2}):(?P<second>\d{1,2})'
|
||||
r'(?:\.(?P<microsecond>\d{1,6}))?)?')
|
||||
|
||||
|
||||
def convert_to_datetime(input, tz, arg_name):
    """
    Converts the given object to a timezone aware datetime object.

    If a timezone aware datetime object is passed, it is returned unmodified.
    If a native datetime object is passed, it is given the specified timezone.
    If the input is a string, it is parsed as a datetime with the given timezone.

    Date strings are accepted in three different forms: date only (Y-m-d), date with time
    (Y-m-d H:M:S) or with date+time with microseconds (Y-m-d H:M:S.micro).

    :param str|datetime input: the datetime or string to convert to a timezone aware datetime
    :param datetime.tzinfo tz: timezone to interpret ``input`` in
    :param str arg_name: the name of the argument (used in an error message)
    :rtype: datetime

    """
    if input is None:
        return
    elif isinstance(input, datetime):
        datetime_ = input
    elif isinstance(input, date):
        datetime_ = datetime.combine(input, time())
    elif isinstance(input, six.string_types):
        m = _DATE_REGEX.match(input)
        if not m:
            raise ValueError('Invalid date string')
        values = [(k, int(v or 0)) for k, v in m.groupdict().items()]
        values = dict(values)
        datetime_ = datetime(**values)
    else:
        raise TypeError('Unsupported type for %s: %s' % (arg_name, input.__class__.__name__))

    if datetime_.tzinfo is not None:
        return datetime_
    if tz is None:
        raise ValueError(
            'The "tz" argument must be specified if %s has no timezone information' % arg_name)
    if isinstance(tz, six.string_types):
        tz = timezone(tz)

    try:
        return tz.localize(datetime_, is_dst=None)
    except AttributeError:
        raise TypeError(
            'Only pytz timezones are supported (need the localize() and normalize() methods)')

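# A minimal usage sketch for convert_to_datetime (values illustrative, assumes
# pytz is available alongside this module):
#
#     from pytz import timezone
#     aware = convert_to_datetime('2017-06-05 16:30:00',
#                                 timezone('Europe/Helsinki'), 'run_date')
#     # aware is now timezone-aware; a naive input with tz=None raises
#     # ValueError, and a non-pytz tzinfo raises TypeError.
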
def datetime_to_utc_timestamp(timeval):
    """
    Converts a datetime instance to a timestamp.

    :type timeval: datetime
    :rtype: float

    """
    if timeval is not None:
        return timegm(timeval.utctimetuple()) + timeval.microsecond / 1000000


def utc_timestamp_to_datetime(timestamp):
    """
    Converts the given timestamp to a datetime instance.

    :type timestamp: float
    :rtype: datetime

    """
    if timestamp is not None:
        return datetime.fromtimestamp(timestamp, utc)

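# The two helpers above are inverses for an aware UTC datetime (sketch):
#
#     from datetime import datetime
#     from pytz import utc
#     ts = datetime_to_utc_timestamp(datetime(2017, 6, 5, tzinfo=utc))
#     assert utc_timestamp_to_datetime(ts) == datetime(2017, 6, 5, tzinfo=utc)
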
def timedelta_seconds(delta):

@@ -80,151 +174,212 @@ def timedelta_seconds(delta):

    :type delta: timedelta
    :rtype: float

    """
    return delta.days * 24 * 60 * 60 + delta.seconds + \
        delta.microseconds / 1000000.0

def time_difference(date1, date2):
    """
    Returns the time difference in seconds between the given two
    datetime objects. The difference is calculated as: date1 - date2.

    :param date1: the later datetime
    :type date1: datetime
    :param date2: the earlier datetime
    :type date2: datetime
    :rtype: float
    """
    later = mktime(date1.timetuple()) + date1.microsecond / 1000000.0
    earlier = mktime(date2.timetuple()) + date2.microsecond / 1000000.0
    return later - earlier

def datetime_ceil(dateval):
    """
    Rounds the given datetime object upwards.

    :type dateval: datetime

    """
    if dateval.microsecond > 0:
        return dateval + timedelta(seconds=1, microseconds=-dateval.microsecond)
    return dateval

def combine_opts(global_config, prefix, local_config={}):
    """
    Returns a subdictionary from keys and values of ``global_config`` where
    the key starts with the given prefix, combined with options from
    local_config. The keys in the subdictionary have the prefix removed.

    :type global_config: dict
    :type prefix: str
    :type local_config: dict
    :rtype: dict
    """
    prefixlen = len(prefix)
    subconf = {}
    for key, value in global_config.items():
        if key.startswith(prefix):
            key = key[prefixlen:]
            subconf[key] = value
    subconf.update(local_config)
    return subconf

def datetime_repr(dateval):
    return dateval.strftime('%Y-%m-%d %H:%M:%S %Z') if dateval else 'None'

def get_callable_name(func):
    """
    Returns the best available display name for the given function/callable.

    :rtype: str

    """
    # the easy case (on Python 3.3+)
    if hasattr(func, '__qualname__'):
        return func.__qualname__

    # class methods, bound and unbound methods
    f_self = getattr(func, '__self__', None) or getattr(func, 'im_self', None)
    if f_self and hasattr(func, '__name__'):
        f_class = f_self if isinstance(f_self, type) else f_self.__class__
    else:
        f_class = getattr(func, 'im_class', None)

    if f_class and hasattr(func, '__name__'):
        return '%s.%s' % (f_class.__name__, func.__name__)

    # class or class instance
    if hasattr(func, '__call__'):
        if hasattr(func, '__name__'):
            # function, unbound method or a class with a __call__ method
            return func.__name__

        # instance of a class with a __call__ method
        return func.__class__.__name__

    raise TypeError('Unable to determine a name for %r -- maybe it is not a callable?' % func)

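# Illustrative results for get_callable_name:
#
#     get_callable_name(len)            # -> 'len'
#     get_callable_name(dict.fromkeys)  # -> 'dict.fromkeys'
#     get_callable_name(lambda: None)   # -> '<lambda>'
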
def obj_to_ref(obj):
    """
    Returns the path to the given callable.

    :rtype: str
    :raises TypeError: if the given object is not callable
    :raises ValueError: if the given object is a :class:`~functools.partial`, lambda or a nested
        function

    """
    if isinstance(obj, partial):
        raise ValueError('Cannot create a reference to a partial()')

    name = get_callable_name(obj)
    if '<lambda>' in name:
        raise ValueError('Cannot create a reference to a lambda')
    if '<locals>' in name:
        raise ValueError('Cannot create a reference to a nested function')

    return '%s:%s' % (obj.__module__, name)

def ref_to_obj(ref):
    """
    Returns the object pointed to by ``ref``.

    :type ref: str

    """
    if not isinstance(ref, six.string_types):
        raise TypeError('References must be strings')
    if ':' not in ref:
        raise ValueError('Invalid reference')

    modulename, rest = ref.split(':', 1)
    try:
        obj = __import__(modulename, fromlist=[rest])
    except ImportError:
        raise LookupError('Error resolving reference %s: could not import module' % ref)

    try:
        for name in rest.split('.'):
            obj = getattr(obj, name)
        return obj
    except Exception:
        raise LookupError('Error resolving reference %s: error looking up object' % ref)

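# obj_to_ref and ref_to_obj are designed to round-trip (the module path below
# is illustrative):
#
#     ref = obj_to_ref(timedelta_seconds)  # e.g. 'apscheduler.util:timedelta_seconds'
#     assert ref_to_obj(ref) is timedelta_seconds
#     # partials, lambdas and nested functions raise ValueError in obj_to_ref
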
def maybe_ref(ref):
    """
    Returns the object that the given reference points to, if it is indeed a reference.
    If it is not a reference, the object is returned as-is.

    """
    if not isinstance(ref, str):
        return ref
    return ref_to_obj(ref)

def to_unicode(string, encoding='ascii'):
    """
    Safely converts a string to a unicode representation on any
    Python version.
    """
    if hasattr(string, 'decode'):
        return string.decode(encoding, 'ignore')
    return string  # pragma: nocover


if six.PY2:
    def repr_escape(string):
        if isinstance(string, six.text_type):
            return string.encode('ascii', 'backslashreplace')
        return string
else:
    def repr_escape(string):
        return string


if sys.version_info < (3, 0):  # pragma: nocover
    iteritems = lambda d: d.iteritems()
    itervalues = lambda d: d.itervalues()
    xrange = xrange
    basestring = basestring
else:  # pragma: nocover
    iteritems = lambda d: d.items()
    itervalues = lambda d: d.values()
    xrange = range
    basestring = str

def check_callable_args(func, args, kwargs):
    """
    Ensures that the given callable can be called with the given arguments.

    :type args: tuple
    :type kwargs: dict

    """
    pos_kwargs_conflicts = []  # parameters that have a match in both args and kwargs
    positional_only_kwargs = []  # positional-only parameters that have a match in kwargs
    unsatisfied_args = []  # parameters in signature that don't have a match in args or kwargs
    unsatisfied_kwargs = []  # keyword-only arguments that don't have a match in kwargs
    unmatched_args = list(args)  # args that didn't match any of the parameters in the signature
    # kwargs that didn't match any of the parameters in the signature
    unmatched_kwargs = list(kwargs)
    # indicates if the signature defines *args and **kwargs respectively
    has_varargs = has_var_kwargs = False

    try:
        sig = signature(func)
    except ValueError:
        # signature() doesn't work against every kind of callable
        return

    for param in six.itervalues(sig.parameters):
        if param.kind == param.POSITIONAL_OR_KEYWORD:
            if param.name in unmatched_kwargs and unmatched_args:
                pos_kwargs_conflicts.append(param.name)
            elif unmatched_args:
                del unmatched_args[0]
            elif param.name in unmatched_kwargs:
                unmatched_kwargs.remove(param.name)
            elif param.default is param.empty:
                unsatisfied_args.append(param.name)
        elif param.kind == param.POSITIONAL_ONLY:
            if unmatched_args:
                del unmatched_args[0]
            elif param.name in unmatched_kwargs:
                unmatched_kwargs.remove(param.name)
                positional_only_kwargs.append(param.name)
            elif param.default is param.empty:
                unsatisfied_args.append(param.name)
        elif param.kind == param.KEYWORD_ONLY:
            if param.name in unmatched_kwargs:
                unmatched_kwargs.remove(param.name)
            elif param.default is param.empty:
                unsatisfied_kwargs.append(param.name)
        elif param.kind == param.VAR_POSITIONAL:
            has_varargs = True
        elif param.kind == param.VAR_KEYWORD:
            has_var_kwargs = True

    # Make sure there are no conflicts between args and kwargs
    if pos_kwargs_conflicts:
        raise ValueError('The following arguments are supplied in both args and kwargs: %s' %
                         ', '.join(pos_kwargs_conflicts))

    # Check if keyword arguments are being fed to positional-only parameters
    if positional_only_kwargs:
        raise ValueError('The following arguments cannot be given as keyword arguments: %s' %
                         ', '.join(positional_only_kwargs))

    # Check that the number of positional arguments minus the number of matched kwargs matches the
    # argspec
    if unsatisfied_args:
        raise ValueError('The following arguments have not been supplied: %s' %
                         ', '.join(unsatisfied_args))

    # Check that all keyword-only arguments have been supplied
    if unsatisfied_kwargs:
        raise ValueError(
            'The following keyword-only arguments have not been supplied in kwargs: %s' %
            ', '.join(unsatisfied_kwargs))

    # Check that the callable can accept the given number of positional arguments
    if not has_varargs and unmatched_args:
        raise ValueError(
            'The list of positional arguments is longer than the target callable can handle '
            '(allowed: %d, given in args: %d)' % (len(args) - len(unmatched_args), len(args)))

    # Check that the callable can accept the given keyword arguments
    if not has_var_kwargs and unmatched_kwargs:
        raise ValueError(
            'The target callable does not accept the following keyword arguments: %s' %
            ', '.join(unmatched_kwargs))

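# Sketch of check_callable_args behaviour (job is a hypothetical callable):
#
#     def job(a, b, c=1):
#         pass
#
#     check_callable_args(job, (1,), {'b': 2})    # OK: a=1, b=2, c defaulted
#     check_callable_args(job, (1,), {})          # ValueError: 'b' not supplied
#     check_callable_args(job, (1, 2, 3, 4), {})  # ValueError: one positional too many
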
@@ -0,0 +1,48 @@
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python
alone or in any derivative version, provided, however, that PSF's
License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights
Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.

@@ -0,0 +1,16 @@
Metadata-Version: 1.1
Name: futures
Version: 3.1.1
Summary: Backport of the concurrent.futures package from Python 3.2
Home-page: https://github.com/agronholm/pythonfutures
Author: Alex Gronholm
Author-email: alex.gronholm+pypi@nextday.fi
License: PSF
Description: UNKNOWN
Platform: UNKNOWN
Classifier: License :: OSI Approved :: Python Software Foundation License
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 2 :: Only

@@ -0,0 +1,3 @@
from pkgutil import extend_path

__path__ = extend_path(__path__, __name__)

@@ -0,0 +1,23 @@
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Execute computations asynchronously using threads or processes."""

__author__ = 'Brian Quinlan (brian@sweetapp.com)'

from concurrent.futures._base import (FIRST_COMPLETED,
                                      FIRST_EXCEPTION,
                                      ALL_COMPLETED,
                                      CancelledError,
                                      TimeoutError,
                                      Future,
                                      Executor,
                                      wait,
                                      as_completed)
from concurrent.futures.thread import ThreadPoolExecutor

try:
    from concurrent.futures.process import ProcessPoolExecutor
except ImportError:
    # some platforms don't have multiprocessing
    pass

@@ -0,0 +1,631 @@
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

import collections
import logging
import threading
import itertools
import time
import types

__author__ = 'Brian Quinlan (brian@sweetapp.com)'

FIRST_COMPLETED = 'FIRST_COMPLETED'
FIRST_EXCEPTION = 'FIRST_EXCEPTION'
ALL_COMPLETED = 'ALL_COMPLETED'
_AS_COMPLETED = '_AS_COMPLETED'

# Possible future states (for internal use by the futures package).
PENDING = 'PENDING'
RUNNING = 'RUNNING'
# The future was cancelled by the user...
CANCELLED = 'CANCELLED'
# ...and _Waiter.add_cancelled() was called by a worker.
CANCELLED_AND_NOTIFIED = 'CANCELLED_AND_NOTIFIED'
FINISHED = 'FINISHED'

_FUTURE_STATES = [
    PENDING,
    RUNNING,
    CANCELLED,
    CANCELLED_AND_NOTIFIED,
    FINISHED
]

_STATE_TO_DESCRIPTION_MAP = {
    PENDING: "pending",
    RUNNING: "running",
    CANCELLED: "cancelled",
    CANCELLED_AND_NOTIFIED: "cancelled",
    FINISHED: "finished"
}

# Logger for internal use by the futures package.
LOGGER = logging.getLogger("concurrent.futures")

class Error(Exception):
    """Base class for all future-related exceptions."""
    pass

class CancelledError(Error):
    """The Future was cancelled."""
    pass

class TimeoutError(Error):
    """The operation exceeded the given deadline."""
    pass

class _Waiter(object):
    """Provides the event that wait() and as_completed() block on."""
    def __init__(self):
        self.event = threading.Event()
        self.finished_futures = []

    def add_result(self, future):
        self.finished_futures.append(future)

    def add_exception(self, future):
        self.finished_futures.append(future)

    def add_cancelled(self, future):
        self.finished_futures.append(future)

class _AsCompletedWaiter(_Waiter):
    """Used by as_completed()."""

    def __init__(self):
        super(_AsCompletedWaiter, self).__init__()
        self.lock = threading.Lock()

    def add_result(self, future):
        with self.lock:
            super(_AsCompletedWaiter, self).add_result(future)
            self.event.set()

    def add_exception(self, future):
        with self.lock:
            super(_AsCompletedWaiter, self).add_exception(future)
            self.event.set()

    def add_cancelled(self, future):
        with self.lock:
            super(_AsCompletedWaiter, self).add_cancelled(future)
            self.event.set()

class _FirstCompletedWaiter(_Waiter):
    """Used by wait(return_when=FIRST_COMPLETED)."""

    def add_result(self, future):
        super(_FirstCompletedWaiter, self).add_result(future)
        self.event.set()

    def add_exception(self, future):
        super(_FirstCompletedWaiter, self).add_exception(future)
        self.event.set()

    def add_cancelled(self, future):
        super(_FirstCompletedWaiter, self).add_cancelled(future)
        self.event.set()

class _AllCompletedWaiter(_Waiter):
    """Used by wait(return_when=FIRST_EXCEPTION and ALL_COMPLETED)."""

    def __init__(self, num_pending_calls, stop_on_exception):
        self.num_pending_calls = num_pending_calls
        self.stop_on_exception = stop_on_exception
        self.lock = threading.Lock()
        super(_AllCompletedWaiter, self).__init__()

    def _decrement_pending_calls(self):
        with self.lock:
            self.num_pending_calls -= 1
            if not self.num_pending_calls:
                self.event.set()

    def add_result(self, future):
        super(_AllCompletedWaiter, self).add_result(future)
        self._decrement_pending_calls()

    def add_exception(self, future):
        super(_AllCompletedWaiter, self).add_exception(future)
        if self.stop_on_exception:
            self.event.set()
        else:
            self._decrement_pending_calls()

    def add_cancelled(self, future):
        super(_AllCompletedWaiter, self).add_cancelled(future)
        self._decrement_pending_calls()

class _AcquireFutures(object):
    """A context manager that does an ordered acquire of Future conditions."""

    def __init__(self, futures):
        self.futures = sorted(futures, key=id)

    def __enter__(self):
        for future in self.futures:
            future._condition.acquire()

    def __exit__(self, *args):
        for future in self.futures:
            future._condition.release()

def _create_and_install_waiters(fs, return_when):
    if return_when == _AS_COMPLETED:
        waiter = _AsCompletedWaiter()
    elif return_when == FIRST_COMPLETED:
        waiter = _FirstCompletedWaiter()
    else:
        pending_count = sum(
                f._state not in [CANCELLED_AND_NOTIFIED, FINISHED] for f in fs)

        if return_when == FIRST_EXCEPTION:
            waiter = _AllCompletedWaiter(pending_count, stop_on_exception=True)
        elif return_when == ALL_COMPLETED:
            waiter = _AllCompletedWaiter(pending_count, stop_on_exception=False)
        else:
            raise ValueError("Invalid return condition: %r" % return_when)

    for f in fs:
        f._waiters.append(waiter)

    return waiter

def as_completed(fs, timeout=None):
    """An iterator over the given futures that yields each as it completes.

    Args:
        fs: The sequence of Futures (possibly created by different Executors) to
            iterate over.
        timeout: The maximum number of seconds to wait. If None, then there
            is no limit on the wait time.

    Returns:
        An iterator that yields the given Futures as they complete (finished or
        cancelled). If any given Futures are duplicated, they will be returned
        once.

    Raises:
        TimeoutError: If the entire result iterator could not be generated
            before the given timeout.
    """
    if timeout is not None:
        end_time = timeout + time.time()

    fs = set(fs)
    with _AcquireFutures(fs):
        finished = set(
                f for f in fs
                if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
        pending = fs - finished
        waiter = _create_and_install_waiters(fs, _AS_COMPLETED)

    try:
        for future in finished:
            yield future

        while pending:
            if timeout is None:
                wait_timeout = None
            else:
                wait_timeout = end_time - time.time()
                if wait_timeout < 0:
                    raise TimeoutError(
                            '%d (of %d) futures unfinished' % (
                            len(pending), len(fs)))

            waiter.event.wait(wait_timeout)

            with waiter.lock:
                finished = waiter.finished_futures
                waiter.finished_futures = []
                waiter.event.clear()

            for future in finished:
                yield future
                pending.remove(future)

    finally:
        for f in fs:
            with f._condition:
                f._waiters.remove(waiter)

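# Typical as_completed() usage (sketch; ThreadPoolExecutor is defined in
# concurrent.futures.thread further down, and fetch/urls are illustrative):
#
#     executor = ThreadPoolExecutor(max_workers=4)
#     futures = [executor.submit(fetch, url) for url in urls]
#     for future in as_completed(futures, timeout=60):
#         print(future.result())
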
DoneAndNotDoneFutures = collections.namedtuple(
        'DoneAndNotDoneFutures', 'done not_done')
def wait(fs, timeout=None, return_when=ALL_COMPLETED):
    """Wait for the futures in the given sequence to complete.

    Args:
        fs: The sequence of Futures (possibly created by different Executors) to
            wait upon.
        timeout: The maximum number of seconds to wait. If None, then there
            is no limit on the wait time.
        return_when: Indicates when this function should return. The options
            are:

            FIRST_COMPLETED - Return when any future finishes or is
                              cancelled.
            FIRST_EXCEPTION - Return when any future finishes by raising an
                              exception. If no future raises an exception
                              then it is equivalent to ALL_COMPLETED.
            ALL_COMPLETED - Return when all futures finish or are cancelled.

    Returns:
        A named 2-tuple of sets. The first set, named 'done', contains the
        futures that completed (is finished or cancelled) before the wait
        completed. The second set, named 'not_done', contains uncompleted
        futures.
    """
    with _AcquireFutures(fs):
        done = set(f for f in fs
                   if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
        not_done = set(fs) - done

        if (return_when == FIRST_COMPLETED) and done:
            return DoneAndNotDoneFutures(done, not_done)
        elif (return_when == FIRST_EXCEPTION) and done:
            if any(f for f in done
                   if not f.cancelled() and f.exception() is not None):
                return DoneAndNotDoneFutures(done, not_done)

        if len(done) == len(fs):
            return DoneAndNotDoneFutures(done, not_done)

        waiter = _create_and_install_waiters(fs, return_when)

    waiter.event.wait(timeout)
    for f in fs:
        with f._condition:
            f._waiters.remove(waiter)

    done.update(waiter.finished_futures)
    return DoneAndNotDoneFutures(done, set(fs) - done)

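# wait() returns the done/not_done split once the condition is met (sketch;
# futures is any collection of Future objects):
#
#     done, not_done = wait(futures, timeout=10, return_when=FIRST_COMPLETED)
#     for f in done:
#         print(f.result())
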
class Future(object):
    """Represents the result of an asynchronous computation."""

    def __init__(self):
        """Initializes the future. Should not be called by clients."""
        self._condition = threading.Condition()
        self._state = PENDING
        self._result = None
        self._exception = None
        self._traceback = None
        self._waiters = []
        self._done_callbacks = []

    def _invoke_callbacks(self):
        for callback in self._done_callbacks:
            try:
                callback(self)
            except Exception:
                LOGGER.exception('exception calling callback for %r', self)
            except BaseException:
                # Explicitly let all other new-style exceptions through so
                # that we can catch all old-style exceptions with a simple
                # "except:" clause below.
                #
                # All old-style exception objects are instances of
                # types.InstanceType, but "except types.InstanceType:" does
                # not catch old-style exceptions for some reason. Thus, the
                # only way to catch all old-style exceptions without catching
                # any new-style exceptions is to filter out the new-style
                # exceptions, which all derive from BaseException.
                raise
            except:
                # Because of the BaseException clause above, this handler only
                # executes for old-style exception objects.
                LOGGER.exception('exception calling callback for %r', self)

    def __repr__(self):
        with self._condition:
            if self._state == FINISHED:
                if self._exception:
                    return '<Future at %s state=%s raised %s>' % (
                        hex(id(self)),
                        _STATE_TO_DESCRIPTION_MAP[self._state],
                        self._exception.__class__.__name__)
                else:
                    return '<Future at %s state=%s returned %s>' % (
                        hex(id(self)),
                        _STATE_TO_DESCRIPTION_MAP[self._state],
                        self._result.__class__.__name__)
            return '<Future at %s state=%s>' % (
                hex(id(self)),
                _STATE_TO_DESCRIPTION_MAP[self._state])

    def cancel(self):
        """Cancel the future if possible.

        Returns True if the future was cancelled, False otherwise. A future
        cannot be cancelled if it is running or has already completed.
        """
        with self._condition:
            if self._state in [RUNNING, FINISHED]:
                return False

            if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
                return True

            self._state = CANCELLED
            self._condition.notify_all()

        self._invoke_callbacks()
        return True

    def cancelled(self):
        """Return True if the future was cancelled."""
        with self._condition:
            return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]

    def running(self):
        """Return True if the future is currently executing."""
        with self._condition:
            return self._state == RUNNING

    def done(self):
        """Return True if the future was cancelled or finished executing."""
        with self._condition:
            return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]

    def __get_result(self):
        if self._exception:
            if isinstance(self._exception, types.InstanceType):
                # The exception is an instance of an old-style class, which
                # means type(self._exception) returns types.ClassType instead
                # of the exception's actual class type.
                exception_type = self._exception.__class__
            else:
                exception_type = type(self._exception)
            raise exception_type, self._exception, self._traceback
        else:
            return self._result

    def add_done_callback(self, fn):
        """Attaches a callable that will be called when the future finishes.

        Args:
            fn: A callable that will be called with this future as its only
                argument when the future completes or is cancelled. The callable
                will always be called by a thread in the same process in which
                it was added. If the future has already completed or been
                cancelled then the callable will be called immediately. These
                callables are called in the order that they were added.
        """
        with self._condition:
            if self._state not in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]:
                self._done_callbacks.append(fn)
                return
        fn(self)

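    # Callback sketch: the callable fires as soon as the future settles, or
    # immediately if it already has (log_outcome is illustrative):
    #
    #     def log_outcome(f):
    #         if f.cancelled():
    #             print('cancelled')
    #         elif f.exception() is not None:
    #             print('failed: %r' % f.exception())
    #         else:
    #             print('result: %r' % f.result())
    #
    #     future.add_done_callback(log_outcome)
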
    def result(self, timeout=None):
        """Return the result of the call that the future represents.

        Args:
            timeout: The number of seconds to wait for the result if the future
                isn't done. If None, then there is no limit on the wait time.

        Returns:
            The result of the call that the future represents.

        Raises:
            CancelledError: If the future was cancelled.
            TimeoutError: If the future didn't finish executing before the given
                timeout.
            Exception: If the call raised then that exception will be raised.
        """
        with self._condition:
            if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
                raise CancelledError()
            elif self._state == FINISHED:
                return self.__get_result()

            self._condition.wait(timeout)

            if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
                raise CancelledError()
            elif self._state == FINISHED:
                return self.__get_result()
            else:
                raise TimeoutError()

    def exception_info(self, timeout=None):
        """Return a tuple of (exception, traceback) raised by the call that the
        future represents.

        Args:
            timeout: The number of seconds to wait for the exception if the
                future isn't done. If None, then there is no limit on the wait
                time.

        Returns:
            A tuple of (exception, traceback) raised by the call that the
            future represents, or (None, None) if the call completed without
            raising.

        Raises:
            CancelledError: If the future was cancelled.
            TimeoutError: If the future didn't finish executing before the given
                timeout.
        """
        with self._condition:
            if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
                raise CancelledError()
            elif self._state == FINISHED:
                return self._exception, self._traceback

            self._condition.wait(timeout)

            if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
                raise CancelledError()
            elif self._state == FINISHED:
                return self._exception, self._traceback
            else:
                raise TimeoutError()

    def exception(self, timeout=None):
        """Return the exception raised by the call that the future represents.

        Args:
            timeout: The number of seconds to wait for the exception if the
                future isn't done. If None, then there is no limit on the wait
                time.

        Returns:
            The exception raised by the call that the future represents or None
            if the call completed without raising.

        Raises:
            CancelledError: If the future was cancelled.
            TimeoutError: If the future didn't finish executing before the given
                timeout.
        """
        return self.exception_info(timeout)[0]

    # The following methods should only be used by Executors and in tests.
    def set_running_or_notify_cancel(self):
        """Mark the future as running or process any cancel notifications.

        Should only be used by Executor implementations and unit tests.

        If the future has been cancelled (cancel() was called and returned
        True) then any threads waiting on the future completing (through calls
        to as_completed() or wait()) are notified and False is returned.

        If the future was not cancelled then it is put in the running state
        (future calls to running() will return True) and True is returned.

        This method should be called by Executor implementations before
        executing the work associated with this future. If this method returns
        False then the work should not be executed.

        Returns:
            False if the Future was cancelled, True otherwise.

        Raises:
            RuntimeError: if this method was already called or if set_result()
                or set_exception() was called.
        """
        with self._condition:
            if self._state == CANCELLED:
                self._state = CANCELLED_AND_NOTIFIED
                for waiter in self._waiters:
                    waiter.add_cancelled(self)
                # self._condition.notify_all() is not necessary because
                # self.cancel() triggers a notification.
                return False
            elif self._state == PENDING:
                self._state = RUNNING
                return True
            else:
                LOGGER.critical('Future %s in unexpected state: %s',
                                id(self),
                                self._state)
                raise RuntimeError('Future in unexpected state')

    def set_result(self, result):
        """Sets the return value of work associated with the future.

        Should only be used by Executor implementations and unit tests.
        """
        with self._condition:
            self._result = result
            self._state = FINISHED
            for waiter in self._waiters:
                waiter.add_result(self)
            self._condition.notify_all()
        self._invoke_callbacks()

    def set_exception_info(self, exception, traceback):
        """Sets the result of the future as being the given exception
        and traceback.

        Should only be used by Executor implementations and unit tests.
        """
        with self._condition:
            self._exception = exception
            self._traceback = traceback
            self._state = FINISHED
            for waiter in self._waiters:
                waiter.add_exception(self)
            self._condition.notify_all()
        self._invoke_callbacks()

    def set_exception(self, exception):
        """Sets the result of the future as being the given exception.

        Should only be used by Executor implementations and unit tests.
        """
        self.set_exception_info(exception, None)

class Executor(object):
    """This is an abstract base class for concrete asynchronous executors."""

    def submit(self, fn, *args, **kwargs):
        """Submits a callable to be executed with the given arguments.

        Schedules the callable to be executed as fn(*args, **kwargs) and returns
        a Future instance representing the execution of the callable.

        Returns:
            A Future representing the given call.
        """
        raise NotImplementedError()

    def map(self, fn, *iterables, **kwargs):
        """Returns an iterator equivalent to map(fn, iter).

        Args:
            fn: A callable that will take as many arguments as there are
                passed iterables.
            timeout: The maximum number of seconds to wait. If None, then there
                is no limit on the wait time.

        Returns:
            An iterator equivalent to: map(func, *iterables) but the calls may
            be evaluated out-of-order.

        Raises:
            TimeoutError: If the entire result iterator could not be generated
                before the given timeout.
            Exception: If fn(*args) raises for any values.
        """
        timeout = kwargs.get('timeout')
        if timeout is not None:
            end_time = timeout + time.time()

        fs = [self.submit(fn, *args) for args in itertools.izip(*iterables)]

        # Yield must be hidden in closure so that the futures are submitted
        # before the first iterator value is required.
        def result_iterator():
            try:
                for future in fs:
                    if timeout is None:
                        yield future.result()
                    else:
                        yield future.result(end_time - time.time())
            finally:
                for future in fs:
                    future.cancel()
        return result_iterator()

    def shutdown(self, wait=True):
        """Clean-up the resources associated with the Executor.

        It is safe to call this method several times. Otherwise, no other
        methods can be called after this one.

        Args:
            wait: If True then shutdown will not return until all running
                futures have finished executing and the resources used by the
                executor have been reclaimed.
        """
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.shutdown(wait=True)
        return False

@@ -0,0 +1,363 @@
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Implements ProcessPoolExecutor.

The following diagram and text describe the data-flow through the system:

|======================= In-process =====================|== Out-of-process ==|

+----------+     +----------+       +--------+     +-----------+    +---------+
|          |  => | Work Ids |    => |        |  => | Call Q    | => |         |
|          |     +----------+       |        |     +-----------+    |         |
|          |     | ...      |       |        |     | ...       |    |         |
|          |     | 6        |       |        |     | 5, call() |    |         |
|          |     | 7        |       |        |     | ...       |    |         |
| Process  |     | ...      |       | Local  |     +-----------+    | Process |
|  Pool    |     +----------+       | Worker |                      |  #1..n  |
| Executor |                        | Thread |                      |         |
|          |     +----------- +     |        |     +-----------+    |         |
|          | <=> | Work Items | <=> |        | <=  | Result Q  | <= |         |
|          |     +------------+     |        |     +-----------+    |         |
|          |     | 6: call() |      |        |     | ...       |    |         |
|          |     |    future |      |        |     | 4, result |    |         |
|          |     | ...       |      |        |     | 3, except |    |         |
+----------+     +------------+     +--------+     +-----------+    +---------+

Executor.submit() called:
- creates a uniquely numbered _WorkItem and adds it to the "Work Items" dict
- adds the id of the _WorkItem to the "Work Ids" queue

Local worker thread:
- reads work ids from the "Work Ids" queue and looks up the corresponding
  WorkItem from the "Work Items" dict: if the work item has been cancelled then
  it is simply removed from the dict, otherwise it is repackaged as a
  _CallItem and put in the "Call Q". New _CallItems are put in the "Call Q"
  until "Call Q" is full. NOTE: the size of the "Call Q" is kept small because
  calls placed in the "Call Q" can no longer be cancelled with Future.cancel().
- reads _ResultItems from "Result Q", updates the future stored in the
  "Work Items" dict and deletes the dict entry

Process #1..n:
- reads _CallItems from "Call Q", executes the calls, and puts the resulting
  _ResultItems in "Result Q"
"""

import atexit
from concurrent.futures import _base
import Queue as queue
import multiprocessing
import threading
import weakref
import sys

__author__ = 'Brian Quinlan (brian@sweetapp.com)'

# Workers are created as daemon threads and processes. This is done to allow the
# interpreter to exit when there are still idle processes in a
# ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
# allowing workers to die with the interpreter has two undesirable properties:
#   - The workers would still be running during interpreter shutdown,
#     meaning that they would fail in unpredictable ways.
#   - The workers could be killed while evaluating a work item, which could
#     be bad if the callable being evaluated has external side-effects e.g.
#     writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads/processes finish.

_threads_queues = weakref.WeakKeyDictionary()
_shutdown = False

def _python_exit():
    global _shutdown
    _shutdown = True
    items = list(_threads_queues.items()) if _threads_queues else ()
    for t, q in items:
        q.put(None)
    for t, q in items:
        t.join(sys.maxint)

# Controls how many more calls than processes will be queued in the call queue.
# A smaller number will mean that processes spend more time idle waiting for
# work while a larger number will make Future.cancel() succeed less frequently
# (Futures in the call queue cannot be cancelled).
EXTRA_QUEUED_CALLS = 1

class _WorkItem(object):
    def __init__(self, future, fn, args, kwargs):
        self.future = future
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

class _ResultItem(object):
    def __init__(self, work_id, exception=None, result=None):
        self.work_id = work_id
        self.exception = exception
        self.result = result

class _CallItem(object):
    def __init__(self, work_id, fn, args, kwargs):
        self.work_id = work_id
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

def _process_worker(call_queue, result_queue):
    """Evaluates calls from call_queue and places the results in result_queue.

    This worker is run in a separate process.

    Args:
        call_queue: A multiprocessing.Queue of _CallItems that will be read and
            evaluated by the worker.
        result_queue: A multiprocessing.Queue of _ResultItems that will be
            written to by the worker.
        shutdown: A multiprocessing.Event that will be set as a signal to the
            worker that it should exit when call_queue is empty.
    """
    while True:
        call_item = call_queue.get(block=True)
        if call_item is None:
            # Wake up queue management thread
            result_queue.put(None)
            return
        try:
            r = call_item.fn(*call_item.args, **call_item.kwargs)
        except:
            e = sys.exc_info()[1]
            result_queue.put(_ResultItem(call_item.work_id,
                                         exception=e))
        else:
            result_queue.put(_ResultItem(call_item.work_id,
                                         result=r))

def _add_call_item_to_queue(pending_work_items,
                            work_ids,
                            call_queue):
    """Fills call_queue with _WorkItems from pending_work_items.

    This function never blocks.

    Args:
        pending_work_items: A dict mapping work ids to _WorkItems e.g.
            {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
        work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
            are consumed and the corresponding _WorkItems from
            pending_work_items are transformed into _CallItems and put in
            call_queue.
        call_queue: A multiprocessing.Queue that will be filled with _CallItems
            derived from _WorkItems.
    """
    while True:
        if call_queue.full():
            return
        try:
            work_id = work_ids.get(block=False)
        except queue.Empty:
            return
        else:
            work_item = pending_work_items[work_id]

            if work_item.future.set_running_or_notify_cancel():
                call_queue.put(_CallItem(work_id,
                                         work_item.fn,
                                         work_item.args,
                                         work_item.kwargs),
                               block=True)
            else:
                del pending_work_items[work_id]
                continue

def _queue_management_worker(executor_reference,
                             processes,
                             pending_work_items,
                             work_ids_queue,
                             call_queue,
                             result_queue):
    """Manages the communication between this process and the worker processes.

    This function is run in a local thread.

    Args:
        executor_reference: A weakref.ref to the ProcessPoolExecutor that owns
            this thread. Used to determine if the ProcessPoolExecutor has been
            garbage collected and that this function can exit.
        process: A list of the multiprocessing.Process instances used as
            workers.
        pending_work_items: A dict mapping work ids to _WorkItems e.g.
            {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
        work_ids_queue: A queue.Queue of work ids e.g. Queue([5, 6, ...]).
        call_queue: A multiprocessing.Queue that will be filled with _CallItems
            derived from _WorkItems for processing by the process workers.
        result_queue: A multiprocessing.Queue of _ResultItems generated by the
            process workers.
    """
    nb_shutdown_processes = [0]
    def shutdown_one_process():
        """Tell a worker to terminate, which will in turn wake us again"""
        call_queue.put(None)
        nb_shutdown_processes[0] += 1
    while True:
        _add_call_item_to_queue(pending_work_items,
                                work_ids_queue,
                                call_queue)

        result_item = result_queue.get(block=True)
        if result_item is not None:
            work_item = pending_work_items[result_item.work_id]
            del pending_work_items[result_item.work_id]

            if result_item.exception:
                work_item.future.set_exception(result_item.exception)
            else:
                work_item.future.set_result(result_item.result)
            # Delete references to object. See issue16284
            del work_item
        # Check whether we should start shutting down.
        executor = executor_reference()
        # No more work items can be added if:
        #   - The interpreter is shutting down OR
        #   - The executor that owns this worker has been collected OR
        #   - The executor that owns this worker has been shutdown.
        if _shutdown or executor is None or executor._shutdown_thread:
            # Since no new work items can be added, it is safe to shutdown
            # this thread if there are no pending work items.
            if not pending_work_items:
                while nb_shutdown_processes[0] < len(processes):
                    shutdown_one_process()
                # If .join() is not called on the created processes then
                # some multiprocessing.Queue methods may deadlock on Mac OS
                # X.
                for p in processes:
                    p.join()
                call_queue.close()
                return
        del executor

_system_limits_checked = False
_system_limited = None
def _check_system_limits():
    global _system_limits_checked, _system_limited
    if _system_limits_checked:
        if _system_limited:
            raise NotImplementedError(_system_limited)
    _system_limits_checked = True
    try:
        import os
        nsems_max = os.sysconf("SC_SEM_NSEMS_MAX")
    except (AttributeError, ValueError):
        # sysconf not available or setting not available
        return
    if nsems_max == -1:
        # indeterminate limit, assume that limit is determined
        # by available memory only
        return
    if nsems_max >= 256:
        # minimum number of semaphores available
        # according to POSIX
        return
    _system_limited = "system provides too few semaphores (%d available, 256 necessary)" % nsems_max
    raise NotImplementedError(_system_limited)

class ProcessPoolExecutor(_base.Executor):
    def __init__(self, max_workers=None):
        """Initializes a new ProcessPoolExecutor instance.

        Args:
            max_workers: The maximum number of processes that can be used to
                execute the given calls. If None or not given then as many
                worker processes will be created as the machine has processors.
        """
        _check_system_limits()

        if max_workers is None:
            self._max_workers = multiprocessing.cpu_count()
        else:
            if max_workers <= 0:
                raise ValueError("max_workers must be greater than 0")

            self._max_workers = max_workers

        # Make the call queue slightly larger than the number of processes to
        # prevent the worker processes from idling. But don't make it too big
        # because futures in the call queue cannot be cancelled.
        self._call_queue = multiprocessing.Queue(self._max_workers +
                                                 EXTRA_QUEUED_CALLS)
        self._result_queue = multiprocessing.Queue()
        self._work_ids = queue.Queue()
        self._queue_management_thread = None
        self._processes = set()

        # Shutdown is a two-step process.
        self._shutdown_thread = False
        self._shutdown_lock = threading.Lock()
        self._queue_count = 0
        self._pending_work_items = {}

    def _start_queue_management_thread(self):
        # When the executor gets lost, the weakref callback will wake up
        # the queue management thread.
        def weakref_cb(_, q=self._result_queue):
            q.put(None)
        if self._queue_management_thread is None:
            self._queue_management_thread = threading.Thread(
                    target=_queue_management_worker,
                    args=(weakref.ref(self, weakref_cb),
                          self._processes,
                          self._pending_work_items,
                          self._work_ids,
                          self._call_queue,
                          self._result_queue))
            self._queue_management_thread.daemon = True
            self._queue_management_thread.start()
            _threads_queues[self._queue_management_thread] = self._result_queue

    def _adjust_process_count(self):
        for _ in range(len(self._processes), self._max_workers):
            p = multiprocessing.Process(
                    target=_process_worker,
                    args=(self._call_queue,
                          self._result_queue))
            p.start()
            self._processes.add(p)

    def submit(self, fn, *args, **kwargs):
        with self._shutdown_lock:
            if self._shutdown_thread:
                raise RuntimeError('cannot schedule new futures after shutdown')

            f = _base.Future()
            w = _WorkItem(f, fn, args, kwargs)

            self._pending_work_items[self._queue_count] = w
            self._work_ids.put(self._queue_count)
            self._queue_count += 1
            # Wake up queue management thread
            self._result_queue.put(None)

            self._start_queue_management_thread()
            self._adjust_process_count()
            return f
    submit.__doc__ = _base.Executor.submit.__doc__

    def shutdown(self, wait=True):
        with self._shutdown_lock:
            self._shutdown_thread = True
        if self._queue_management_thread:
            # Wake up queue management thread
            self._result_queue.put(None)
            if wait:
                self._queue_management_thread.join(sys.maxint)
        # To reduce the risk of opening too many files, remove references to
        # objects that use file descriptors.
        self._queue_management_thread = None
        self._call_queue = None
        self._result_queue = None
        self._processes = None
    shutdown.__doc__ = _base.Executor.shutdown.__doc__

atexit.register(_python_exit)

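# ProcessPoolExecutor usage sketch (Python 2 style, matching this backport);
# the __main__ guard matters because worker processes re-import the module:
#
#     from concurrent.futures import ProcessPoolExecutor
#
#     def square(x):
#         return x * x
#
#     if __name__ == '__main__':
#         pool = ProcessPoolExecutor(max_workers=2)
#         print(list(pool.map(square, range(10))))
#         pool.shutdown(wait=True)
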
@@ -0,0 +1,149 @@
# Copyright 2009 Brian Quinlan. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Implements ThreadPoolExecutor."""

import atexit
from concurrent.futures import _base
import Queue as queue
import threading
import weakref
import sys

try:
    from multiprocessing import cpu_count
except ImportError:
    # some platforms don't have multiprocessing
    def cpu_count():
        return None

__author__ = 'Brian Quinlan (brian@sweetapp.com)'

# Workers are created as daemon threads. This is done to allow the interpreter
# to exit when there are still idle threads in a ThreadPoolExecutor's thread
# pool (i.e. shutdown() was not called). However, allowing workers to die with
# the interpreter has two undesirable properties:
#   - The workers would still be running during interpreter shutdown,
#     meaning that they would fail in unpredictable ways.
#   - The workers could be killed while evaluating a work item, which could
#     be bad if the callable being evaluated has external side-effects e.g.
#     writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads finish.

_threads_queues = weakref.WeakKeyDictionary()
_shutdown = False

def _python_exit():
    global _shutdown
    _shutdown = True
    items = list(_threads_queues.items()) if _threads_queues else ()
    for t, q in items:
        q.put(None)
    for t, q in items:
        t.join(sys.maxint)

atexit.register(_python_exit)

class _WorkItem(object):
    def __init__(self, future, fn, args, kwargs):
        self.future = future
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def run(self):
        if not self.future.set_running_or_notify_cancel():
            return

        try:
            result = self.fn(*self.args, **self.kwargs)
        except:
            e, tb = sys.exc_info()[1:]
            self.future.set_exception_info(e, tb)
        else:
            self.future.set_result(result)

def _worker(executor_reference, work_queue):
    try:
        while True:
            work_item = work_queue.get(block=True)
            if work_item is not None:
                work_item.run()
                # Delete references to object. See issue16284
                del work_item
                continue
            executor = executor_reference()
            # Exit if:
            #   - The interpreter is shutting down OR
            #   - The executor that owns the worker has been collected OR
            #   - The executor that owns the worker has been shutdown.
            if _shutdown or executor is None or executor._shutdown:
                # Notice other workers
                work_queue.put(None)
                return
            del executor
    except:
        _base.LOGGER.critical('Exception in worker', exc_info=True)

class ThreadPoolExecutor(_base.Executor):
|
||||
def __init__(self, max_workers=None):
|
||||
"""Initializes a new ThreadPoolExecutor instance.
|
||||
|
||||
Args:
|
||||
max_workers: The maximum number of threads that can be used to
|
||||
execute the given calls.
|
||||
"""
|
||||
if max_workers is None:
|
||||
# Use this number because ThreadPoolExecutor is often
|
||||
# used to overlap I/O instead of CPU work.
|
||||
max_workers = (cpu_count() or 1) * 5
|
||||
if max_workers <= 0:
|
||||
raise ValueError("max_workers must be greater than 0")
|
||||
|
||||
self._max_workers = max_workers
|
||||
self._work_queue = queue.Queue()
|
||||
self._threads = set()
|
||||
self._shutdown = False
|
||||
self._shutdown_lock = threading.Lock()
|
||||
|
||||
def submit(self, fn, *args, **kwargs):
|
||||
with self._shutdown_lock:
|
||||
if self._shutdown:
|
||||
raise RuntimeError('cannot schedule new futures after shutdown')
|
||||
|
||||
f = _base.Future()
|
||||
w = _WorkItem(f, fn, args, kwargs)
|
||||
|
||||
self._work_queue.put(w)
|
||||
self._adjust_thread_count()
|
||||
return f
|
||||
submit.__doc__ = _base.Executor.submit.__doc__
|
||||
|
||||
def _adjust_thread_count(self):
|
||||
# When the executor gets lost, the weakref callback will wake up
|
||||
# the worker threads.
|
||||
def weakref_cb(_, q=self._work_queue):
|
||||
q.put(None)
|
||||
# TODO(bquinlan): Should avoid creating new threads if there are more
|
||||
# idle threads than items in the work queue.
|
||||
if len(self._threads) < self._max_workers:
|
||||
t = threading.Thread(target=_worker,
|
||||
args=(weakref.ref(self, weakref_cb),
|
||||
self._work_queue))
|
||||
t.daemon = True
|
||||
t.start()
|
||||
self._threads.add(t)
|
||||
_threads_queues[t] = self._work_queue
|
||||
|
||||
def shutdown(self, wait=True):
|
||||
with self._shutdown_lock:
|
||||
self._shutdown = True
|
||||
self._work_queue.put(None)
|
||||
if wait:
|
||||
for t in self._threads:
|
||||
t.join(sys.maxint)
|
||||
shutdown.__doc__ = _base.Executor.shutdown.__doc__
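
A minimal usage sketch of the backported executor (illustrative only; the
`fetch` function and worker count below are made-up example values, not
Mylar code):

    from concurrent.futures import ThreadPoolExecutor

    def fetch(n):
        # Stand-in for I/O-bound work; the (cpu_count() or 1) * 5 default
        # above is tuned for overlapping I/O rather than CPU work.
        return n * 2

    executor = ThreadPoolExecutor(max_workers=4)
    futures = [executor.submit(fetch, n) for n in range(10)]
    print([f.result() for f in futures])  # [0, 2, 4, ..., 18]
    executor.shutdown(wait=True)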

@ -0,0 +1,829 @@

# Copyright 2001-2013 Python Software Foundation; All Rights Reserved
"""Function signature objects for callables

Back port of Python 3.3's function signature tools from the inspect module,
modified to be compatible with Python 2.6, 2.7 and 3.3+.
"""
from __future__ import absolute_import, division, print_function
import itertools
import functools
import re
import types

try:
    from collections import OrderedDict
except ImportError:
    from ordereddict import OrderedDict

from funcsigs.version import __version__

__all__ = ['BoundArguments', 'Parameter', 'Signature', 'signature']


_WrapperDescriptor = type(type.__call__)
_MethodWrapper = type(all.__call__)

_NonUserDefinedCallables = (_WrapperDescriptor,
                            _MethodWrapper,
                            types.BuiltinFunctionType)


def formatannotation(annotation, base_module=None):
    if isinstance(annotation, type):
        if annotation.__module__ in ('builtins', '__builtin__', base_module):
            return annotation.__name__
        return annotation.__module__+'.'+annotation.__name__
    return repr(annotation)


def _get_user_defined_method(cls, method_name, *nested):
    try:
        if cls is type:
            return
        meth = getattr(cls, method_name)
        for name in nested:
            meth = getattr(meth, name, meth)
    except AttributeError:
        return
    else:
        if not isinstance(meth, _NonUserDefinedCallables):
            # Once '__signature__' is added to 'C'-level
            # callables, this check won't be necessary
            return meth


def signature(obj):
    '''Get a signature object for the passed callable.'''

    if not callable(obj):
        raise TypeError('{0!r} is not a callable object'.format(obj))

    if isinstance(obj, types.MethodType):
        sig = signature(obj.__func__)
        if obj.__self__ is None:
            # Unbound method - preserve as-is.
            return sig
        else:
            # Bound method. Eat self - if we can.
            params = tuple(sig.parameters.values())

            if not params or params[0].kind in (_VAR_KEYWORD, _KEYWORD_ONLY):
                raise ValueError('invalid method signature')

            kind = params[0].kind
            if kind in (_POSITIONAL_OR_KEYWORD, _POSITIONAL_ONLY):
                # Drop first parameter:
                # '(p1, p2[, ...])' -> '(p2[, ...])'
                params = params[1:]
            else:
                if kind is not _VAR_POSITIONAL:
                    # Unless we add a new parameter type we never
                    # get here
                    raise ValueError('invalid argument type')
                # It's a var-positional parameter.
                # Do nothing. '(*args[, ...])' -> '(*args[, ...])'

            return sig.replace(parameters=params)

    try:
        sig = obj.__signature__
    except AttributeError:
        pass
    else:
        if sig is not None:
            return sig

    try:
        # Was this function wrapped by a decorator?
        wrapped = obj.__wrapped__
    except AttributeError:
        pass
    else:
        return signature(wrapped)

    if isinstance(obj, types.FunctionType):
        return Signature.from_function(obj)

    if isinstance(obj, functools.partial):
        sig = signature(obj.func)

        new_params = OrderedDict(sig.parameters.items())

        partial_args = obj.args or ()
        partial_keywords = obj.keywords or {}
        try:
            ba = sig.bind_partial(*partial_args, **partial_keywords)
        except TypeError as ex:
            msg = 'partial object {0!r} has incorrect arguments'.format(obj)
            raise ValueError(msg)

        for arg_name, arg_value in ba.arguments.items():
            param = new_params[arg_name]
            if arg_name in partial_keywords:
                # We set a new default value, because the following code
                # is correct:
                #
                #   >>> def foo(a): print(a)
                #   >>> print(partial(partial(foo, a=10), a=20)())
                #   20
                #   >>> print(partial(partial(foo, a=10), a=20)(a=30))
                #   30
                #
                # So, with 'partial' objects, passing a keyword argument is
                # like setting a new default value for the corresponding
                # parameter
                #
                # We also mark this parameter with '_partial_kwarg'
                # flag. Later, in '_bind', the 'default' value of this
                # parameter will be added to 'kwargs', to simulate
                # the 'functools.partial' real call.
                new_params[arg_name] = param.replace(default=arg_value,
                                                     _partial_kwarg=True)

            elif (param.kind not in (_VAR_KEYWORD, _VAR_POSITIONAL) and
                            not param._partial_kwarg):
                new_params.pop(arg_name)

        return sig.replace(parameters=new_params.values())

    sig = None
    if isinstance(obj, type):
        # obj is a class or a metaclass

        # First, let's see if it has an overloaded __call__ defined
        # in its metaclass
        call = _get_user_defined_method(type(obj), '__call__')
        if call is not None:
            sig = signature(call)
        else:
            # Now we check if the 'obj' class has a '__new__' method
            new = _get_user_defined_method(obj, '__new__')
            if new is not None:
                sig = signature(new)
            else:
                # Finally, we should have at least __init__ implemented
                init = _get_user_defined_method(obj, '__init__')
                if init is not None:
                    sig = signature(init)
    elif not isinstance(obj, _NonUserDefinedCallables):
        # An object with __call__
        # We also check that the 'obj' is not an instance of
        # _WrapperDescriptor or _MethodWrapper to avoid
        # infinite recursion (and even potential segfault)
        call = _get_user_defined_method(type(obj), '__call__', 'im_func')
        if call is not None:
            sig = signature(call)

    if sig is not None:
        # For classes and objects we skip the first parameter of their
        # __call__, __new__, or __init__ methods
        return sig.replace(parameters=tuple(sig.parameters.values())[1:])

    if isinstance(obj, types.BuiltinFunctionType):
        # Raise a nicer error message for builtins
        msg = 'no signature found for builtin function {0!r}'.format(obj)
        raise ValueError(msg)

    raise ValueError('callable {0!r} is not supported by signature'.format(obj))


class _void(object):
    '''A private marker - used in Parameter & Signature'''


class _empty(object):
    pass


class _ParameterKind(int):
    def __new__(self, *args, **kwargs):
        obj = int.__new__(self, *args)
        obj._name = kwargs['name']
        return obj

    def __str__(self):
        return self._name

    def __repr__(self):
        return '<_ParameterKind: {0!r}>'.format(self._name)


_POSITIONAL_ONLY = _ParameterKind(0, name='POSITIONAL_ONLY')
_POSITIONAL_OR_KEYWORD = _ParameterKind(1, name='POSITIONAL_OR_KEYWORD')
_VAR_POSITIONAL = _ParameterKind(2, name='VAR_POSITIONAL')
_KEYWORD_ONLY = _ParameterKind(3, name='KEYWORD_ONLY')
_VAR_KEYWORD = _ParameterKind(4, name='VAR_KEYWORD')


class Parameter(object):
    '''Represents a parameter in a function signature.

    Has the following public attributes:

    * name : str
        The name of the parameter as a string.
    * default : object
        The default value for the parameter if specified. If the
        parameter has no default value, this attribute is not set.
    * annotation
        The annotation for the parameter if specified. If the
        parameter has no annotation, this attribute is not set.
    * kind : str
        Describes how argument values are bound to the parameter.
        Possible values: `Parameter.POSITIONAL_ONLY`,
        `Parameter.POSITIONAL_OR_KEYWORD`, `Parameter.VAR_POSITIONAL`,
        `Parameter.KEYWORD_ONLY`, `Parameter.VAR_KEYWORD`.
    '''

    __slots__ = ('_name', '_kind', '_default', '_annotation', '_partial_kwarg')

    POSITIONAL_ONLY = _POSITIONAL_ONLY
    POSITIONAL_OR_KEYWORD = _POSITIONAL_OR_KEYWORD
    VAR_POSITIONAL = _VAR_POSITIONAL
    KEYWORD_ONLY = _KEYWORD_ONLY
    VAR_KEYWORD = _VAR_KEYWORD

    empty = _empty

    def __init__(self, name, kind, default=_empty, annotation=_empty,
                 _partial_kwarg=False):

        if kind not in (_POSITIONAL_ONLY, _POSITIONAL_OR_KEYWORD,
                        _VAR_POSITIONAL, _KEYWORD_ONLY, _VAR_KEYWORD):
            raise ValueError("invalid value for 'Parameter.kind' attribute")
        self._kind = kind

        if default is not _empty:
            if kind in (_VAR_POSITIONAL, _VAR_KEYWORD):
                msg = '{0} parameters cannot have default values'.format(kind)
                raise ValueError(msg)
        self._default = default
        self._annotation = annotation

        if name is None:
            if kind != _POSITIONAL_ONLY:
                raise ValueError("None is not a valid name for a "
                                 "non-positional-only parameter")
            self._name = name
        else:
            name = str(name)
            if kind != _POSITIONAL_ONLY and not re.match(r'[a-z_]\w*$', name, re.I):
                msg = '{0!r} is not a valid parameter name'.format(name)
                raise ValueError(msg)
            self._name = name

        self._partial_kwarg = _partial_kwarg

    @property
    def name(self):
        return self._name

    @property
    def default(self):
        return self._default

    @property
    def annotation(self):
        return self._annotation

    @property
    def kind(self):
        return self._kind

    def replace(self, name=_void, kind=_void, annotation=_void,
                default=_void, _partial_kwarg=_void):
        '''Creates a customized copy of the Parameter.'''

        if name is _void:
            name = self._name

        if kind is _void:
            kind = self._kind

        if annotation is _void:
            annotation = self._annotation

        if default is _void:
            default = self._default

        if _partial_kwarg is _void:
            _partial_kwarg = self._partial_kwarg

        return type(self)(name, kind, default=default, annotation=annotation,
                          _partial_kwarg=_partial_kwarg)

    def __str__(self):
        kind = self.kind

        formatted = self._name
        if kind == _POSITIONAL_ONLY:
            if formatted is None:
                formatted = ''
            formatted = '<{0}>'.format(formatted)

        # Add annotation and default value
        if self._annotation is not _empty:
            formatted = '{0}:{1}'.format(formatted,
                                         formatannotation(self._annotation))

        if self._default is not _empty:
            formatted = '{0}={1}'.format(formatted, repr(self._default))

        if kind == _VAR_POSITIONAL:
            formatted = '*' + formatted
        elif kind == _VAR_KEYWORD:
            formatted = '**' + formatted

        return formatted

    def __repr__(self):
        return '<{0} at {1:#x} {2!r}>'.format(self.__class__.__name__,
                                              id(self), self.name)

    def __hash__(self):
        msg = "unhashable type: '{0}'".format(self.__class__.__name__)
        raise TypeError(msg)

    def __eq__(self, other):
        return (issubclass(other.__class__, Parameter) and
                self._name == other._name and
                self._kind == other._kind and
                self._default == other._default and
                self._annotation == other._annotation)

    def __ne__(self, other):
        return not self.__eq__(other)


class BoundArguments(object):
    '''Result of `Signature.bind` call. Holds the mapping of arguments
    to the function's parameters.

    Has the following public attributes:

    * arguments : OrderedDict
        An ordered mutable mapping of parameters' names to arguments' values.
        Does not contain arguments' default values.
    * signature : Signature
        The Signature object that created this instance.
    * args : tuple
        Tuple of positional arguments values.
    * kwargs : dict
        Dict of keyword arguments values.
    '''

    def __init__(self, signature, arguments):
        self.arguments = arguments
        self._signature = signature

    @property
    def signature(self):
        return self._signature

    @property
    def args(self):
        args = []
        for param_name, param in self._signature.parameters.items():
            if (param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY) or
                            param._partial_kwarg):
                # Keyword arguments mapped by 'functools.partial'
                # (Parameter._partial_kwarg is True) are mapped
                # in 'BoundArguments.kwargs', along with VAR_KEYWORD &
                # KEYWORD_ONLY
                break

            try:
                arg = self.arguments[param_name]
            except KeyError:
                # We're done here. Other arguments
                # will be mapped in 'BoundArguments.kwargs'
                break
            else:
                if param.kind == _VAR_POSITIONAL:
                    # *args
                    args.extend(arg)
                else:
                    # plain argument
                    args.append(arg)

        return tuple(args)

    @property
    def kwargs(self):
        kwargs = {}
        kwargs_started = False
        for param_name, param in self._signature.parameters.items():
            if not kwargs_started:
                if (param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY) or
                                param._partial_kwarg):
                    kwargs_started = True
                else:
                    if param_name not in self.arguments:
                        kwargs_started = True
                        continue

            if not kwargs_started:
                continue

            try:
                arg = self.arguments[param_name]
            except KeyError:
                pass
            else:
                if param.kind == _VAR_KEYWORD:
                    # **kwargs
                    kwargs.update(arg)
                else:
                    # plain keyword argument
                    kwargs[param_name] = arg

        return kwargs

    def __hash__(self):
        msg = "unhashable type: '{0}'".format(self.__class__.__name__)
        raise TypeError(msg)

    def __eq__(self, other):
        return (issubclass(other.__class__, BoundArguments) and
                self.signature == other.signature and
                self.arguments == other.arguments)

    def __ne__(self, other):
        return not self.__eq__(other)


class Signature(object):
    '''A Signature object represents the overall signature of a function.
    It stores a Parameter object for each parameter accepted by the
    function, as well as information specific to the function itself.

    A Signature object has the following public attributes and methods:

    * parameters : OrderedDict
        An ordered mapping of parameters' names to the corresponding
        Parameter objects (keyword-only arguments are in the same order
        as listed in `code.co_varnames`).
    * return_annotation : object
        The annotation for the return type of the function if specified.
        If the function has no annotation for its return type, this
        attribute is not set.
    * bind(*args, **kwargs) -> BoundArguments
        Creates a mapping from positional and keyword arguments to
        parameters.
    * bind_partial(*args, **kwargs) -> BoundArguments
        Creates a partial mapping from positional and keyword arguments
        to parameters (simulating 'functools.partial' behavior.)
    '''

    __slots__ = ('_return_annotation', '_parameters')

    _parameter_cls = Parameter
    _bound_arguments_cls = BoundArguments

    empty = _empty

    def __init__(self, parameters=None, return_annotation=_empty,
                 __validate_parameters__=True):
        '''Constructs Signature from the given list of Parameter
        objects and 'return_annotation'. All arguments are optional.
        '''

        if parameters is None:
            params = OrderedDict()
        else:
            if __validate_parameters__:
                params = OrderedDict()
                top_kind = _POSITIONAL_ONLY

                for idx, param in enumerate(parameters):
                    kind = param.kind
                    if kind < top_kind:
                        msg = 'wrong parameter order: {0} before {1}'
                        msg = msg.format(top_kind, param.kind)
                        raise ValueError(msg)
                    else:
                        top_kind = kind

                    name = param.name
                    if name is None:
                        name = str(idx)
                        param = param.replace(name=name)

                    if name in params:
                        msg = 'duplicate parameter name: {0!r}'.format(name)
                        raise ValueError(msg)
                    params[name] = param
            else:
                params = OrderedDict(((param.name, param)
                                      for param in parameters))

        self._parameters = params
        self._return_annotation = return_annotation

    @classmethod
    def from_function(cls, func):
        '''Constructs Signature for the given python function'''

        if not isinstance(func, types.FunctionType):
            raise TypeError('{0!r} is not a Python function'.format(func))

        Parameter = cls._parameter_cls

        # Parameter information.
        func_code = func.__code__
        pos_count = func_code.co_argcount
        arg_names = func_code.co_varnames
        positional = tuple(arg_names[:pos_count])
        keyword_only_count = getattr(func_code, 'co_kwonlyargcount', 0)
        keyword_only = arg_names[pos_count:(pos_count + keyword_only_count)]
        annotations = getattr(func, '__annotations__', {})
        defaults = func.__defaults__
        kwdefaults = getattr(func, '__kwdefaults__', None)

        if defaults:
            pos_default_count = len(defaults)
        else:
            pos_default_count = 0

        parameters = []

        # Non-keyword-only parameters w/o defaults.
        non_default_count = pos_count - pos_default_count
        for name in positional[:non_default_count]:
            annotation = annotations.get(name, _empty)
            parameters.append(Parameter(name, annotation=annotation,
                                        kind=_POSITIONAL_OR_KEYWORD))

        # ... w/ defaults.
        for offset, name in enumerate(positional[non_default_count:]):
            annotation = annotations.get(name, _empty)
            parameters.append(Parameter(name, annotation=annotation,
                                        kind=_POSITIONAL_OR_KEYWORD,
                                        default=defaults[offset]))

        # *args
        if func_code.co_flags & 0x04:
            name = arg_names[pos_count + keyword_only_count]
            annotation = annotations.get(name, _empty)
            parameters.append(Parameter(name, annotation=annotation,
                                        kind=_VAR_POSITIONAL))

        # Keyword-only parameters.
        for name in keyword_only:
            default = _empty
            if kwdefaults is not None:
                default = kwdefaults.get(name, _empty)

            annotation = annotations.get(name, _empty)
            parameters.append(Parameter(name, annotation=annotation,
                                        kind=_KEYWORD_ONLY,
                                        default=default))
        # **kwargs
        if func_code.co_flags & 0x08:
            index = pos_count + keyword_only_count
            if func_code.co_flags & 0x04:
                index += 1

            name = arg_names[index]
            annotation = annotations.get(name, _empty)
            parameters.append(Parameter(name, annotation=annotation,
                                        kind=_VAR_KEYWORD))

        return cls(parameters,
                   return_annotation=annotations.get('return', _empty),
                   __validate_parameters__=False)

    @property
    def parameters(self):
        try:
            return types.MappingProxyType(self._parameters)
        except AttributeError:
            return OrderedDict(self._parameters.items())

    @property
    def return_annotation(self):
        return self._return_annotation

    def replace(self, parameters=_void, return_annotation=_void):
        '''Creates a customized copy of the Signature.
        Pass 'parameters' and/or 'return_annotation' arguments
        to override them in the new copy.
        '''

        if parameters is _void:
            parameters = self.parameters.values()

        if return_annotation is _void:
            return_annotation = self._return_annotation

        return type(self)(parameters,
                          return_annotation=return_annotation)

    def __hash__(self):
        msg = "unhashable type: '{0}'".format(self.__class__.__name__)
        raise TypeError(msg)

    def __eq__(self, other):
        if (not issubclass(type(other), Signature) or
                    self.return_annotation != other.return_annotation or
                    len(self.parameters) != len(other.parameters)):
            return False

        other_positions = dict((param, idx)
                               for idx, param in enumerate(other.parameters.keys()))

        for idx, (param_name, param) in enumerate(self.parameters.items()):
            if param.kind == _KEYWORD_ONLY:
                try:
                    other_param = other.parameters[param_name]
                except KeyError:
                    return False
                else:
                    if param != other_param:
                        return False
            else:
                try:
                    other_idx = other_positions[param_name]
                except KeyError:
                    return False
                else:
                    if (idx != other_idx or
                                    param != other.parameters[param_name]):
                        return False

        return True

    def __ne__(self, other):
        return not self.__eq__(other)

    def _bind(self, args, kwargs, partial=False):
        '''Private method. Don't use directly.'''

        arguments = OrderedDict()

        parameters = iter(self.parameters.values())
        parameters_ex = ()
        arg_vals = iter(args)

        if partial:
            # Support for binding arguments to 'functools.partial' objects.
            # See 'functools.partial' case in 'signature()' implementation
            # for details.
            for param_name, param in self.parameters.items():
                if (param._partial_kwarg and param_name not in kwargs):
                    # Simulating 'functools.partial' behavior
                    kwargs[param_name] = param.default

        while True:
            # Let's iterate through the positional arguments and corresponding
            # parameters
            try:
                arg_val = next(arg_vals)
            except StopIteration:
                # No more positional arguments
                try:
                    param = next(parameters)
                except StopIteration:
                    # No more parameters. That's it. Just need to check that
                    # we have no `kwargs` after this while loop
                    break
                else:
                    if param.kind == _VAR_POSITIONAL:
                        # That's OK, just empty *args. Let's start parsing
                        # kwargs
                        break
                    elif param.name in kwargs:
                        if param.kind == _POSITIONAL_ONLY:
                            msg = '{arg!r} parameter is positional only, ' \
                                  'but was passed as a keyword'
                            msg = msg.format(arg=param.name)
                            raise TypeError(msg)
                        parameters_ex = (param,)
                        break
                    elif (param.kind == _VAR_KEYWORD or
                                                param.default is not _empty):
                        # That's fine too - we have a default value for this
                        # parameter. So, lets start parsing `kwargs`, starting
                        # with the current parameter
                        parameters_ex = (param,)
                        break
                    else:
                        if partial:
                            parameters_ex = (param,)
                            break
                        else:
                            msg = '{arg!r} parameter lacking default value'
                            msg = msg.format(arg=param.name)
                            raise TypeError(msg)
            else:
                # We have a positional argument to process
                try:
                    param = next(parameters)
                except StopIteration:
                    raise TypeError('too many positional arguments')
                else:
                    if param.kind in (_VAR_KEYWORD, _KEYWORD_ONLY):
                        # Looks like we have no parameter for this positional
                        # argument
                        raise TypeError('too many positional arguments')

                    if param.kind == _VAR_POSITIONAL:
                        # We have an '*args'-like argument, let's fill it with
                        # all positional arguments we have left and move on to
                        # the next phase
                        values = [arg_val]
                        values.extend(arg_vals)
                        arguments[param.name] = tuple(values)
                        break

                    if param.name in kwargs:
                        raise TypeError('multiple values for argument '
                                        '{arg!r}'.format(arg=param.name))

                    arguments[param.name] = arg_val

        # Now, we iterate through the remaining parameters to process
        # keyword arguments
        kwargs_param = None
        for param in itertools.chain(parameters_ex, parameters):
            if param.kind == _POSITIONAL_ONLY:
                # This should never happen in case of a properly built
                # Signature object (but let's have this check here
                # to ensure correct behaviour just in case)
                raise TypeError('{arg!r} parameter is positional only, '
                                'but was passed as a keyword'. \
                                format(arg=param.name))

            if param.kind == _VAR_KEYWORD:
                # Memorize that we have a '**kwargs'-like parameter
                kwargs_param = param
                continue

            param_name = param.name
            try:
                arg_val = kwargs.pop(param_name)
            except KeyError:
                # We have no value for this parameter. It's fine though,
                # if it has a default value, or it is an '*args'-like
                # parameter, left alone by the processing of positional
                # arguments.
                if (not partial and param.kind != _VAR_POSITIONAL and
                                                param.default is _empty):
                    raise TypeError('{arg!r} parameter lacking default value'. \
                                    format(arg=param_name))

            else:
                arguments[param_name] = arg_val

        if kwargs:
            if kwargs_param is not None:
                # Process our '**kwargs'-like parameter
                arguments[kwargs_param.name] = kwargs
            else:
                raise TypeError('too many keyword arguments %r' % kwargs)

        return self._bound_arguments_cls(self, arguments)

    def bind(*args, **kwargs):
        '''Get a BoundArguments object, that maps the passed `args`
        and `kwargs` to the function's signature. Raises `TypeError`
        if the passed arguments can not be bound.
        '''
        return args[0]._bind(args[1:], kwargs)

    def bind_partial(self, *args, **kwargs):
        '''Get a BoundArguments object, that partially maps the
        passed `args` and `kwargs` to the function's signature.
        Raises `TypeError` if the passed arguments can not be bound.
        '''
        return self._bind(args, kwargs, partial=True)

    def __str__(self):
        result = []
        render_kw_only_separator = True
        for idx, param in enumerate(self.parameters.values()):
            formatted = str(param)

            kind = param.kind
            if kind == _VAR_POSITIONAL:
                # OK, we have an '*args'-like parameter, so we won't need
                # a '*' to separate keyword-only arguments
                render_kw_only_separator = False
            elif kind == _KEYWORD_ONLY and render_kw_only_separator:
                # We have a keyword-only parameter to render and we haven't
                # rendered an '*args'-like parameter before, so add a '*'
                # separator to the parameters list ("foo(arg1, *, arg2)" case)
                result.append('*')
                # This condition should be only triggered once, so
                # reset the flag
                render_kw_only_separator = False

            result.append(formatted)

        rendered = '({0})'.format(', '.join(result))

        if self.return_annotation is not _empty:
            anno = formatannotation(self.return_annotation)
            rendered += ' -> {0}'.format(anno)

        return rendered
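
A minimal usage sketch for this backport (illustrative only; `search` is a
hypothetical function, not part of funcsigs or Mylar):

    from funcsigs import signature

    def search(series, issue=None, *providers, **options):
        pass

    sig = signature(search)
    print(str(sig))              # (series, issue=None, *providers, **options)
    ba = sig.bind('Spawn', 250)  # raises TypeError if the arguments cannot bind
    print(ba.arguments)          # OrderedDict([('series', 'Spawn'), ('issue', 250)])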

@ -0,0 +1 @@

__version__ = "1.0.2"

@ -0,0 +1,19 @@

Copyright (c) 2003-2009 Stuart Bishop <stuart@stuartbishop.net>

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

@ -0,0 +1,575 @@

pytz - World Timezone Definitions for Python
============================================

:Author: Stuart Bishop <stuart@stuartbishop.net>

Introduction
~~~~~~~~~~~~

pytz brings the Olson tz database into Python. This library allows
accurate and cross platform timezone calculations using Python 2.4
or higher. It also solves the issue of ambiguous times at the end
of daylight saving time, which you can read more about in the Python
Library Reference (``datetime.tzinfo``).

Almost all of the Olson timezones are supported.

.. note::

    This library differs from the documented Python API for
    tzinfo implementations; if you want to create local wallclock
    times you need to use the ``localize()`` method documented in this
    document. In addition, if you perform date arithmetic on local
    times that cross DST boundaries, the result may be in an incorrect
    timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get
    2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A
    ``normalize()`` method is provided to correct this. Unfortunately these
    issues cannot be resolved without modifying the Python datetime
    implementation (see PEP-431).


Installation
~~~~~~~~~~~~

This package can either be installed from a .egg file using setuptools,
or from the tarball using the standard Python distutils.

If you are installing from a tarball, run the following command as an
administrative user::

    python setup.py install

If you are installing using setuptools, you don't even need to download
anything as the latest version will be downloaded for you
from the Python package index::

    easy_install --upgrade pytz

If you already have the .egg file, you can use that too::

    easy_install pytz-2008g-py2.6.egg


Example & Usage
~~~~~~~~~~~~~~~

Localized times and date arithmetic
-----------------------------------

>>> from datetime import datetime, timedelta
>>> from pytz import timezone
>>> import pytz
>>> utc = pytz.utc
>>> utc.zone
'UTC'
>>> eastern = timezone('US/Eastern')
>>> eastern.zone
'US/Eastern'
>>> amsterdam = timezone('Europe/Amsterdam')
>>> fmt = '%Y-%m-%d %H:%M:%S %Z%z'

This library only supports two ways of building a localized time. The
first is to use the ``localize()`` method provided by the pytz library.
This is used to localize a naive datetime (datetime with no timezone
information):

>>> loc_dt = eastern.localize(datetime(2002, 10, 27, 6, 0, 0))
>>> print(loc_dt.strftime(fmt))
2002-10-27 06:00:00 EST-0500

The second way of building a localized time is by converting an existing
localized time using the standard ``astimezone()`` method:

>>> ams_dt = loc_dt.astimezone(amsterdam)
>>> ams_dt.strftime(fmt)
'2002-10-27 12:00:00 CET+0100'

Unfortunately using the tzinfo argument of the standard datetime
constructors ''does not work'' with pytz for many timezones.

>>> datetime(2002, 10, 27, 12, 0, 0, tzinfo=amsterdam).strftime(fmt)
'2002-10-27 12:00:00 LMT+0020'

It is safe for timezones without daylight saving transitions though, such
as UTC:

>>> datetime(2002, 10, 27, 12, 0, 0, tzinfo=pytz.utc).strftime(fmt)
'2002-10-27 12:00:00 UTC+0000'

The preferred way of dealing with times is to always work in UTC,
converting to localtime only when generating output to be read
by humans.

>>> utc_dt = datetime(2002, 10, 27, 6, 0, 0, tzinfo=utc)
>>> loc_dt = utc_dt.astimezone(eastern)
>>> loc_dt.strftime(fmt)
'2002-10-27 01:00:00 EST-0500'

This library also allows you to do date arithmetic using local
times, although it is more complicated than working in UTC as you
need to use the ``normalize()`` method to handle daylight saving time
and other timezone transitions. In this example, ``loc_dt`` is set
to the instant when daylight saving time ends in the US/Eastern
timezone.

>>> before = loc_dt - timedelta(minutes=10)
>>> before.strftime(fmt)
'2002-10-27 00:50:00 EST-0500'
>>> eastern.normalize(before).strftime(fmt)
'2002-10-27 01:50:00 EDT-0400'
>>> after = eastern.normalize(before + timedelta(minutes=20))
>>> after.strftime(fmt)
'2002-10-27 01:10:00 EST-0500'
|
||||
local times is not recommended. Unfortunately, you cannot just pass
|
||||
a ``tzinfo`` argument when constructing a datetime (see the next
|
||||
section for more details)
|
||||
|
||||
>>> dt = datetime(2002, 10, 27, 1, 30, 0)
|
||||
>>> dt1 = eastern.localize(dt, is_dst=True)
|
||||
>>> dt1.strftime(fmt)
|
||||
'2002-10-27 01:30:00 EDT-0400'
|
||||
>>> dt2 = eastern.localize(dt, is_dst=False)
|
||||
>>> dt2.strftime(fmt)
|
||||
'2002-10-27 01:30:00 EST-0500'
|
||||
|
||||
Converting between timezones also needs special attention. We also need
|
||||
to use the ``normalize()`` method to ensure the conversion is correct.
|
||||
|
||||
>>> utc_dt = utc.localize(datetime.utcfromtimestamp(1143408899))
|
||||
>>> utc_dt.strftime(fmt)
|
||||
'2006-03-26 21:34:59 UTC+0000'
|
||||
>>> au_tz = timezone('Australia/Sydney')
|
||||
>>> au_dt = au_tz.normalize(utc_dt.astimezone(au_tz))
|
||||
>>> au_dt.strftime(fmt)
|
||||
'2006-03-27 08:34:59 AEDT+1100'
|
||||
>>> utc_dt2 = utc.normalize(au_dt.astimezone(utc))
|
||||
>>> utc_dt2.strftime(fmt)
|
||||
'2006-03-26 21:34:59 UTC+0000'
|
||||
|
||||
You can take shortcuts when dealing with the UTC side of timezone
|
||||
conversions. ``normalize()`` and ``localize()`` are not really
|
||||
necessary when there are no daylight saving time transitions to
|
||||
deal with.
|
||||
|
||||
>>> utc_dt = datetime.utcfromtimestamp(1143408899).replace(tzinfo=utc)
|
||||
>>> utc_dt.strftime(fmt)
|
||||
'2006-03-26 21:34:59 UTC+0000'
|
||||
>>> au_tz = timezone('Australia/Sydney')
|
||||
>>> au_dt = au_tz.normalize(utc_dt.astimezone(au_tz))
|
||||
>>> au_dt.strftime(fmt)
|
||||
'2006-03-27 08:34:59 AEDT+1100'
|
||||
>>> utc_dt2 = au_dt.astimezone(utc)
|
||||
>>> utc_dt2.strftime(fmt)
|
||||
'2006-03-26 21:34:59 UTC+0000'
|
||||
|
||||
|
||||
``tzinfo`` API
|
||||
--------------
|
||||
|
||||
The ``tzinfo`` instances returned by the ``timezone()`` function have
|
||||
been extended to cope with ambiguous times by adding an ``is_dst``
|
||||
parameter to the ``utcoffset()``, ``dst()`` && ``tzname()`` methods.
|
||||
|
||||
>>> tz = timezone('America/St_Johns')
|
||||
|
||||
>>> normal = datetime(2009, 9, 1)
|
||||
>>> ambiguous = datetime(2009, 10, 31, 23, 30)
|
||||
|
||||
The ``is_dst`` parameter is ignored for most timestamps. It is only used
|
||||
during DST transition ambiguous periods to resulve that ambiguity.
|
||||
|
||||
>>> tz.utcoffset(normal, is_dst=True)
|
||||
datetime.timedelta(-1, 77400)
|
||||
>>> tz.dst(normal, is_dst=True)
|
||||
datetime.timedelta(0, 3600)
|
||||
>>> tz.tzname(normal, is_dst=True)
|
||||
'NDT'
|
||||
|
||||
>>> tz.utcoffset(ambiguous, is_dst=True)
|
||||
datetime.timedelta(-1, 77400)
|
||||
>>> tz.dst(ambiguous, is_dst=True)
|
||||
datetime.timedelta(0, 3600)
|
||||
>>> tz.tzname(ambiguous, is_dst=True)
|
||||
'NDT'
|
||||
|
||||
>>> tz.utcoffset(normal, is_dst=False)
|
||||
datetime.timedelta(-1, 77400)
|
||||
>>> tz.dst(normal, is_dst=False)
|
||||
datetime.timedelta(0, 3600)
|
||||
>>> tz.tzname(normal, is_dst=False)
|
||||
'NDT'
|
||||
|
||||
>>> tz.utcoffset(ambiguous, is_dst=False)
|
||||
datetime.timedelta(-1, 73800)
|
||||
>>> tz.dst(ambiguous, is_dst=False)
|
||||
datetime.timedelta(0)
|
||||
>>> tz.tzname(ambiguous, is_dst=False)
|
||||
'NST'
|
||||
|
||||
If ``is_dst`` is not specified, ambiguous timestamps will raise
|
||||
an ``pytz.exceptions.AmbiguousTimeError`` exception.
|
||||
|
||||
>>> tz.utcoffset(normal)
|
||||
datetime.timedelta(-1, 77400)
|
||||
>>> tz.dst(normal)
|
||||
datetime.timedelta(0, 3600)
|
||||
>>> tz.tzname(normal)
|
||||
'NDT'
|
||||
|
||||
>>> import pytz.exceptions
|
||||
>>> try:
|
||||
... tz.utcoffset(ambiguous)
|
||||
... except pytz.exceptions.AmbiguousTimeError:
|
||||
... print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)
|
||||
pytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00
|
||||
>>> try:
|
||||
... tz.dst(ambiguous)
|
||||
... except pytz.exceptions.AmbiguousTimeError:
|
||||
... print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)
|
||||
pytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00
|
||||
>>> try:
|
||||
... tz.tzname(ambiguous)
|
||||
... except pytz.exceptions.AmbiguousTimeError:
|
||||
... print('pytz.exceptions.AmbiguousTimeError: %s' % ambiguous)
|
||||
pytz.exceptions.AmbiguousTimeError: 2009-10-31 23:30:00
|
||||
|
||||
|
||||
Problems with Localtime
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The major problem we have to deal with is that certain datetimes
|
||||
may occur twice in a year. For example, in the US/Eastern timezone
|
||||
on the last Sunday morning in October, the following sequence
|
||||
happens:
|
||||
|
||||
- 01:00 EDT occurs
|
||||
- 1 hour later, instead of 2:00am the clock is turned back 1 hour
|
||||
and 01:00 happens again (this time 01:00 EST)
|
||||
|
||||
In fact, every instant between 01:00 and 02:00 occurs twice. This means
|
||||
that if you try and create a time in the 'US/Eastern' timezone
|
||||
the standard datetime syntax, there is no way to specify if you meant
|
||||
before of after the end-of-daylight-saving-time transition. Using the
|
||||
pytz custom syntax, the best you can do is make an educated guess:
|
||||
|
||||
>>> loc_dt = eastern.localize(datetime(2002, 10, 27, 1, 30, 00))
|
||||
>>> loc_dt.strftime(fmt)
|
||||
'2002-10-27 01:30:00 EST-0500'
|
||||
|
||||
As you can see, the system has chosen one for you and there is a 50%
|
||||
chance of it being out by one hour. For some applications, this does
|
||||
not matter. However, if you are trying to schedule meetings with people
|
||||
in different timezones or analyze log files it is not acceptable.
|
||||
|
||||
The best and simplest solution is to stick with using UTC. The pytz
|
||||
package encourages using UTC for internal timezone representation by
|
||||
including a special UTC implementation based on the standard Python
|
||||
reference implementation in the Python documentation.
|
||||
|
||||
The UTC timezone unpickles to be the same instance, and pickles to a
|
||||
smaller size than other pytz tzinfo instances. The UTC implementation
|
||||
can be obtained as pytz.utc, pytz.UTC, or pytz.timezone('UTC').
|
||||
|
||||
>>> import pickle, pytz
|
||||
>>> dt = datetime(2005, 3, 1, 14, 13, 21, tzinfo=utc)
|
||||
>>> naive = dt.replace(tzinfo=None)
|
||||
>>> p = pickle.dumps(dt, 1)
|
||||
>>> naive_p = pickle.dumps(naive, 1)
|
||||
>>> len(p) - len(naive_p)
|
||||
17
|
||||
>>> new = pickle.loads(p)
|
||||
>>> new == dt
|
||||
True
|
||||
>>> new is dt
|
||||
False
|
||||
>>> new.tzinfo is dt.tzinfo
|
||||
True
|
||||
>>> pytz.utc is pytz.UTC is pytz.timezone('UTC')
|
||||
True
|
||||
|
||||
Note that some other timezones are commonly thought of as the same (GMT,
|
||||
Greenwich, Universal, etc.). The definition of UTC is distinct from these
|
||||
other timezones, and they are not equivalent. For this reason, they will
|
||||
not compare the same in Python.
|
||||
|
||||
>>> utc == pytz.timezone('GMT')
|
||||
False
|
||||
|
||||
See the section `What is UTC`_, below.
|
||||
|
||||
If you insist on working with local times, this library provides a
|
||||
facility for constructing them unambiguously:
|
||||
|
||||
>>> loc_dt = datetime(2002, 10, 27, 1, 30, 00)
|
||||
>>> est_dt = eastern.localize(loc_dt, is_dst=True)
|
||||
>>> edt_dt = eastern.localize(loc_dt, is_dst=False)
|
||||
>>> print(est_dt.strftime(fmt) + ' / ' + edt_dt.strftime(fmt))
|
||||
2002-10-27 01:30:00 EDT-0400 / 2002-10-27 01:30:00 EST-0500
|
||||
|
||||
If you pass None as the is_dst flag to localize(), pytz will refuse to
|
||||
guess and raise exceptions if you try to build ambiguous or non-existent
|
||||
times.
|
||||
|
||||
For example, 1:30am on 27th Oct 2002 happened twice in the US/Eastern
|
||||
timezone when the clocks where put back at the end of Daylight Saving
|
||||
Time:
|
||||
|
||||
>>> dt = datetime(2002, 10, 27, 1, 30, 00)
|
||||
>>> try:
|
||||
... eastern.localize(dt, is_dst=None)
|
||||
... except pytz.exceptions.AmbiguousTimeError:
|
||||
... print('pytz.exceptions.AmbiguousTimeError: %s' % dt)
|
||||
pytz.exceptions.AmbiguousTimeError: 2002-10-27 01:30:00
|
||||
|
||||
Similarly, 2:30am on 7th April 2002 never happened at all in the
|
||||
US/Eastern timezone, as the clocks where put forward at 2:00am skipping
|
||||
the entire hour:
|
||||
|
||||
>>> dt = datetime(2002, 4, 7, 2, 30, 00)
|
||||
>>> try:
|
||||
... eastern.localize(dt, is_dst=None)
|
||||
... except pytz.exceptions.NonExistentTimeError:
|
||||
... print('pytz.exceptions.NonExistentTimeError: %s' % dt)
|
||||
pytz.exceptions.NonExistentTimeError: 2002-04-07 02:30:00
|
||||
|
||||
Both of these exceptions share a common base class to make error handling
|
||||
easier:
|
||||
|
||||
>>> isinstance(pytz.AmbiguousTimeError(), pytz.InvalidTimeError)
|
||||
True
|
||||
>>> isinstance(pytz.NonExistentTimeError(), pytz.InvalidTimeError)
|
||||
True
|
||||
|
||||
|
||||
A special case is where countries change their timezone definitions
|
||||
with no daylight savings time switch. For example, in 1915 Warsaw
|
||||
switched from Warsaw time to Central European time with no daylight savings
|
||||
transition. So at the stroke of midnight on August 5th 1915 the clocks
|
||||
were wound back 24 minutes creating an ambiguous time period that cannot
|
||||
be specified without referring to the timezone abbreviation or the
|
||||
actual UTC offset. In this case midnight happened twice, neither time
|
||||
during a daylight saving time period. pytz handles this transition by
|
||||
treating the ambiguous period before the switch as daylight savings
|
||||
time, and the ambiguous period after as standard time.
|
||||
|
||||
|
||||
>>> warsaw = pytz.timezone('Europe/Warsaw')
|
||||
>>> amb_dt1 = warsaw.localize(datetime(1915, 8, 4, 23, 59, 59), is_dst=True)
|
||||
>>> amb_dt1.strftime(fmt)
|
||||
'1915-08-04 23:59:59 WMT+0124'
|
||||
>>> amb_dt2 = warsaw.localize(datetime(1915, 8, 4, 23, 59, 59), is_dst=False)
|
||||
>>> amb_dt2.strftime(fmt)
|
||||
'1915-08-04 23:59:59 CET+0100'
|
||||
>>> switch_dt = warsaw.localize(datetime(1915, 8, 5, 00, 00, 00), is_dst=False)
|
||||
>>> switch_dt.strftime(fmt)
|
||||
'1915-08-05 00:00:00 CET+0100'
|
||||
>>> str(switch_dt - amb_dt1)
|
||||
'0:24:01'
|
||||
>>> str(switch_dt - amb_dt2)
|
||||
'0:00:01'
|
||||
|
||||
The best way of creating a time during an ambiguous time period is
|
||||
by converting from another timezone such as UTC:
|
||||
|
||||
>>> utc_dt = datetime(1915, 8, 4, 22, 36, tzinfo=pytz.utc)
|
||||
>>> utc_dt.astimezone(warsaw).strftime(fmt)
|
||||
'1915-08-04 23:36:00 CET+0100'
|
||||
|
||||
The standard Python way of handling all these ambiguities is not to
|
||||
handle them, such as demonstrated in this example using the US/Eastern
|
||||
timezone definition from the Python documentation (Note that this
|
||||
implementation only works for dates between 1987 and 2006 - it is
|
||||
included for tests only!):
|
||||
|
||||
>>> from pytz.reference import Eastern # pytz.reference only for tests
|
||||
>>> dt = datetime(2002, 10, 27, 0, 30, tzinfo=Eastern)
|
||||
>>> str(dt)
|
||||
'2002-10-27 00:30:00-04:00'
|
||||
>>> str(dt + timedelta(hours=1))
|
||||
'2002-10-27 01:30:00-05:00'
|
||||
>>> str(dt + timedelta(hours=2))
|
||||
'2002-10-27 02:30:00-05:00'
|
||||
>>> str(dt + timedelta(hours=3))
|
||||
'2002-10-27 03:30:00-05:00'
|
||||
|
||||
Notice the first two results? At first glance you might think they are
|
||||
correct, but taking the UTC offset into account you find that they are
|
||||
actually two hours appart instead of the 1 hour we asked for.
|
||||
|
||||
>>> from pytz.reference import UTC # pytz.reference only for tests
|
||||
>>> str(dt.astimezone(UTC))
|
||||
'2002-10-27 04:30:00+00:00'
|
||||
>>> str((dt + timedelta(hours=1)).astimezone(UTC))
|
||||
'2002-10-27 06:30:00+00:00'
|
||||
|
||||
|
||||
Country Information
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
A mechanism is provided to access the timezones commonly in use
|
||||
for a particular country, looked up using the ISO 3166 country code.
|
||||
It returns a list of strings that can be used to retrieve the relevant
|
||||
tzinfo instance using ``pytz.timezone()``:
|
||||
|
||||
>>> print(' '.join(pytz.country_timezones['nz']))
|
||||
Pacific/Auckland Pacific/Chatham
|
||||
|
||||
The Olson database comes with a ISO 3166 country code to English country
|
||||
name mapping that pytz exposes as a dictionary:
|
||||
|
||||
>>> print(pytz.country_names['nz'])
|
||||
New Zealand
|
||||
|
||||
|
||||
What is UTC
|
||||
~~~~~~~~~~~
|
||||
|
||||
'UTC' is `Coordinated Universal Time`_. It is a successor to, but distinct
|
||||
from, Greenwich Mean Time (GMT) and the various definitions of Universal
|
||||
Time. UTC is now the worldwide standard for regulating clocks and time
|
||||
measurement.
|
||||
|
||||
All other timezones are defined relative to UTC, and include offsets like
|
||||
UTC+0800 - hours to add or subtract from UTC to derive the local time. No
|
||||
daylight saving time occurs in UTC, making it a useful timezone to perform
|
||||
date arithmetic without worrying about the confusion and ambiguities caused
|
||||
by daylight saving time transitions, your country changing its timezone, or
|
||||
mobile computers that roam through multiple timezones.
|
||||
|
||||
.. _Coordinated Universal Time: https://en.wikipedia.org/wiki/Coordinated_Universal_Time
|
||||
|
||||
|
||||
Helpers
|
||||
~~~~~~~
|
||||
|
||||
There are two lists of timezones provided.
|
||||
|
||||
``all_timezones`` is the exhaustive list of the timezone names that can
|
||||
be used.
|
||||
|
||||
>>> from pytz import all_timezones
|
||||
>>> len(all_timezones) >= 500
|
||||
True
|
||||
>>> 'Etc/Greenwich' in all_timezones
|
||||
True
|
||||
|
||||
``common_timezones`` is a list of useful, current timezones. It doesn't
|
||||
contain deprecated zones or historical zones, except for a few I've
|
||||
deemed in common usage, such as US/Eastern (open a bug report if you
|
||||
think other timezones are deserving of being included here). It is also
|
||||
a sequence of strings.
|
||||
|
||||
>>> from pytz import common_timezones
|
||||
>>> len(common_timezones) < len(all_timezones)
|
||||
True
|
||||
>>> 'Etc/Greenwich' in common_timezones
|
||||
False
|
||||
>>> 'Australia/Melbourne' in common_timezones
|
||||
True
|
||||
>>> 'US/Eastern' in common_timezones
|
||||
True
|
||||
>>> 'Canada/Eastern' in common_timezones
|
||||
True
|
||||
>>> 'US/Pacific-New' in all_timezones
|
||||
True
|
||||
>>> 'US/Pacific-New' in common_timezones
|
||||
False
|
||||
|
||||
Both ``common_timezones`` and ``all_timezones`` are alphabetically
|
||||
sorted:
|
||||
|
||||
>>> common_timezones_dupe = common_timezones[:]
|
||||
>>> common_timezones_dupe.sort()
|
||||
>>> common_timezones == common_timezones_dupe
|
||||
True
|
||||
>>> all_timezones_dupe = all_timezones[:]
|
||||
>>> all_timezones_dupe.sort()
|
||||
>>> all_timezones == all_timezones_dupe
|
||||
True
|
||||
|
||||
``all_timezones`` and ``common_timezones`` are also available as sets.
|
||||
|
||||
>>> from pytz import all_timezones_set, common_timezones_set
|
||||
>>> 'US/Eastern' in all_timezones_set
|
||||
True
|
||||
>>> 'US/Eastern' in common_timezones_set
|
||||
True
|
||||
>>> 'Australia/Victoria' in common_timezones_set
|
||||
False
|
||||
|
||||
You can also retrieve lists of timezones used by particular countries
|
||||
using the ``country_timezones()`` function. It requires an ISO-3166
|
||||
two letter country code.
|
||||
|
||||
>>> from pytz import country_timezones
|
||||
>>> print(' '.join(country_timezones('ch')))
|
||||
Europe/Zurich
|
||||
>>> print(' '.join(country_timezones('CH')))
|
||||
Europe/Zurich
|
||||
|
||||
|
||||
License
|
||||
~~~~~~~
|
||||
|
||||
MIT license.
|
||||
|
||||
This code is also available as part of Zope 3 under the Zope Public
|
||||
License, Version 2.1 (ZPL).
|
||||
|
||||
I'm happy to relicense this code if necessary for inclusion in other
|
||||
open source projects.
|
||||
|
||||
|
||||
Latest Versions
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
This package will be updated after releases of the Olson timezone
|
||||
database. The latest version can be downloaded from the `Python Package
|
||||
Index <http://pypi.python.org/pypi/pytz/>`_. The code that is used
|
||||
to generate this distribution is hosted on launchpad.net and available
|
||||
using the `Bazaar version control system <http://bazaar-vcs.org>`_
|
||||
using::
|
||||
|
||||
bzr branch lp:pytz
|
||||
|
||||
Announcements of new releases are made on
|
||||
`Launchpad <https://launchpad.net/pytz>`_, and the
|
||||
`Atom feed <http://feeds.launchpad.net/pytz/announcements.atom>`_
|
||||
hosted there.
|
||||
|
||||
|
||||
Bugs, Feature Requests & Patches
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Bugs can be reported using `Launchpad <https://bugs.launchpad.net/pytz>`_.
|
||||
|
||||
|
||||
Issues & Limitations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Offsets from UTC are rounded to the nearest whole minute, so timezones
|
||||
such as Europe/Amsterdam pre 1937 will be up to 30 seconds out. This
|
||||
is a limitation of the Python datetime library.
|
||||
|
||||
- If you think a timezone definition is incorrect, I probably can't fix
|
||||
it. pytz is a direct translation of the Olson timezone database, and
|
||||
changes to the timezone definitions need to be made to this source.
|
||||
If you find errors they should be reported to the time zone mailing
|
||||
list, linked from http://www.iana.org/time-zones.
|
||||
|
||||
|
||||
Further Reading
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
More info than you want to know about timezones:
|
||||
http://www.twinsun.com/tz/tz-link.htm
|
||||
|
||||
|
||||
Contact
|
||||
~~~~~~~
|
||||
|
||||
Stuart Bishop <stuart@stuartbishop.net>
|
||||
|
||||
|

File diff suppressed because it is too large

@ -0,0 +1,48 @@

'''
Custom exceptions raised by pytz.
'''

__all__ = [
    'UnknownTimeZoneError', 'InvalidTimeError', 'AmbiguousTimeError',
    'NonExistentTimeError',
    ]


class UnknownTimeZoneError(KeyError):
    '''Exception raised when pytz is passed an unknown timezone.

    >>> isinstance(UnknownTimeZoneError(), LookupError)
    True

    This class is actually a subclass of KeyError to provide backwards
    compatibility with code relying on the undocumented behavior of earlier
    pytz releases.

    >>> isinstance(UnknownTimeZoneError(), KeyError)
    True
    '''
    pass


class InvalidTimeError(Exception):
    '''Base class for invalid time exceptions.'''


class AmbiguousTimeError(InvalidTimeError):
    '''Exception raised when attempting to create an ambiguous wallclock time.

    At the end of a DST transition period, a particular wallclock time will
    occur twice (once before the clocks are set back, once after). Both
    possibilities may be correct, unless further information is supplied.

    See DstTzInfo.normalize() for more info.
    '''


class NonExistentTimeError(InvalidTimeError):
    '''Exception raised when attempting to create a wallclock time that
    cannot exist.

    At the start of a DST transition period, the wallclock time jumps forward.
    The instants jumped over never occur.
    '''
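
A short handling sketch (illustrative, not part of the module): because both
concrete errors derive from InvalidTimeError, callers that pass is_dst=None
can trap ambiguous and non-existent wallclock times in one place:

    import pytz
    from datetime import datetime

    eastern = pytz.timezone('US/Eastern')
    # One ambiguous time (DST fall-back) and one non-existent time (spring-forward).
    for probe in (datetime(2002, 10, 27, 1, 30), datetime(2002, 4, 7, 2, 30)):
        try:
            eastern.localize(probe, is_dst=None)
        except pytz.InvalidTimeError as exc:
            print('%s: %s' % (type(exc).__name__, probe))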

@ -0,0 +1,168 @@

from threading import RLock
try:
    from UserDict import DictMixin
except ImportError:
    from collections import Mapping as DictMixin


# With lazy loading, we might end up with multiple threads triggering
# it at the same time. We need a lock.
_fill_lock = RLock()


class LazyDict(DictMixin):
    """Dictionary populated on first use."""
    data = None

    def __getitem__(self, key):
        if self.data is None:
            _fill_lock.acquire()
            try:
                if self.data is None:
                    self._fill()
            finally:
                _fill_lock.release()
        return self.data[key.upper()]

    def __contains__(self, key):
        if self.data is None:
            _fill_lock.acquire()
            try:
                if self.data is None:
                    self._fill()
            finally:
                _fill_lock.release()
        return key in self.data

    def __iter__(self):
        if self.data is None:
            _fill_lock.acquire()
            try:
                if self.data is None:
                    self._fill()
            finally:
                _fill_lock.release()
        return iter(self.data)

    def __len__(self):
        if self.data is None:
            _fill_lock.acquire()
            try:
                if self.data is None:
                    self._fill()
            finally:
                _fill_lock.release()
        return len(self.data)

    def keys(self):
        if self.data is None:
            _fill_lock.acquire()
            try:
                if self.data is None:
                    self._fill()
            finally:
                _fill_lock.release()
        return self.data.keys()


class LazyList(list):
    """List populated on first use."""

    _props = [
        '__str__', '__repr__', '__unicode__',
        '__hash__', '__sizeof__', '__cmp__',
        '__lt__', '__le__', '__eq__', '__ne__', '__gt__', '__ge__',
        'append', 'count', 'index', 'extend', 'insert', 'pop', 'remove',
        'reverse', 'sort', '__add__', '__radd__', '__iadd__', '__mul__',
        '__rmul__', '__imul__', '__contains__', '__len__', '__nonzero__',
        '__getitem__', '__setitem__', '__delitem__', '__iter__',
        '__reversed__', '__getslice__', '__setslice__', '__delslice__']

    def __new__(cls, fill_iter=None):

        if fill_iter is None:
            return list()

        # We need a new class as we will be dynamically messing with its
        # methods.
        class LazyList(list):
            pass

        fill_iter = [fill_iter]

        def lazy(name):
            def _lazy(self, *args, **kw):
                _fill_lock.acquire()
                try:
                    if len(fill_iter) > 0:
                        list.extend(self, fill_iter.pop())
                        for method_name in cls._props:
                            delattr(LazyList, method_name)
                finally:
                    _fill_lock.release()
                return getattr(list, name)(self, *args, **kw)
            return _lazy

        for name in cls._props:
            setattr(LazyList, name, lazy(name))

        new_list = LazyList()
        return new_list

# Not all versions of Python declare the same magic methods.
# Filter out properties that don't exist in this version of Python
# from the list.
LazyList._props = [prop for prop in LazyList._props if hasattr(list, prop)]


class LazySet(set):
    """Set populated on first use."""

    _props = (
        '__str__', '__repr__', '__unicode__',
        '__hash__', '__sizeof__', '__cmp__',
        '__lt__', '__le__', '__eq__', '__ne__', '__gt__', '__ge__',
        '__contains__', '__len__', '__nonzero__',
        '__getitem__', '__setitem__', '__delitem__', '__iter__',
        '__sub__', '__and__', '__xor__', '__or__',
        '__rsub__', '__rand__', '__rxor__', '__ror__',
        '__isub__', '__iand__', '__ixor__', '__ior__',
        'add', 'clear', 'copy', 'difference', 'difference_update',
        'discard', 'intersection', 'intersection_update', 'isdisjoint',
        'issubset', 'issuperset', 'pop', 'remove',
        'symmetric_difference', 'symmetric_difference_update',
        'union', 'update')

    def __new__(cls, fill_iter=None):

        if fill_iter is None:
            return set()

        class LazySet(set):
            pass

        fill_iter = [fill_iter]

        def lazy(name):
            def _lazy(self, *args, **kw):
                _fill_lock.acquire()
                try:
                    if len(fill_iter) > 0:
                        for i in fill_iter.pop():
                            set.add(self, i)
                        for method_name in cls._props:
                            delattr(LazySet, method_name)
                finally:
                    _fill_lock.release()
                return getattr(set, name)(self, *args, **kw)
            return _lazy

        for name in cls._props:
            setattr(LazySet, name, lazy(name))

        new_set = LazySet()
        return new_set

# Not all versions of Python declare the same magic methods.
# Filter out properties that don't exist in this version of Python
# from the list.
LazySet._props = [prop for prop in LazySet._props if hasattr(set, prop)]
|
|
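A sketch of the lazy containers in action (not part of this commit): the iterable handed to LazyList is not consumed until the first list operation, after which the wrapper methods are removed and the object behaves as a plain list:

def expensive_source():
    print('evaluating...')
    for i in range(3):
        yield i

items = LazyList(expensive_source())  # nothing printed yet
print(len(items))                     # prints 'evaluating...' then 3
print(items)                          # [0, 1, 2]; later ops go straight to list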
@@ -0,0 +1,127 @@
'''
Reference tzinfo implementations from the Python docs.
Used for testing against as they are only correct for the years
1987 to 2006. Do not use these for real code.
'''

from datetime import tzinfo, timedelta, datetime
from pytz import utc, UTC, HOUR, ZERO

# A class building tzinfo objects for fixed-offset time zones.
# Note that FixedOffset(0, "UTC") is a different way to build a
# UTC tzinfo object.

class FixedOffset(tzinfo):
    """Fixed offset in minutes east from UTC."""

    def __init__(self, offset, name):
        self.__offset = timedelta(minutes=offset)
        self.__name = name

    def utcoffset(self, dt):
        return self.__offset

    def tzname(self, dt):
        return self.__name

    def dst(self, dt):
        return ZERO
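For example, FixedOffset can attach a constant offset to a naive datetime (a sketch, not part of this commit, assuming the imports and class above are in scope):

from datetime import datetime

est_ish = FixedOffset(-300, 'EST')  # minutes east of UTC
dt = datetime(2006, 6, 1, 12, 0, tzinfo=est_ish)
print(dt.utcoffset())  # -1 day, 19:00:00 (i.e. five hours behind UTC)
print(dt.tzname())     # EST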
# A class capturing the platform's idea of local time.

import time as _time

STDOFFSET = timedelta(seconds=-_time.timezone)
if _time.daylight:
    DSTOFFSET = timedelta(seconds=-_time.altzone)
else:
    DSTOFFSET = STDOFFSET

DSTDIFF = DSTOFFSET - STDOFFSET

class LocalTimezone(tzinfo):

    def utcoffset(self, dt):
        if self._isdst(dt):
            return DSTOFFSET
        else:
            return STDOFFSET

    def dst(self, dt):
        if self._isdst(dt):
            return DSTDIFF
        else:
            return ZERO

    def tzname(self, dt):
        return _time.tzname[self._isdst(dt)]

    def _isdst(self, dt):
        tt = (dt.year, dt.month, dt.day,
              dt.hour, dt.minute, dt.second,
              dt.weekday(), 0, -1)
        stamp = _time.mktime(tt)
        tt = _time.localtime(stamp)
        return tt.tm_isdst > 0

Local = LocalTimezone()

# A complete implementation of current DST rules for major US time zones.

def first_sunday_on_or_after(dt):
    days_to_go = 6 - dt.weekday()
    if days_to_go:
        dt += timedelta(days_to_go)
    return dt

# In the US, DST starts at 2am (standard time) on the first Sunday in April.
DSTSTART = datetime(1, 4, 1, 2)
# and ends at 2am (DST time; 1am standard time) on the last Sunday of Oct.
# which is the first Sunday on or after Oct 25.
DSTEND = datetime(1, 10, 25, 1)

class USTimeZone(tzinfo):

    def __init__(self, hours, reprname, stdname, dstname):
        self.stdoffset = timedelta(hours=hours)
        self.reprname = reprname
        self.stdname = stdname
        self.dstname = dstname

    def __repr__(self):
        return self.reprname

    def tzname(self, dt):
        if self.dst(dt):
            return self.dstname
        else:
            return self.stdname

    def utcoffset(self, dt):
        return self.stdoffset + self.dst(dt)

    def dst(self, dt):
        if dt is None or dt.tzinfo is None:
            # An exception may be sensible here, in one or both cases.
            # It depends on how you want to treat them. The default
            # fromutc() implementation (called by the default astimezone()
            # implementation) passes a datetime with dt.tzinfo is self.
            return ZERO
        assert dt.tzinfo is self

        # Find first Sunday in April & the last in October.
        start = first_sunday_on_or_after(DSTSTART.replace(year=dt.year))
        end = first_sunday_on_or_after(DSTEND.replace(year=dt.year))

        # Can't compare naive to aware objects, so strip the timezone from
        # dt first.
        if start <= dt.replace(tzinfo=None) < end:
            return HOUR
        else:
            return ZERO

Eastern = USTimeZone(-5, "Eastern", "EST", "EDT")
Central = USTimeZone(-6, "Central", "CST", "CDT")
Mountain = USTimeZone(-7, "Mountain", "MST", "MDT")
Pacific = USTimeZone(-8, "Pacific", "PST", "PDT")
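Under the 1987-2006 rules these instances encode, a July date lands in daylight time and a January date in standard time (a sketch, not part of this commit, assuming Eastern from above is in scope):

from datetime import datetime

summer = datetime(2002, 7, 1, 12, 0, tzinfo=Eastern)
winter = datetime(2002, 1, 1, 12, 0, tzinfo=Eastern)
print(summer.tzname(), summer.utcoffset())  # EDT -1 day, 20:00:00
print(winter.tzname(), winter.utcoffset())  # EST -1 day, 19:00:00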
@@ -0,0 +1,137 @@
#!/usr/bin/env python
'''
$Id: tzfile.py,v 1.8 2004/06/03 00:15:24 zenzen Exp $
'''

try:
    from cStringIO import StringIO
except ImportError:
    from io import StringIO
from datetime import datetime, timedelta
from struct import unpack, calcsize

from pytz.tzinfo import StaticTzInfo, DstTzInfo, memorized_ttinfo
from pytz.tzinfo import memorized_datetime, memorized_timedelta

def _byte_string(s):
    """Cast a string or byte string to an ASCII byte string."""
    return s.encode('US-ASCII')

_NULL = _byte_string('\0')

def _std_string(s):
    """Cast a string or byte string to an ASCII string."""
    return str(s.decode('US-ASCII'))

def build_tzinfo(zone, fp):
    head_fmt = '>4s c 15x 6l'
    head_size = calcsize(head_fmt)
    (magic, format, ttisgmtcnt, ttisstdcnt, leapcnt, timecnt,
        typecnt, charcnt) = unpack(head_fmt, fp.read(head_size))

    # Make sure it is a tzfile(5) file
    assert magic == _byte_string('TZif'), 'Got magic %s' % repr(magic)

    # Read out the transition times, localtime indices and ttinfo structures.
    data_fmt = '>%(timecnt)dl %(timecnt)dB %(ttinfo)s %(charcnt)ds' % dict(
        timecnt=timecnt, ttinfo='lBB'*typecnt, charcnt=charcnt)
    data_size = calcsize(data_fmt)
    data = unpack(data_fmt, fp.read(data_size))

    # make sure we unpacked the right number of values
    assert len(data) == 2 * timecnt + 3 * typecnt + 1
    transitions = [memorized_datetime(trans)
                   for trans in data[:timecnt]]
    lindexes = list(data[timecnt:2 * timecnt])
    ttinfo_raw = data[2 * timecnt:-1]
    tznames_raw = data[-1]
    del data

    # Process ttinfo into separate structs
    ttinfo = []
    tznames = {}
    i = 0
    while i < len(ttinfo_raw):
        # have we looked up this timezone name yet?
        tzname_offset = ttinfo_raw[i + 2]
        if tzname_offset not in tznames:
            nul = tznames_raw.find(_NULL, tzname_offset)
            if nul < 0:
                nul = len(tznames_raw)
            tznames[tzname_offset] = _std_string(
                tznames_raw[tzname_offset:nul])
        ttinfo.append((ttinfo_raw[i],
                       bool(ttinfo_raw[i + 1]),
                       tznames[tzname_offset]))
        i += 3

    # Now build the timezone object
    if len(transitions) == 0:
        ttinfo[0][0], ttinfo[0][2]
        cls = type(zone, (StaticTzInfo,), dict(
            zone=zone,
            _utcoffset=memorized_timedelta(ttinfo[0][0]),
            _tzname=ttinfo[0][2]))
    else:
        # Early dates use the first standard time ttinfo
        i = 0
        while ttinfo[i][1]:
            i += 1
        if ttinfo[i] == ttinfo[lindexes[0]]:
            transitions[0] = datetime.min
        else:
            transitions.insert(0, datetime.min)
            lindexes.insert(0, i)

        # calculate transition info
        transition_info = []
        for i in range(len(transitions)):
            inf = ttinfo[lindexes[i]]
            utcoffset = inf[0]
            if not inf[1]:
                dst = 0
            else:
                for j in range(i - 1, -1, -1):
                    prev_inf = ttinfo[lindexes[j]]
                    if not prev_inf[1]:
                        break
                dst = inf[0] - prev_inf[0]  # dst offset

                # Bad dst? Look further. DST > 24 hours happens when
                # a timezone has moved across the international dateline.
                if dst <= 0 or dst > 3600 * 3:
                    for j in range(i + 1, len(transitions)):
                        stdinf = ttinfo[lindexes[j]]
                        if not stdinf[1]:
                            dst = inf[0] - stdinf[0]
                            if dst > 0:
                                break  # Found a useful std time.

            tzname = inf[2]

            # Round utcoffset and dst to the nearest minute or the
            # datetime library will complain. Conversions to these timezones
            # might be up to plus or minus 30 seconds out, but it is
            # the best we can do.
            utcoffset = int((utcoffset + 30) // 60) * 60
            dst = int((dst + 30) // 60) * 60
            transition_info.append(memorized_ttinfo(utcoffset, dst, tzname))

        cls = type(zone, (DstTzInfo,), dict(
            zone=zone,
            _utc_transition_times=transitions,
            _transition_info=transition_info))

    return cls()

if __name__ == '__main__':
    import os.path
    from pprint import pprint
    base = os.path.join(os.path.dirname(__file__), 'zoneinfo')
    tz = build_tzinfo('Australia/Melbourne',
                      open(os.path.join(base, 'Australia', 'Melbourne'), 'rb'))
    tz = build_tzinfo('US/Eastern',
                      open(os.path.join(base, 'US', 'Eastern'), 'rb'))
    pprint(tz._utc_transition_times)
    #print tz.asPython(4)
    #print tz.transitions_mapping
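The header unpack above follows the tzfile(5) layout: a 4-byte magic, a version byte, 15 reserved bytes, then six 32-bit counts. A sketch of peeking at that header directly (not part of this commit; the /usr/share/zoneinfo path assumes a typical Unix install):

from struct import unpack, calcsize

head_fmt = '>4s c 15x 6l'
with open('/usr/share/zoneinfo/America/New_York', 'rb') as fp:
    (magic, version, ttisgmtcnt, ttisstdcnt, leapcnt,
        timecnt, typecnt, charcnt) = unpack(head_fmt,
                                            fp.read(calcsize(head_fmt)))
print(magic, timecnt, typecnt)  # b'TZif' plus the transition/type counts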
@@ -0,0 +1,564 @@
'''Base classes and helpers for building zone specific tzinfo classes'''

from datetime import datetime, timedelta, tzinfo
from bisect import bisect_right
try:
    set
except NameError:
    from sets import Set as set

import pytz
from pytz.exceptions import AmbiguousTimeError, NonExistentTimeError

__all__ = []

_timedelta_cache = {}
def memorized_timedelta(seconds):
    '''Create only one instance of each distinct timedelta'''
    try:
        return _timedelta_cache[seconds]
    except KeyError:
        delta = timedelta(seconds=seconds)
        _timedelta_cache[seconds] = delta
        return delta

_epoch = datetime.utcfromtimestamp(0)
_datetime_cache = {0: _epoch}
def memorized_datetime(seconds):
    '''Create only one instance of each distinct datetime'''
    try:
        return _datetime_cache[seconds]
    except KeyError:
        # NB. We can't just do datetime.utcfromtimestamp(seconds) as this
        # fails with negative values under Windows (Bug #90096)
        dt = _epoch + timedelta(seconds=seconds)
        _datetime_cache[seconds] = dt
        return dt

_ttinfo_cache = {}
def memorized_ttinfo(*args):
    '''Create only one instance of each distinct tuple'''
    try:
        return _ttinfo_cache[args]
    except KeyError:
        ttinfo = (
            memorized_timedelta(args[0]),
            memorized_timedelta(args[1]),
            args[2]
        )
        _ttinfo_cache[args] = ttinfo
        return ttinfo

_notime = memorized_timedelta(0)

def _to_seconds(td):
    '''Convert a timedelta to seconds'''
    return td.seconds + td.days * 24 * 60 * 60
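A sketch of what the memoization buys (not part of this commit, assuming the helpers above are in scope): equal values share a single cached instance, so the per-zone tables stay small and the _tzinfos lookups used below stay cheap:

a = memorized_timedelta(3600)
b = memorized_timedelta(3600)
print(a is b)                 # True - one shared timedelta per distinct value
print(memorized_datetime(0))  # 1970-01-01 00:00:00 (the cached epoch)
print(_to_seconds(a))         # 3600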
class BaseTzInfo(tzinfo):
    # Overridden in subclass
    _utcoffset = None
    _tzname = None
    zone = None

    def __str__(self):
        return self.zone


class StaticTzInfo(BaseTzInfo):
    '''A timezone that has a constant offset from UTC

    These timezones are rare, as most locations have changed their
    offset at some point in their history
    '''
    def fromutc(self, dt):
        '''See datetime.tzinfo.fromutc'''
        if dt.tzinfo is not None and dt.tzinfo is not self:
            raise ValueError('fromutc: dt.tzinfo is not self')
        return (dt + self._utcoffset).replace(tzinfo=self)

    def utcoffset(self, dt, is_dst=None):
        '''See datetime.tzinfo.utcoffset

        is_dst is ignored for StaticTzInfo, and exists only to
        retain compatibility with DstTzInfo.
        '''
        return self._utcoffset

    def dst(self, dt, is_dst=None):
        '''See datetime.tzinfo.dst

        is_dst is ignored for StaticTzInfo, and exists only to
        retain compatibility with DstTzInfo.
        '''
        return _notime

    def tzname(self, dt, is_dst=None):
        '''See datetime.tzinfo.tzname

        is_dst is ignored for StaticTzInfo, and exists only to
        retain compatibility with DstTzInfo.
        '''
        return self._tzname

    def localize(self, dt, is_dst=False):
        '''Convert naive time to local time'''
        if dt.tzinfo is not None:
            raise ValueError('Not naive datetime (tzinfo is already set)')
        return dt.replace(tzinfo=self)

    def normalize(self, dt, is_dst=False):
        '''Correct the timezone information on the given datetime.

        This is normally a no-op, as StaticTzInfo timezones never have
        ambiguous cases to correct:

        >>> from pytz import timezone
        >>> gmt = timezone('GMT')
        >>> isinstance(gmt, StaticTzInfo)
        True
        >>> dt = datetime(2011, 5, 8, 1, 2, 3, tzinfo=gmt)
        >>> gmt.normalize(dt) is dt
        True

        The supported method of converting between timezones is to use
        datetime.astimezone(). Currently normalize() also works:

        >>> la = timezone('America/Los_Angeles')
        >>> dt = la.localize(datetime(2011, 5, 7, 1, 2, 3))
        >>> fmt = '%Y-%m-%d %H:%M:%S %Z (%z)'
        >>> gmt.normalize(dt).strftime(fmt)
        '2011-05-07 08:02:03 GMT (+0000)'
        '''
        if dt.tzinfo is self:
            return dt
        if dt.tzinfo is None:
            raise ValueError('Naive time - no tzinfo set')
        return dt.astimezone(self)

    def __repr__(self):
        return '<StaticTzInfo %r>' % (self.zone,)

    def __reduce__(self):
        # Special pickle so the zone remains a singleton and to cope with
        # database changes.
        return pytz._p, (self.zone,)
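For a constant-offset zone, localize() is just a tzinfo replace, since there are no ambiguous or skipped wallclock times to resolve (a sketch, not part of this commit):

from datetime import datetime
import pytz

gmt = pytz.timezone('GMT')  # a StaticTzInfo in practice
print(gmt.localize(datetime(2017, 1, 1, 12, 0)))  # 2017-01-01 12:00:00+00:00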
class DstTzInfo(BaseTzInfo):
    '''A timezone that has a variable offset from UTC

    The offset might change if daylight saving time comes into effect,
    or at a point in history when the region decides to change their
    timezone definition.
    '''
    # Overridden in subclass
    _utc_transition_times = None  # Sorted list of DST transition times in UTC
    _transition_info = None  # [(utcoffset, dstoffset, tzname)] corresponding
                             # to _utc_transition_times entries
    zone = None

    # Set in __init__
    _tzinfos = None
    _dst = None  # DST offset

    def __init__(self, _inf=None, _tzinfos=None):
        if _inf:
            self._tzinfos = _tzinfos
            self._utcoffset, self._dst, self._tzname = _inf
        else:
            _tzinfos = {}
            self._tzinfos = _tzinfos
            self._utcoffset, self._dst, self._tzname = self._transition_info[0]
            _tzinfos[self._transition_info[0]] = self
            for inf in self._transition_info[1:]:
                if inf not in _tzinfos:
                    _tzinfos[inf] = self.__class__(inf, _tzinfos)

    def fromutc(self, dt):
        '''See datetime.tzinfo.fromutc'''
        if (dt.tzinfo is not None
                and getattr(dt.tzinfo, '_tzinfos', None) is not self._tzinfos):
            raise ValueError('fromutc: dt.tzinfo is not self')
        dt = dt.replace(tzinfo=None)
        idx = max(0, bisect_right(self._utc_transition_times, dt) - 1)
        inf = self._transition_info[idx]
        return (dt + inf[0]).replace(tzinfo=self._tzinfos[inf])

    def normalize(self, dt):
        '''Correct the timezone information on the given datetime

        If date arithmetic crosses DST boundaries, the tzinfo
        is not magically adjusted. This method normalizes the
        tzinfo to the correct one.

        To test, first we need to do some setup

        >>> from pytz import timezone
        >>> utc = timezone('UTC')
        >>> eastern = timezone('US/Eastern')
        >>> fmt = '%Y-%m-%d %H:%M:%S %Z (%z)'

        We next create a datetime right on an end-of-DST transition point,
        the instant when the wallclocks are wound back one hour.

        >>> utc_dt = datetime(2002, 10, 27, 6, 0, 0, tzinfo=utc)
        >>> loc_dt = utc_dt.astimezone(eastern)
        >>> loc_dt.strftime(fmt)
        '2002-10-27 01:00:00 EST (-0500)'

        Now, if we subtract a few minutes from it, note that the timezone
        information has not changed.

        >>> before = loc_dt - timedelta(minutes=10)
        >>> before.strftime(fmt)
        '2002-10-27 00:50:00 EST (-0500)'

        But we can fix that by calling the normalize method

        >>> before = eastern.normalize(before)
        >>> before.strftime(fmt)
        '2002-10-27 01:50:00 EDT (-0400)'

        The supported method of converting between timezones is to use
        datetime.astimezone(). Currently, normalize() also works:

        >>> th = timezone('Asia/Bangkok')
        >>> am = timezone('Europe/Amsterdam')
        >>> dt = th.localize(datetime(2011, 5, 7, 1, 2, 3))
        >>> fmt = '%Y-%m-%d %H:%M:%S %Z (%z)'
        >>> am.normalize(dt).strftime(fmt)
        '2011-05-06 20:02:03 CEST (+0200)'
        '''
        if dt.tzinfo is None:
            raise ValueError('Naive time - no tzinfo set')

        # Convert dt in localtime to UTC
        offset = dt.tzinfo._utcoffset
        dt = dt.replace(tzinfo=None)
        dt = dt - offset
        # convert it back, and return it
        return self.fromutc(dt)

    def localize(self, dt, is_dst=False):
        '''Convert naive time to local time.

        This method should be used to construct localtimes, rather
        than passing a tzinfo argument to a datetime constructor.

        is_dst is used to determine the correct timezone in the ambiguous
        period at the end of daylight saving time.

        >>> from pytz import timezone
        >>> fmt = '%Y-%m-%d %H:%M:%S %Z (%z)'
        >>> amdam = timezone('Europe/Amsterdam')
        >>> dt = datetime(2004, 10, 31, 2, 0, 0)
        >>> loc_dt1 = amdam.localize(dt, is_dst=True)
        >>> loc_dt2 = amdam.localize(dt, is_dst=False)
        >>> loc_dt1.strftime(fmt)
        '2004-10-31 02:00:00 CEST (+0200)'
        >>> loc_dt2.strftime(fmt)
        '2004-10-31 02:00:00 CET (+0100)'
        >>> str(loc_dt2 - loc_dt1)
        '1:00:00'

        Use is_dst=None to raise an AmbiguousTimeError for ambiguous
        times at the end of daylight saving time

        >>> try:
        ...     loc_dt1 = amdam.localize(dt, is_dst=None)
        ... except AmbiguousTimeError:
        ...     print('Ambiguous')
        Ambiguous

        is_dst defaults to False

        >>> amdam.localize(dt) == amdam.localize(dt, False)
        True

        is_dst is also used to determine the correct timezone in the
        wallclock times jumped over at the start of daylight saving time.

        >>> pacific = timezone('US/Pacific')
        >>> dt = datetime(2008, 3, 9, 2, 0, 0)
        >>> ploc_dt1 = pacific.localize(dt, is_dst=True)
        >>> ploc_dt2 = pacific.localize(dt, is_dst=False)
        >>> ploc_dt1.strftime(fmt)
        '2008-03-09 02:00:00 PDT (-0700)'
        >>> ploc_dt2.strftime(fmt)
        '2008-03-09 02:00:00 PST (-0800)'
        >>> str(ploc_dt2 - ploc_dt1)
        '1:00:00'

        Use is_dst=None to raise a NonExistentTimeError for these skipped
        times.

        >>> try:
        ...     loc_dt1 = pacific.localize(dt, is_dst=None)
        ... except NonExistentTimeError:
        ...     print('Non-existent')
        Non-existent
        '''
        if dt.tzinfo is not None:
            raise ValueError('Not naive datetime (tzinfo is already set)')

        # Find the two best possibilities.
        possible_loc_dt = set()
        for delta in [timedelta(days=-1), timedelta(days=1)]:
            loc_dt = dt + delta
            idx = max(0, bisect_right(
                self._utc_transition_times, loc_dt) - 1)
            inf = self._transition_info[idx]
            tzinfo = self._tzinfos[inf]
            loc_dt = tzinfo.normalize(dt.replace(tzinfo=tzinfo))
            if loc_dt.replace(tzinfo=None) == dt:
                possible_loc_dt.add(loc_dt)

        if len(possible_loc_dt) == 1:
            return possible_loc_dt.pop()

        # If there are no possibly correct timezones, we are attempting
        # to convert a time that never happened - the time period jumped
        # during the start-of-DST transition period.
        if len(possible_loc_dt) == 0:
            # If we refuse to guess, raise an exception.
            if is_dst is None:
                raise NonExistentTimeError(dt)

            # If we are forcing the pre-DST side of the DST transition, we
            # obtain the correct timezone by winding the clock forward a few
            # hours.
            elif is_dst:
                return self.localize(
                    dt + timedelta(hours=6), is_dst=True) - timedelta(hours=6)

            # If we are forcing the post-DST side of the DST transition, we
            # obtain the correct timezone by winding the clock back.
            else:
                return self.localize(
                    dt - timedelta(hours=6), is_dst=False) + timedelta(hours=6)

        # If we get this far, we have multiple possible timezones - this
        # is an ambiguous case occurring during the end-of-DST transition.

        # If told to be strict, raise an exception since we have an
        # ambiguous case
        if is_dst is None:
            raise AmbiguousTimeError(dt)

        # Filter out the possibilities that don't match the requested
        # is_dst
        filtered_possible_loc_dt = [
            p for p in possible_loc_dt
            if bool(p.tzinfo._dst) == is_dst
        ]

        # Hopefully we only have one possibility left. Return it.
        if len(filtered_possible_loc_dt) == 1:
            return filtered_possible_loc_dt[0]

        if len(filtered_possible_loc_dt) == 0:
            filtered_possible_loc_dt = list(possible_loc_dt)

        # If we get this far, we are in a weird timezone transition
        # where the clocks have been wound back but is_dst is the same
        # in both (eg. Europe/Warsaw 1915 when they switched to CET).
        # At this point, we just have to guess unless we allow more
        # hints to be passed in (such as the UTC offset or abbreviation),
        # but that is just getting silly.
        #
        # Choose the earliest (by UTC) applicable timezone if is_dst=True
        # Choose the latest (by UTC) applicable timezone if is_dst=False
        # i.e., behave like end-of-DST transition
        dates = {}  # utc -> local
        for local_dt in filtered_possible_loc_dt:
            utc_time = local_dt.replace(tzinfo=None) - local_dt.tzinfo._utcoffset
            assert utc_time not in dates
            dates[utc_time] = local_dt
        return dates[[min, max][not is_dst](dates)]

    def utcoffset(self, dt, is_dst=None):
        '''See datetime.tzinfo.utcoffset

        The is_dst parameter may be used to remove ambiguity during DST
        transitions.

        >>> from pytz import timezone
        >>> tz = timezone('America/St_Johns')
        >>> ambiguous = datetime(2009, 10, 31, 23, 30)

        >>> tz.utcoffset(ambiguous, is_dst=False)
        datetime.timedelta(-1, 73800)

        >>> tz.utcoffset(ambiguous, is_dst=True)
        datetime.timedelta(-1, 77400)

        >>> try:
        ...     tz.utcoffset(ambiguous)
        ... except AmbiguousTimeError:
        ...     print('Ambiguous')
        Ambiguous

        '''
        if dt is None:
            return None
        elif dt.tzinfo is not self:
            dt = self.localize(dt, is_dst)
            return dt.tzinfo._utcoffset
        else:
            return self._utcoffset

    def dst(self, dt, is_dst=None):
        '''See datetime.tzinfo.dst

        The is_dst parameter may be used to remove ambiguity during DST
        transitions.

        >>> from pytz import timezone
        >>> tz = timezone('America/St_Johns')

        >>> normal = datetime(2009, 9, 1)

        >>> tz.dst(normal)
        datetime.timedelta(0, 3600)
        >>> tz.dst(normal, is_dst=False)
        datetime.timedelta(0, 3600)
        >>> tz.dst(normal, is_dst=True)
        datetime.timedelta(0, 3600)

        >>> ambiguous = datetime(2009, 10, 31, 23, 30)

        >>> tz.dst(ambiguous, is_dst=False)
        datetime.timedelta(0)
        >>> tz.dst(ambiguous, is_dst=True)
        datetime.timedelta(0, 3600)
        >>> try:
        ...     tz.dst(ambiguous)
        ... except AmbiguousTimeError:
        ...     print('Ambiguous')
        Ambiguous

        '''
        if dt is None:
            return None
        elif dt.tzinfo is not self:
            dt = self.localize(dt, is_dst)
            return dt.tzinfo._dst
        else:
            return self._dst

    def tzname(self, dt, is_dst=None):
        '''See datetime.tzinfo.tzname

        The is_dst parameter may be used to remove ambiguity during DST
        transitions.

        >>> from pytz import timezone
        >>> tz = timezone('America/St_Johns')

        >>> normal = datetime(2009, 9, 1)

        >>> tz.tzname(normal)
        'NDT'
        >>> tz.tzname(normal, is_dst=False)
        'NDT'
        >>> tz.tzname(normal, is_dst=True)
        'NDT'

        >>> ambiguous = datetime(2009, 10, 31, 23, 30)

        >>> tz.tzname(ambiguous, is_dst=False)
        'NST'
        >>> tz.tzname(ambiguous, is_dst=True)
        'NDT'
        >>> try:
        ...     tz.tzname(ambiguous)
        ... except AmbiguousTimeError:
        ...     print('Ambiguous')
        Ambiguous
        '''
        if dt is None:
            return self.zone
        elif dt.tzinfo is not self:
            dt = self.localize(dt, is_dst)
            return dt.tzinfo._tzname
        else:
            return self._tzname

    def __repr__(self):
        if self._dst:
            dst = 'DST'
        else:
            dst = 'STD'
        if self._utcoffset > _notime:
            return '<DstTzInfo %r %s+%s %s>' % (
                self.zone, self._tzname, self._utcoffset, dst
            )
        else:
            return '<DstTzInfo %r %s%s %s>' % (
                self.zone, self._tzname, self._utcoffset, dst
            )

    def __reduce__(self):
        # Special pickle so the zone remains a singleton and to cope with
        # database changes.
        return pytz._p, (
            self.zone,
            _to_seconds(self._utcoffset),
            _to_seconds(self._dst),
            self._tzname
        )
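Taken together, the intended pattern with DstTzInfo zones is localize() to attach the zone and normalize() after any arithmetic that may cross a DST boundary (a sketch, not part of this commit):

from datetime import datetime, timedelta
import pytz

eastern = pytz.timezone('US/Eastern')
dt = eastern.localize(datetime(2002, 10, 27, 1, 30), is_dst=True)
later = eastern.normalize(dt + timedelta(hours=1))
print(dt.strftime('%H:%M %Z'), '->', later.strftime('%H:%M %Z'))
# 01:30 EDT -> 01:30 EST: the wallclock repeats across the fall-back hour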
def unpickler(zone, utcoffset=None, dstoffset=None, tzname=None):
    """Factory function for unpickling pytz tzinfo instances.

    This is shared for both StaticTzInfo and DstTzInfo instances, because
    database changes could cause a zone's implementation to switch between
    these two base classes and we can't break pickles on a pytz version
    upgrade.
    """
    # Raises a KeyError if zone no longer exists, which should never happen
    # and would be a bug.
    tz = pytz.timezone(zone)

    # A StaticTzInfo - just return it
    if utcoffset is None:
        return tz

    # This pickle was created from a DstTzInfo. We need to
    # determine which of the list of tzinfo instances for this zone
    # to use in order to restore the state of any datetime instances using
    # it correctly.
    utcoffset = memorized_timedelta(utcoffset)
    dstoffset = memorized_timedelta(dstoffset)
    try:
        return tz._tzinfos[(utcoffset, dstoffset, tzname)]
    except KeyError:
        # The particular state requested in this timezone no longer exists.
        # This indicates a corrupt pickle, or the timezone database has been
        # corrected violently enough to make this particular
        # (utcoffset, dstoffset) no longer exist in the zone, or the
        # abbreviation has been changed.
        pass

    # See if we can find an entry differing only by tzname. Abbreviations
    # get changed from the initial guess by the database maintainers to
    # match reality when this information is discovered.
    for localized_tz in tz._tzinfos.values():
        if (localized_tz._utcoffset == utcoffset
                and localized_tz._dst == dstoffset):
            return localized_tz

    # This (utcoffset, dstoffset) information has been removed from the
    # zone. Add it back. This might occur when the database maintainers have
    # corrected incorrect information. datetime instances using this
    # incorrect information will continue to do so, exactly as they were
    # before being pickled. This is purely an overly paranoid safety net - I
    # doubt this will ever be needed in real life.
    inf = (utcoffset, dstoffset, tzname)
    tz._tzinfos[inf] = tz.__class__(inf, tz._tzinfos)
    return tz._tzinfos[inf]
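Because __reduce__ routes through this factory, a pickle round-trip hands back the cached instance for the zone (a sketch, not part of this commit):

import pickle
import pytz

tz = pytz.timezone('US/Eastern')
print(pickle.loads(pickle.dumps(tz)) is tz)  # True - the singleton survives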
33 binary files not shown.
Some files were not shown because too many files have changed in this diff.