Version 308
parent 7f39f3312c
commit 17a65692dc
@@ -8,6 +8,48 @@
 <div class="content">
 	<h3>changelog</h3>
 	<ul>
+		<li><h3>version 308</h3></li>
+		<ul>
+			<li>the multiple watcher will now discard new urls if it is already watching them</li>
+			<li>the multiple watcher will list x/y progress as just 'x' if x==y (making it easier to scan the list)</li>
+			<li>the multiple watcher now lists a couple of 'total' summary lines on its ui--the top lists total number of watchers and queue progress, the bottom lists the usual '23 successful, 3 deleted' line, but summed for all watchers</li>
+			<li>the multiple watcher will now warn you if you try to remove the highlit, alive or un-caught-up watchers</li>
+			<li>the multiple watcher will now resort if the thread subject (or rather, any data in the current sort column) changes (which usually happens right after it is added, when you see it change from 'unknown subject' to 'mlp is kino')</li>
+			<li>fixed an issue where multiple watchers were not unscheduling their update job correctly on page close</li>
+			<li>the booru selector in the edit subscription panel should now be in the tab traversal order for keyboard/automated focusing tasks</li>
+			<li>the boorus in that selector are now alphabetised</li>
+			<li>tag import options namespaces are now alphabetised</li>
+			<li>removed/renamed pretty much all references to 'thread' in the watcher code and ui presentation, since it can now do a bunch of other stuff. it is now just the 'watcher' and the 'multiple watcher'</li>
+			<li>deleted a bunch of old static thread watcher and page of images code from the old downloading system</li>
+			<li>added an experimental 'compact' button to advanced mode users' manage subscriptions panels. this removes urls from the selected subscriptions' caches that are no longer useful, keeping their load/save snappy. this is still in testing--be careful with it!</li>
+			<li>the hydrus splash screen now has a bare frame caption and will appear in the taskbar--which helps with some alt-tab and 'where the hell did it go?' stuff if you need to enter a password</li>
+			<li>wrote five 'reasonable defaults' buttons for the 'check timings' options panel for quick entry for different thread/subscription scenarios</li>
+			<li>added a checkbox to this panel that will swap the reactive options with a simpler single checkbox</li>
+			<li>also clarified/fleshed out the help button on this panel</li>
+			<li>fixed an important source of program instability related to page alive/dead status checking that was inadvertently talking subtly to the main gui frame even on non ui threads</li>
+			<li>improved some 'page is closed but not destroyed' test logic for pages inside a closed-but-not-destroyed notebook</li>
+			<li>fixed another small place where the db was talking to the main gui object about status bar updates in a potentially unstable way</li>
+			<li>fixed another small place where the foreground daemons were talking to the main gui frame in a trivial but potentially unstable way</li>
+			<li>played around with some taglist sizer and layout settings</li>
+			<li>the gallery and simple download pages are now a little shorter--the pause and cancel buttons are now just to the right of the status texts, rather than on their own row beneath the network job controls</li>
+			<li>the various bandwidth-overriding network jobs in the download system--like gallery page downloading--now wait 30s before overriding their bandwidth. hence these jobs will now obey the usual bandwidth rules up to a point</li>
+			<li>the simple downloader also obeys the usual bandwidth rules for 30s but no longer has a static wait, so it can run much faster in certain situations</li>
+			<li>network jobs that will override bandwidth in the future will now report that countdown in their status texts</li>
+			<li>fixed a bug in the old booru code that meant some boorus were superfluously requesting the 0th indexed page of a gallery more frequently than needed in order to reestablish a 'page size' cache. this value is now cached globally and will be replaced by a completely different system in the new gallery downloader</li>
+			<li>added a decent tooltip to the 'gallery fixed delay' widgets in the options->downloading panel</li>
+			<li>the autocomplete input should clear itself after a 'broadcast' event a bit quicker and stop some dupe inputs in certain edge cases</li>
+			<li>the tumblr url class now recognises that tumblr posts can have multiple files, which helps some source url lookup logic</li>
+			<li>added a url class for artstation file pages</li>
+			<li>the primary file import url (the one listed in the file import list) will now correctly not associate with the resulting file if its url class is so set</li>
+			<li>all the import objects now have much lower idle CPU time and thread needs and start at slightly offset times, smoothing out the thread count spikes</li>
+			<li>all the import objects will now respond quickly to changes to the underlying file import cache (like right-click->try again events)</li>
+			<li>the new job scheduling system now uses two queues--fast and slow, in order to reduce some resort/insert overhead</li>
+			<li>a couple more improvements to the new job scheduling system to smooth out spikes</li>
+			<li>if the temporary path override does not exist, the client will now complain with spammy popup messages and fall back to the default</li>
+			<li>if the temporary path override does not exist or is not writeable-to on options dialog ok, a veto exception will be raised</li>
+			<li>refactored the watcher and multiple watcher to their own file, ClientImportWatchers</li>
+			<li>misc fixes</li>
+		</ul>
 		<li><h3>version 307</h3></li>
 		<ul>
 			<li>wrote a gelbooru 0.2.5 (which matches gelbooru itself) parser in the new system. it now has some more redundancy and produces md5 hash and source urls</li>
@@ -137,14 +137,17 @@ FLAGS_BIG_INDENT = wx.SizerFlags( 0 ).Border( wx.ALL, 10 )
 FLAGS_CENTER = wx.SizerFlags( 0 ).Border( wx.ALL, 2 ).Center()
 
 FLAGS_EXPAND_PERPENDICULAR = wx.SizerFlags( 0 ).Border( wx.ALL, 2 ).Expand()
-FLAGS_EXPAND_BOTH_WAYS = wx.SizerFlags( 2 ).Border( wx.ALL, 2 ).Expand()
-FLAGS_EXPAND_DEPTH_ONLY = wx.SizerFlags( 2 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
+FLAGS_EXPAND_BOTH_WAYS = wx.SizerFlags( 5 ).Border( wx.ALL, 2 ).Expand()
+FLAGS_EXPAND_DEPTH_ONLY = wx.SizerFlags( 5 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
 
-FLAGS_SIZER_CENTER = wx.SizerFlags( 2 ).Center()
+FLAGS_EXPAND_BOTH_WAYS_POLITE = wx.SizerFlags( 3 ).Border( wx.ALL, 2 ).Expand()
+FLAGS_EXPAND_BOTH_WAYS_SHY = wx.SizerFlags( 1 ).Border( wx.ALL, 2 ).Expand()
+
+FLAGS_SIZER_CENTER = wx.SizerFlags( 5 ).Center()
 
 FLAGS_EXPAND_SIZER_PERPENDICULAR = wx.SizerFlags( 0 ).Expand()
-FLAGS_EXPAND_SIZER_BOTH_WAYS = wx.SizerFlags( 2 ).Expand()
-FLAGS_EXPAND_SIZER_DEPTH_ONLY = wx.SizerFlags( 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
+FLAGS_EXPAND_SIZER_BOTH_WAYS = wx.SizerFlags( 5 ).Expand()
+FLAGS_EXPAND_SIZER_DEPTH_ONLY = wx.SizerFlags( 5 ).Align( wx.ALIGN_CENTER_VERTICAL )
 
 FLAGS_BUTTON_SIZER = wx.SizerFlags( 0 ).Align( wx.ALIGN_RIGHT )
 
@@ -152,7 +155,7 @@ FLAGS_LONE_BUTTON = wx.SizerFlags( 0 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_RIGHT
 
 FLAGS_VCENTER = wx.SizerFlags( 0 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
 FLAGS_SIZER_VCENTER = wx.SizerFlags( 0 ).Align( wx.ALIGN_CENTRE_VERTICAL )
-FLAGS_VCENTER_EXPAND_DEPTH_ONLY = wx.SizerFlags( 2 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
+FLAGS_VCENTER_EXPAND_DEPTH_ONLY = wx.SizerFlags( 5 ).Border( wx.ALL, 2 ).Align( wx.ALIGN_CENTER_VERTICAL )
 
 DAY = 0
 WEEK = 1
@@ -268,7 +271,7 @@ NETWORK_CONTEXT_DOMAIN = 2
 NETWORK_CONTEXT_DOWNLOADER = 3
 NETWORK_CONTEXT_DOWNLOADER_PAGE = 4
 NETWORK_CONTEXT_SUBSCRIPTION = 5
-NETWORK_CONTEXT_THREAD_WATCHER_PAGE = 6
+NETWORK_CONTEXT_WATCHER_PAGE = 6
 
 network_context_type_string_lookup = {}
 
@@ -278,7 +281,7 @@ network_context_type_string_lookup[ NETWORK_CONTEXT_DOMAIN ] = 'web domain'
 network_context_type_string_lookup[ NETWORK_CONTEXT_DOWNLOADER ] = 'downloader'
 network_context_type_string_lookup[ NETWORK_CONTEXT_DOWNLOADER_PAGE ] = 'downloader page'
 network_context_type_string_lookup[ NETWORK_CONTEXT_SUBSCRIPTION ] = 'subscription'
-network_context_type_string_lookup[ NETWORK_CONTEXT_THREAD_WATCHER_PAGE ] = 'watcher page'
+network_context_type_string_lookup[ NETWORK_CONTEXT_WATCHER_PAGE ] = 'watcher page'
 
 network_context_type_description_lookup = {}
 
@@ -288,7 +291,7 @@ network_context_type_description_lookup[ NETWORK_CONTEXT_DOMAIN ] = 'Network tra
 network_context_type_description_lookup[ NETWORK_CONTEXT_DOWNLOADER ] = 'Network traffic going through a downloader. This is no longer used.'
 network_context_type_description_lookup[ NETWORK_CONTEXT_DOWNLOADER_PAGE ] = 'Network traffic going through a single downloader page. This is an ephemeral context--it will not be saved through a client restart. It is useful to throttle individual downloader pages so they give the db and other import pages time to do work.'
 network_context_type_description_lookup[ NETWORK_CONTEXT_SUBSCRIPTION ] = 'Network traffic going through a subscription query. Each query gets its own network context, named \'[subscription name]: [query text]\'.'
-network_context_type_description_lookup[ NETWORK_CONTEXT_THREAD_WATCHER_PAGE ] = 'Network traffic going through a single watcher page. This is an ephemeral context--it will not be saved through a client restart. It is useful to throttle individual watcher pages so they give the db and other import pages time to do work.'
+network_context_type_description_lookup[ NETWORK_CONTEXT_WATCHER_PAGE ] = 'Network traffic going through a single watcher page. This is an ephemeral context--it will not be saved through a client restart. It is useful to throttle individual watcher pages so they give the db and other import pages time to do work.'
 
 PAGE_FILE_COUNT_DISPLAY_ALL = 0
 PAGE_FILE_COUNT_DISPLAY_NONE = 1
@@ -74,6 +74,11 @@ class Controller( HydrusController.HydrusController ):
         HC.options = self.options
         
+        self._page_key_lock = threading.Lock()
+        
+        self._alive_page_keys = set()
+        self._closed_page_keys = set()
+        
         self._last_mouse_position = None
         self._menu_open = False
         self._previously_idle = False
@@ -109,6 +114,18 @@ class Controller( HydrusController.HydrusController ):
         self.pub( 'splash_set_status_subtext', ', '.join( names ) )
         
     
+    def AcquirePageKey( self ):
+        
+        with self._page_key_lock:
+            
+            page_key = HydrusData.GenerateKey()
+            
+            self._alive_page_keys.add( page_key )
+            
+            return page_key
+            
+        
+    
     def CallBlockingToWx( self, func, *args, **kwargs ):
         
         def wx_code( job_key ):
@@ -173,22 +190,26 @@ class Controller( HydrusController.HydrusController ):
     def CallLaterWXSafe( self, window, initial_delay, func, *args, **kwargs ):
         
+        job_scheduler = self._GetAppropriateJobScheduler( initial_delay )
+        
         call = HydrusData.Call( func, *args, **kwargs )
         
-        job = ClientThreading.WXAwareJob( self, self._job_scheduler, window, initial_delay, call )
+        job = ClientThreading.WXAwareJob( self, job_scheduler, window, initial_delay, call )
         
-        self._job_scheduler.AddJob( job )
+        job_scheduler.AddJob( job )
         
         return job
         
     
     def CallRepeatingWXSafe( self, window, initial_delay, period, func, *args, **kwargs ):
         
+        job_scheduler = self._GetAppropriateJobScheduler( period )
+        
         call = HydrusData.Call( func, *args, **kwargs )
         
-        job = ClientThreading.WXAwareRepeatingJob( self, self._job_scheduler, window, initial_delay, period, call )
+        job = ClientThreading.WXAwareRepeatingJob( self, job_scheduler, window, initial_delay, period, call )
         
-        self._job_scheduler.AddJob( job )
+        job_scheduler.AddJob( job )
         
         return job
         
     
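Per the changelog, the new job scheduling system keeps two queues, fast and slow, and `_GetAppropriateJobScheduler` (called above with the job's delay or period) picks between them so frequent inserts and resorts of short-delay jobs do not have to shuffle past the long-delay ones. A minimal sketch of that routing idea; the class name, the 1-second cutoff, and the heap-based queues here are illustrative assumptions, not the client's actual implementation:

```python
import heapq
import time

class TwoQueueScheduler:
    
    # illustrative cutoff only; the real threshold is not shown in this diff
    FAST_CUTOFF = 1.0
    
    def __init__( self ):
        
        self._fast = [] # short-delay jobs, touched often
        self._slow = [] # long-delay jobs, rarely touched
        
    
    def _GetAppropriateJobScheduler( self, delay ):
        
        # route by delay so the fast queue stays small and cheap to resort
        return self._fast if delay <= self.FAST_CUTOFF else self._slow
        
    
    def AddJob( self, delay, call ):
        
        queue = self._GetAppropriateJobScheduler( delay )
        
        # ( due_time, tiebreaker, call ) keeps the heap comparable
        heapq.heappush( queue, ( time.time() + delay, id( call ), call ) )
        
    
```

Jobs added with a short delay land on the fast queue, everything else on the slow one.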
@@ -252,11 +273,19 @@ class Controller( HydrusController.HydrusController ):
         if move_knocked_us_out_of_idle:
             
-            self.gui.SetStatusBarDirty()
+            self.pub( 'set_status_bar_dirty' )
             
         
     
+    def ClosePageKeys( self, page_keys ):
+        
+        with self._page_key_lock:
+            
+            self._closed_page_keys.update( page_keys )
+            
+        
+    
     def CreateSplash( self ):
         
         try:
@@ -480,18 +509,6 @@ class Controller( HydrusController.HydrusController ):
         return self.new_options
         
     
-    def GoodTimeToDoForegroundWork( self ):
-        
-        if self.gui:
-            
-            return not self.gui.CurrentlyBusy()
-            
-        else:
-            
-            return True
-            
-        
-    
     def InitClientFilesManager( self ):
         
         def wx_code( missing_locations ):
@@ -861,27 +878,19 @@ class Controller( HydrusController.HydrusController ):
         return self._menu_open
         
     
-    def PageCompletelyDestroyed( self, page_key ):
+    def PageAlive( self, page_key ):
         
-        if self.gui:
+        with self._page_key_lock:
             
-            return self.gui.PageCompletelyDestroyed( page_key )
-            
-        else:
-            
-            return True
+            return page_key in self._alive_page_keys
             
         
     
     def PageClosedButNotDestroyed( self, page_key ):
         
-        if self.gui:
+        with self._page_key_lock:
             
-            return self.gui.PageClosedButNotDestroyed( page_key )
-            
-        else:
-            
-            return False
+            return page_key in self._closed_page_keys
             
         
     
@@ -914,6 +923,15 @@ class Controller( HydrusController.HydrusController ):
         self.services_manager.RefreshServices()
         
     
+    def ReleasePageKey( self, page_key ):
+        
+        with self._page_key_lock:
+            
+            self._alive_page_keys.discard( page_key )
+            self._closed_page_keys.discard( page_key )
+            
+        
+    
     def ResetIdleTimer( self ):
         
         self._timestamps[ 'last_user_action' ] = HydrusData.GetNow()
@@ -1381,6 +1399,14 @@ class Controller( HydrusController.HydrusController ):
         
     
+    def UnclosePageKeys( self, page_keys ):
+        
+        with self._page_key_lock:
+            
+            self._closed_page_keys.difference_update( page_keys )
+            
+        
+    
     def WaitUntilViewFree( self ):
         
         self.WaitUntilModelFree()
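Taken together, the page-key methods added in this commit (AcquirePageKey, ClosePageKeys, UnclosePageKeys, PageAlive, PageClosedButNotDestroyed, ReleasePageKey) form a small lock-guarded registry, which is how worker threads can test page status without talking to the wx GUI frame. A standalone sketch of that lifecycle, with an integer counter standing in for HydrusData.GenerateKey():

```python
import threading

class PageKeyRegistry:
    
    def __init__( self ):
        
        self._page_key_lock = threading.Lock()
        
        self._alive_page_keys = set()
        self._closed_page_keys = set()
        
        self._next_key = 0 # stand-in for HydrusData.GenerateKey()
        
    
    def AcquirePageKey( self ):
        
        with self._page_key_lock:
            
            self._next_key += 1
            
            self._alive_page_keys.add( self._next_key )
            
            return self._next_key
            
        
    
    def ClosePageKeys( self, page_keys ):
        
        with self._page_key_lock:
            
            self._closed_page_keys.update( page_keys )
            
        
    
    def UnclosePageKeys( self, page_keys ):
        
        with self._page_key_lock:
            
            self._closed_page_keys.difference_update( page_keys )
            
        
    
    def PageAlive( self, page_key ):
        
        with self._page_key_lock:
            
            return page_key in self._alive_page_keys
            
        
    
    def PageClosedButNotDestroyed( self, page_key ):
        
        with self._page_key_lock:
            
            return page_key in self._closed_page_keys
            
        
    
    def ReleasePageKey( self, page_key ):
        
        with self._page_key_lock:
            
            self._alive_page_keys.discard( page_key )
            self._closed_page_keys.discard( page_key )
            
        
    
```

A key is alive from acquire until release; closing only marks it, so a closed-but-not-destroyed page can be unclosed again, mirroring the notebook behaviour the changelog describes.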
@@ -10302,6 +10302,36 @@ class DB( HydrusDB.HydrusDB ):
         
+        if version == 307:
+            
+            try:
+                
+                domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
+                
+                domain_manager.Initialise()
+                
+                #
+                
+                domain_manager.OverwriteDefaultURLMatches( ( 'tumblr file page', 'artstation file page' ) )
+                
+                #
+                
+                domain_manager.TryToLinkURLMatchesAndParsers()
+                
+                #
+                
+                self._SetJSONDump( domain_manager )
+                
+            except Exception as e:
+                
+                HydrusData.PrintException( e )
+                
+                message = 'Trying to update some url classes failed! Please let hydrus dev know!'
+                
+                self.pub_initial_message( message )
+                
+            
+        
         self._controller.pub( 'splash_set_title_text', 'updated db to v' + str( version + 1 ) )
         
         self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@@ -10933,10 +10963,7 @@ class DB( HydrusDB.HydrusDB ):
     def publish_status_update( self ):
         
-        if self._controller.IsBooted() and self._controller.gui:
-            
-            self._controller.gui.SetStatusBarDirty()
-            
+        self._controller.pub( 'set_status_bar_dirty' )
         
     
     def GetInitialMessages( self ):
@@ -7,130 +7,6 @@ import HydrusSerialisable
 import os
 import wx
 
-def SetDefaultBandwidthManagerRules( bandwidth_manager ):
-    
-    import ClientNetworkingContexts
-    
-    KB = 1024
-    MB = 1024 ** 2
-    GB = 1024 ** 3
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 5 ) # stop accidental spam
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60, 120 ) # smooth out heavy usage/bugspam. db and gui prob need a break
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 10 * GB ) # check your inbox lad
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT, rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 1 ) # don't ever hammer a domain
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 2 * GB ) # don't go nuts on a site in a single day
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN ), rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 64 * MB ) # don't sync a giant db in one day
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_HYDRUS ), rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    # most gallery downloaders need two rqs per file (page and file), remember
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 200 ) # after that first sample of small files, take it easy
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOWNLOADER_PAGE ), rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    # most gallery downloaders need two rqs per file (page and file), remember
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 86400, 400 ) # catch up on a big sub in little chunks every day
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 256 * MB ) # catch up on a big sub in little chunks every day
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION ), rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 100 ) # after that first sample of small files, take it easy
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE ), rules )
-    
-    #
-    
-    rules = HydrusNetworking.BandwidthRules()
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60 * 7, 80 )
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 4, 1 )
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 2 * GB ) # keep this in there so subs can know better when to stop running (the files come from a subdomain, which causes a pain for bandwidth calcs)
-    
-    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 64 * MB ) # added as a compromise to try to reduce hydrus sankaku bandwidth usage until their new API and subscription model comes in
-    
-    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN, 'sankakucomplex.com' ), rules )
-    
-def SetDefaultDomainManagerData( domain_manager ):
-    
-    network_contexts_to_custom_header_dicts = {}
-    
-    #
-    
-    import ClientNetworkingContexts
-    import ClientNetworkingDomain
-    
-    custom_header_dict = {}
-    
-    custom_header_dict[ 'User-Agent' ] = ( 'Mozilla/5.0 (compatible; Hydrus Client)', ClientNetworkingDomain.VALID_APPROVED, 'This is the default User-Agent identifier for the client for all network connections.' )
-    
-    network_contexts_to_custom_header_dicts[ ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ] = custom_header_dict
-    
-    #
-    
-    custom_header_dict = {}
-    
-    custom_header_dict[ 'User-Agent' ] = ( 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0', ClientNetworkingDomain.VALID_UNKNOWN, 'Sankaku have unusual User-Agent rules on certain requests. Setting this User-Agent allows the sankaku downloader to work.' )
-    
-    network_context = ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN, 'sankakucomplex.com' )
-    
-    network_contexts_to_custom_header_dicts[ network_context ] = custom_header_dict
-    
-    #
-    
-    domain_manager.SetNetworkContextsToCustomHeaderDicts( network_contexts_to_custom_header_dicts )
-    
-    #
-    
-    domain_manager.SetURLMatches( GetDefaultURLMatches() )
-    
-    #
-    
-    domain_manager.SetParsers( GetDefaultParsers() )
-    
-    #
-    
-    domain_manager.TryToLinkURLMatchesAndParsers()
-    
 def GetClientDefaultOptions():
     
     options = {}
@@ -149,7 +25,6 @@ def GetClientDefaultOptions():
     options[ 'default_gui_session' ] = 'last session'
     options[ 'fetch_ac_results_automatically' ] = True
     options[ 'ac_timings' ] = ( 3, 500, 250 )
-    options[ 'thread_checker_timings' ] = ( 3, 1200 )
     options[ 'idle_period' ] = 60 * 30
     options[ 'idle_mouse_period' ] = 60 * 10
     options[ 'idle_cpu_max' ] = 50
@@ -232,6 +107,31 @@ def GetClientDefaultOptions():
     return options
     
+def GetDefaultCheckerOptions( name ):
+    
+    import ClientImportOptions
+    
+    if name == 'thread':
+        
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 4, never_faster_than = 300, never_slower_than = 86400, death_file_velocity = ( 1, 3 * 86400 ) )
+        
+    elif name == 'slow thread':
+        
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 4 * 3600, never_slower_than = 7 * 86400, death_file_velocity = ( 1, 30 * 86400 ) )
+        
+    elif name == 'artist subscription':
+        
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 4, never_faster_than = 86400, never_slower_than = 90 * 86400, death_file_velocity = ( 1, 180 * 86400 ) )
+        
+    elif name == 'fast tag subscription':
+        
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 10, never_faster_than = 43200, never_slower_than = 30 * 86400, death_file_velocity = ( 1, 90 * 86400 ) )
+        
+    elif name == 'slow tag subscription':
+        
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 7 * 86400, never_slower_than = 180 * 86400, death_file_velocity = ( 1, 360 * 86400 ) )
+        
+    
 def GetDefaultHentaiFoundryInfo():
     
     info = {}
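Each CheckerOptions preset above bundles an intended files-per-check with hard bounds on the check period. One plausible reading of how such timings turn an observed file velocity into the next check period; this derivation and the function name are an illustration, not the client's actual algorithm:

```python
def next_check_period( files_in_recent_period, recent_period, intended_files_per_check,
                       never_faster_than, never_slower_than ):
    
    # no recent files: fall back to the slowest allowed check
    if files_in_recent_period == 0:
        
        return never_slower_than
        
    
    # aim the next check at roughly intended_files_per_check new files
    seconds_per_file = recent_period / files_in_recent_period
    period = seconds_per_file * intended_files_per_check
    
    # clamp to the preset's hard bounds
    return max( never_faster_than, min( never_slower_than, period ) )
```

With the 'thread' preset (4 files per check, bounds 300s to 86400s), a thread producing 4 files an hour would be checked hourly, while a torrent of files would pin the period at the 300s floor.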
@@ -354,7 +254,7 @@ def GetDefaultNamespacesAndSearchValue( gallery_identifier ):
         namespaces = [ '' ]
         search_value = 'username'
         
-    elif site_type == HC.SITE_TYPE_THREAD_WATCHER:
+    elif site_type == HC.SITE_TYPE_WATCHER:
         
         namespaces = [ 'filename' ]
         search_value = 'thread url'
@@ -828,3 +728,127 @@ def GetDefaultObjectsFromPNGs( dir_path, allowed_object_types ):
     return default_objects
     
+def SetDefaultBandwidthManagerRules( bandwidth_manager ):
+    
+    import ClientNetworkingContexts
+    
+    KB = 1024
+    MB = 1024 ** 2
+    GB = 1024 ** 3
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 5 ) # stop accidental spam
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60, 120 ) # smooth out heavy usage/bugspam. db and gui prob need a break
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 10 * GB ) # check your inbox lad
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT, rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 1 ) # don't ever hammer a domain
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 2 * GB ) # don't go nuts on a site in a single day
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN ), rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 64 * MB ) # don't sync a giant db in one day
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_HYDRUS ), rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    # most gallery downloaders need two rqs per file (page and file), remember
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 200 ) # after that first sample of small files, take it easy
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOWNLOADER_PAGE ), rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    # most gallery downloaders need two rqs per file (page and file), remember
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 86400, 400 ) # catch up on a big sub in little chunks every day
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 256 * MB ) # catch up on a big sub in little chunks every day
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION ), rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 100 ) # after that first sample of small files, take it easy
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_WATCHER_PAGE ), rules )
+    
+    #
+    
+    rules = HydrusNetworking.BandwidthRules()
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60 * 7, 80 )
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 4, 1 )
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 2 * GB ) # keep this in there so subs can know better when to stop running (the files come from a subdomain, which causes a pain for bandwidth calcs)
+    
+    rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 64 * MB ) # added as a compromise to try to reduce hydrus sankaku bandwidth usage until their new API and subscription model comes in
+    
+    bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN, 'sankakucomplex.com' ), rules )
+    
+def SetDefaultDomainManagerData( domain_manager ):
+    
+    network_contexts_to_custom_header_dicts = {}
+    
+    #
+    
+    import ClientNetworkingContexts
+    import ClientNetworkingDomain
+    
+    custom_header_dict = {}
+    
+    custom_header_dict[ 'User-Agent' ] = ( 'Mozilla/5.0 (compatible; Hydrus Client)', ClientNetworkingDomain.VALID_APPROVED, 'This is the default User-Agent identifier for the client for all network connections.' )
+    
+    network_contexts_to_custom_header_dicts[ ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ] = custom_header_dict
+    
+    #
+    
+    custom_header_dict = {}
+    
+    custom_header_dict[ 'User-Agent' ] = ( 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0', ClientNetworkingDomain.VALID_UNKNOWN, 'Sankaku have unusual User-Agent rules on certain requests. Setting this User-Agent allows the sankaku downloader to work.' )
+    
+    network_context = ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN, 'sankakucomplex.com' )
+    
+    network_contexts_to_custom_header_dicts[ network_context ] = custom_header_dict
+    
+    #
+    
+    domain_manager.SetNetworkContextsToCustomHeaderDicts( network_contexts_to_custom_header_dicts )
+    
+    #
+    
+    domain_manager.SetURLMatches( GetDefaultURLMatches() )
+    
+    #
+    
+    domain_manager.SetParsers( GetDefaultParsers() )
+    
+    #
+    
+    domain_manager.TryToLinkURLMatchesAndParsers()
+    
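The rule semantics assumed throughout the defaults above: each AddRule( type, window, max ) caps the given bandwidth type over a trailing window of seconds, and a job may start only while every rule has headroom. A minimal tracker implementing that reading; the class and the ReportUsage/CanStart names are hypothetical stand-ins, not the real HydrusNetworking.BandwidthRules API:

```python
import time

BANDWIDTH_TYPE_DATA = 0
BANDWIDTH_TYPE_REQUESTS = 1

class SimpleBandwidthRules:
    
    def __init__( self ):
        
        self._rules = [] # ( bandwidth_type, window_seconds, max_allowed )
        self._usage = [] # ( timestamp, bandwidth_type, amount )
        
    
    def AddRule( self, bandwidth_type, window, max_allowed ):
        
        self._rules.append( ( bandwidth_type, window, max_allowed ) )
        
    
    def ReportUsage( self, bandwidth_type, amount, now = None ):
        
        now = time.time() if now is None else now
        
        self._usage.append( ( now, bandwidth_type, amount ) )
        
    
    def CanStart( self, now = None ):
        
        now = time.time() if now is None else now
        
        for ( bandwidth_type, window, max_allowed ) in self._rules:
            
            # sum usage of this type inside the trailing window
            used = sum( amount for ( t, bt, amount ) in self._usage if bt == bandwidth_type and now - t < window )
            
            if used >= max_allowed:
                
                return False
                
            
        
        return True
        
    
```

Under this reading, the domain default of ( REQUESTS, 1, 1 ) means 'at most one request in any one-second window', i.e. never hammer a domain.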
@@ -103,94 +103,6 @@ def GetGallery( gallery_identifier ):
         return GalleryTumblr()
         
     
-_8CHAN_BOARDS_TO_MEDIA_HOSTS = {}
-
-def GetImageboardFileURL( thread_url, filename, ext ):
-    
-    ( thread_url, host, board, thread_id ) = ParseImageboardThreadURL( thread_url )
-    
-    is_4chan = '4chan.org' in host
-    is_8chan = '8ch.net' in host
-    
-    if is_4chan:
-        
-        return 'https://i.4cdn.org/' + board + '/' + filename + ext
-        
-    elif is_8chan:
-        
-        if len( filename ) == 64: # new sha256 filename
-            
-            return 'https://media.8ch.net/file_store/' + filename + ext
-            
-        else:
-            
-            if board not in _8CHAN_BOARDS_TO_MEDIA_HOSTS:
-                
-                try:
-                    
-                    html_url = 'https://8ch.net/' + board + '/res/' + thread_id + '.html'
-                    
-                    network_job = ClientNetworkingJobs.NetworkJob( 'GET', html_url )
-                    
-                    network_job.OverrideBandwidth()
-                    
-                    HG.client_controller.network_engine.AddJob( network_job )
-                    
-                    network_job.WaitUntilDone()
-                    
-                    thread_html = network_job.GetContent()
-                    
-                    soup = ClientParsing.GetSoup( thread_html )
-                    
-                    file_infos = soup.find_all( 'p', class_ = "fileinfo" )
-                    
-                    example_file_url = file_infos[0].find( 'a' )[ 'href' ]
-                    
-                    parse_result = urlparse.urlparse( example_file_url )
-                    
-                    hostname = parse_result.hostname
-                    
-                    if hostname is None:
-                        
-                        hostname = '8ch.net'
-                        
-                    
-                    _8CHAN_BOARDS_TO_MEDIA_HOSTS[ board ] = hostname
-                    
-                except Exception as e:
-                    
-                    _8CHAN_BOARDS_TO_MEDIA_HOSTS[ board ] = 'media.8ch.net'
-                    
-                
-            
-            media_host = _8CHAN_BOARDS_TO_MEDIA_HOSTS[ board ]
-            
-            return 'https://' + media_host + '/' + board + '/src/' + filename + ext
-            
-        
-    
-def GetImageboardThreadJSONURL( thread_url ):
-    
-    ( thread_url, host, board, thread_id ) = ParseImageboardThreadURL( thread_url )
-    
-    is_4chan = '4chan.org' in host
-    is_8chan = '8ch.net' in host
-    
-    # 4chan
-    # https://a.4cdn.org/asp/thread/382059.json
-    
-    # 8chan
-    # https://8ch.net/v/res/406061.json
-    
-    if is_4chan:
-        
-        return 'https://a.4cdn.org/' + board + '/thread/' + thread_id + '.json'
-        
-    elif is_8chan:
-        
-        return 'https://8ch.net/' + board + '/res/' + thread_id + '.json'
-        
-    
 def GetYoutubeFormats( youtube_url ):
     
     try:
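The deleted GetImageboardThreadJSONURL records the legacy JSON API URL scheme this commit retires along with the rest of the old static thread-watcher code. That mapping, restated as a tiny standalone function for reference (the host/board/thread_id split is assumed to come from a ParseImageboardThreadURL-style helper):

```python
def imageboard_thread_json_url( host, board, thread_id ):
    
    # 4chan: https://a.4cdn.org/asp/thread/382059.json
    if '4chan.org' in host:
        
        return 'https://a.4cdn.org/' + board + '/thread/' + thread_id + '.json'
        
    # 8chan: https://8ch.net/v/res/406061.json
    elif '8ch.net' in host:
        
        return 'https://8ch.net/' + board + '/res/' + thread_id + '.json'
        
    
    raise ValueError( 'unsupported imageboard host: ' + host )
```

In the new system this hardcoded mapping is replaced by url classes and parsers managed by the domain manager.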
@ -250,185 +162,6 @@ def Parse4chanPostScreen( html ):
|
|||
except: return ( 'error', 'unknown error' )
|
||||
|
||||
|
||||
def ParseImageboardFileURLFromPost( thread_url, post, source_timestamp ):

    url_filename = str( post[ 'tim' ] )
    url_ext = post[ 'ext' ]

    file_original_filename = post[ 'filename' ]
    file_url = GetImageboardFileURL( thread_url, url_filename, url_ext )

    if 'md5' in post:

        file_md5_base64 = post[ 'md5' ]

    else:

        file_md5_base64 = None

    return ( file_url, file_md5_base64, file_original_filename, source_timestamp )
def ParseImageboardFileURLsFromJSON( thread_url, raw_json ):

    json_dict = json.loads( raw_json )

    posts_list = json_dict[ 'posts' ]

    file_infos = []

    for post in posts_list:

        if 'filename' not in post:

            continue

        if 'time' in post:

            source_timestamp = post[ 'time' ]

        else:

            source_timestamp = HydrusData.GetNow()

        file_infos.append( ParseImageboardFileURLFromPost( thread_url, post, source_timestamp ) )

        if 'extra_files' in post:

            for extra_file in post[ 'extra_files' ]:

                if 'filename' not in extra_file:

                    continue

                file_infos.append( ParseImageboardFileURLFromPost( thread_url, extra_file, source_timestamp ) )

    return file_infos
def ParseImageboardThreadSubject( raw_json ):

    json_dict = json.loads( raw_json )

    posts_list = json_dict[ 'posts' ]

    if len( posts_list ) > 0:

        top_post = posts_list[0]

        if 'sub' in top_post:

            return top_post[ 'sub' ]

    return ''
def IsImageboardThread( url ):

    if '4chan.org' in url:

        if '/thread/' in url:

            return True

    if '8ch.net' in url:

        if '/res/' in url:

            return True

    return False
def ParseImageboardThreadURL( thread_url ):

    try:

        if '#' in thread_url:

            ( thread_url, post_anchor_gumpf ) = thread_url.split( '#', 1 )

        parse_result = urlparse.urlparse( thread_url )

        host = parse_result.hostname

        request = parse_result.path

        if host is None or request is None:

            raise Exception()

    except:

        raise Exception( 'Could not understand that url!' )

    is_4chan = '4chan.org' in host or 'a.4cdn.org' in host
    is_8chan = '8ch.net' in host

    if not ( is_4chan or is_8chan ):

        raise Exception( 'This only works for 4chan and 8chan right now!' )

    try:

        # 4chan
        # /asp/thread/382059/post-your-favourite-martial-arts-video-if-martin

        # 8chan
        # /v/res/406061.html

        if is_4chan:

            ( board, rest_of_request ) = request[1:].split( '/thread/', 1 )

            if '/' in rest_of_request:

                ( thread_id, gumpf ) = rest_of_request.split( '/' )

            else:

                thread_id = rest_of_request

        elif is_8chan:

            ( board, rest_of_request ) = request[1:].split( '/res/', 1 )

            thread_id = rest_of_request[:-5]

    except Exception as e:

        raise Exception( 'Could not understand that thread url! Either the board or the thread id components were malformed or missing.' )

    return ( thread_url, host, board, thread_id )
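The splitting rules in ParseImageboardThreadURL can be exercised in isolation. This is a hedged, standalone re-implementation of the same logic (Python 3's urllib.parse stands in for the Python 2 urlparse module used in the codebase; the function name is illustrative):

```python
from urllib.parse import urlparse

def parse_thread_url( thread_url ):

    # strip any post anchor, e.g. .../thread/12345#p67890
    thread_url = thread_url.split( '#', 1 )[0]

    parse_result = urlparse( thread_url )

    host = parse_result.hostname
    path = parse_result.path

    if host is None or path == '':

        raise ValueError( 'Could not understand that url!' )

    if '4chan.org' in host or 'a.4cdn.org' in host:

        # /asp/thread/382059/post-title -> board 'asp', thread id '382059'
        ( board, rest ) = path[1:].split( '/thread/', 1 )

        thread_id = rest.split( '/', 1 )[0]

    elif '8ch.net' in host:

        # /v/res/406061.html -> board 'v', thread id '406061'
        ( board, rest ) = path[1:].split( '/res/', 1 )

        thread_id = rest[ : -len( '.html' ) ]

    else:

        raise ValueError( 'This only works for 4chan and 8chan right now!' )

    return ( thread_url, host, board, thread_id )
```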
def ParsePageForURLs( html, starting_url ):

    soup = ClientParsing.GetSoup( html )

    all_links = soup.find_all( 'a' )

    links_with_images = [ link for link in all_links if len( link.find_all( 'img' ) ) > 0 ]

    urls = [ urlparse.urljoin( starting_url, link[ 'href' ] ) for link in links_with_images ]

    return urls
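ParsePageForURLs keeps only the links that wrap a thumbnail image, then resolves each href against the page URL. The same filter can be sketched with just the standard library, avoiding the BeautifulSoup dependency (the class and function names here are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageLinkParser( HTMLParser ):

    # collects the hrefs of <a> tags that contain at least one <img>

    def __init__( self ):

        HTMLParser.__init__( self )

        self._current_href = None
        self._current_has_img = False

        self.hrefs = []

    def handle_starttag( self, tag, attrs ):

        if tag == 'a':

            self._current_href = dict( attrs ).get( 'href' )
            self._current_has_img = False

        elif tag == 'img' and self._current_href is not None:

            self._current_has_img = True

    def handle_endtag( self, tag ):

        if tag == 'a':

            if self._current_href is not None and self._current_has_img:

                self.hrefs.append( self._current_href )

            self._current_href = None

def parse_page_for_urls( html, starting_url ):

    parser = ImageLinkParser()

    parser.feed( html )

    # resolve relative hrefs against the page we fetched
    return [ urljoin( starting_url, href ) for href in parser.hrefs ]
```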
class GalleryIdentifier( HydrusSerialisable.SerialisableBase ):

    SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_GALLERY_IDENTIFIER

@@ -606,10 +339,14 @@ class Gallery( object ):

        self._network_job_factory = network_job_factory

gallery_advance_nums = {}

class GalleryBooru( Gallery ):

    def __init__( self, booru_name ):

        self._booru_name = booru_name

        try:

            self._booru = HG.client_controller.Read( 'remote_booru', booru_name )

@@ -619,8 +356,6 @@ class GalleryBooru( Gallery ):

            raise Exception( 'Attempted to find booru "' + booru_name + '", but it was missing from the database!' )

        self._gallery_advance_num = None

        ( self._search_url, self._advance_by_page_num, self._search_separator, self._thumb_classname ) = self._booru.GetGalleryParsingInfo()

        Gallery.__init__( self )

@@ -634,29 +369,30 @@ class GalleryBooru( Gallery ):
else:

if self._gallery_advance_num is None:
if page_index == 0:

if page_index == 0:

url_index = page_index

else:

self.GetPage( query, 0 )

if self._gallery_advance_num is None:

raise Exception( 'Unable to calculate the booru\'s gallery advance number.' )

else:

url_index = page_index * self._gallery_advance_num

url_index = 0

else:

url_index = page_index * self._gallery_advance_num
if self._booru_name not in gallery_advance_nums:

if page_index == 0:

url_index = page_index

else:

self.GetPage( query, 0 )

if self._booru_name not in gallery_advance_nums:

raise Exception( 'Unable to calculate the booru\'s gallery advance number.' )

url_index = page_index * gallery_advance_nums[ self._booru_name ]

@@ -739,15 +475,16 @@ class GalleryBooru( Gallery ):

if self._gallery_advance_num is None:
if len( urls ) == 0:

if len( urls ) == 0:
definitely_no_more_pages = True

if self._booru_name not in gallery_advance_nums:

if len( urls ) > 0:

definitely_no_more_pages = True

else:

self._gallery_advance_num = len( urls )
gallery_advance_nums[ self._booru_name ] = len( urls )
@@ -94,8 +94,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

        self._focus_holder.SetSize( ( 0, 0 ) )

        self._closed_pages = []
        self._closed_page_keys = set()
        self._deleted_page_keys = set()

        self._lock = threading.Lock()

        self._delayed_dialog_lock = threading.Lock()

@@ -139,6 +138,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

        self._controller.sub( self, 'RenamePage', 'rename_page' )
        self._controller.sub( self, 'SetDBLockedStatus', 'db_locked_status' )
        self._controller.sub( self, 'SetMediaFocus', 'set_media_focus' )
        self._controller.sub( self, 'SetStatusBarDirty', 'set_status_bar_dirty' )
        self._controller.sub( self, 'SetTitle', 'main_gui_title' )
        self._controller.sub( self, 'SyncToTagArchive', 'sync_to_tag_archive' )

@@ -960,14 +960,6 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

    def _DestroyPages( self, pages ):

        with self._lock:

            for page in pages:

                self._deleted_page_keys.add( page.GetPageKey() )

        for page in pages:

            page.CleanBeforeDestroy()

@@ -1183,10 +1175,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
    def undo():

        with self._lock:

            have_closed_pages = len( self._closed_pages ) > 0

        have_closed_pages = len( self._closed_pages ) > 0

        undo_manager = self._controller.GetManager( 'undo' )

@@ -1220,14 +1209,11 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

        args = []

        with self._lock:
            for ( i, ( time_closed, page ) ) in enumerate( self._closed_pages ):

            for ( i, ( time_closed, page ) ) in enumerate( self._closed_pages ):

                name = page.GetDisplayName()

                args.append( ( i, name + ' - ' + page.GetPrettyStatus() ) )

        name = page.GetDisplayName()

        args.append( ( i, name + ' - ' + page.GetPrettyStatus() ) )

        args.reverse() # so that recently closed are at the top

@@ -1386,7 +1372,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
        download_menu = wx.Menu()

        ClientGUIMenus.AppendMenuItem( self, download_menu, 'url download', 'Open a new tab to download some separate urls.', self.ProcessApplicationCommand, ClientData.ApplicationCommand( CC.APPLICATION_COMMAND_TYPE_SIMPLE, 'new_url_downloader_page' ) )
        ClientGUIMenus.AppendMenuItem( self, download_menu, 'thread watcher', 'Open a new tab to watch a thread.', self.ProcessApplicationCommand, ClientData.ApplicationCommand( CC.APPLICATION_COMMAND_TYPE_SIMPLE, 'new_watcher_downloader_page' ) )
        ClientGUIMenus.AppendMenuItem( self, download_menu, 'watcher', 'Open a new tab to watch a thread or other single changing location.', self.ProcessApplicationCommand, ClientData.ApplicationCommand( CC.APPLICATION_COMMAND_TYPE_SIMPLE, 'new_watcher_downloader_page' ) )
        ClientGUIMenus.AppendMenuItem( self, download_menu, 'simple downloader', 'Open a new tab to download files from generic galleries or threads.', self.ProcessApplicationCommand, ClientData.ApplicationCommand( CC.APPLICATION_COMMAND_TYPE_SIMPLE, 'new_simple_downloader_page' ) )

        gallery_menu = wx.Menu()

@@ -2299,7 +2285,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

        self._controller.pub( 'wake_daemons' )
        self._controller.gui.SetStatusBarDirty()
        self.SetStatusBarDirty()
        self._controller.pub( 'refresh_page_name' )
        self._controller.pub( 'notify_new_colourset' )
        self._controller.pub( 'notify_new_favourite_tags' )

@@ -2876,7 +2862,10 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        password = dlg.GetValue()

        if password == '': password = None
        if password == '':

            password = None

        self._controller.Write( 'set_password', password )

@@ -3128,7 +3117,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        HG.force_idle_mode = not HG.force_idle_mode

        self._controller.pub( 'wake_daemons' )
        self._controller.gui.SetStatusBarDirty()
        self.SetStatusBarDirty()

        elif name == 'no_page_limit_mode':

@@ -3148,12 +3137,9 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
        closed_page_index = 0

        with self._lock:

            ( time_closed, page ) = self._closed_pages.pop( closed_page_index )

            self._closed_page_keys.discard( page.GetPageKey() )

        ( time_closed, page ) = self._closed_pages.pop( closed_page_index )

        self._controller.UnclosePageKeys( page.GetPageKeys() )

        self._controller.pub( 'notify_page_unclosed', page )

@@ -3491,20 +3477,11 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

    def CurrentlyBusy( self ):

        return False

    def DeleteAllClosedPages( self ):

        with self._lock:

            deletee_pages = [ page for ( time_closed, page ) in self._closed_pages ]

            self._closed_pages = []
            self._closed_page_keys = set()

        deletee_pages = [ page for ( time_closed, page ) in self._closed_pages ]

        self._closed_pages = []

        if len( deletee_pages ) > 0:

@@ -3524,32 +3501,27 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        timeout = 60 * 60

        with self._lock:
            deletee_pages = []

            old_closed_pages = self._closed_pages

            self._closed_pages = []

            for ( time_closed, page ) in old_closed_pages:

        deletee_pages = []

        old_closed_pages = self._closed_pages

        self._closed_pages = []
        self._closed_page_keys = set()

        for ( time_closed, page ) in old_closed_pages:
            if time_closed + timeout < now:

            if time_closed + timeout < now:

                deletee_pages.append( page )

            else:

                self._closed_pages.append( ( time_closed, page ) )
                self._closed_page_keys.add( page.GetPageKey() )

            deletee_pages.append( page )

        else:

            self._closed_pages.append( ( time_closed, page ) )

        if len( old_closed_pages ) != len( self._closed_pages ):

            self._controller.pub( 'notify_new_undo' )

        if len( old_closed_pages ) != len( self._closed_pages ):

            self._controller.pub( 'notify_new_undo' )

        self._DestroyPages( deletee_pages )

@@ -4000,7 +3972,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
        else:

            self._notebook.NewPageImportThreadWatcher( url, on_deepest_notebook = True )
            self._notebook.NewPageImportWatcher( url, on_deepest_notebook = True )

@@ -4046,11 +4018,9 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        close_time = HydrusData.GetNow()

        with self._lock:

            self._closed_pages.append( ( close_time, page ) )
            self._closed_page_keys.add( page.GetPageKey() )

        self._closed_pages.append( ( close_time, page ) )

        self._controller.ClosePageKeys( page.GetPageKeys() )

        if self._notebook.GetNumPages() == 0:

@@ -4137,24 +4107,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        self._menu_updater.Update()

    def PageCompletelyDestroyed( self, page_key ):

        with self._lock:

            return page_key in self._deleted_page_keys

    def PageClosedButNotDestroyed( self, page_key ):

        with self._lock:

            return page_key in self._closed_page_keys

        return False

    def PresentImportedFilesToPage( self, hashes, page_name ):

        tlp = ClientGUICommon.GetTLP( self )

@@ -4209,7 +4161,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        elif action == 'new_watcher_downloader_page':

            self._notebook.NewPageImportThreadWatcher( on_deepest_notebook = True )
            self._notebook.NewPageImportWatcher( on_deepest_notebook = True )

        elif action == 'close_page':

@@ -4391,11 +4343,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
    def SetStatusBarDirty( self ):

        if not self:

            return

        self._statusbar_thread_updater.Update()

@@ -4509,7 +4456,7 @@ class FrameSplash( wx.Frame ):

        self._controller = controller

        style = wx.FRAME_NO_TASKBAR
        style = wx.CAPTION

        wx.Frame.__init__( self, None, style = style, title = 'hydrus client' )

@@ -4519,7 +4466,7 @@ class FrameSplash( wx.Frame ):

        self._bmp = wx.Bitmap( self.WIDTH, self.HEIGHT, 24 )

        self.SetSize( ( self.WIDTH, self.HEIGHT ) )
        self.SetClientSize( ( self.WIDTH, self.HEIGHT ) )

        self.Center()

@@ -100,13 +100,13 @@ class AutoCompleteDropdown( wx.Panel ):
            self._dropdown_hidden = True

            self._list_height = 250
            self._list_height_num_chars = 19

        else:

            self._dropdown_window = wx.Panel( self )

            self._list_height = 125
            self._list_height_num_chars = 12

        self._dropdown_notebook = wx.Notebook( self._dropdown_window )

@@ -194,6 +194,13 @@ class AutoCompleteDropdown( wx.Panel ):

    def _ClearInput( self ):

        self._text_ctrl.SetValue( '' )

        self._ScheduleListRefresh( 0.0 )

    def _DropdownHideShow( self ):

        if not self._float_mode:

@@ -895,13 +902,10 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):

    def _BroadcastChoices( self, predicates ):

        if self._text_ctrl.GetValue() != '':

            self._text_ctrl.SetValue( '' )

        HG.client_controller.pub( 'enter_predicates', self._page_key, predicates )

        self._ClearInput()

    def _BroadcastCurrentText( self ):

@@ -943,14 +947,14 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):

    def _InitFavouritesList( self ):

        favs_list = ClientGUIListBoxes.ListBoxTagsACRead( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, min_height = self._list_height )
        favs_list = ClientGUIListBoxes.ListBoxTagsACRead( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, height_num_chars = self._list_height_num_chars )

        return favs_list

    def _InitSearchResultsList( self ):

        return ClientGUIListBoxes.ListBoxTagsACRead( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, min_height = self._list_height )
        return ClientGUIListBoxes.ListBoxTagsACRead( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, height_num_chars = self._list_height_num_chars )

    def _ParseSearchText( self ):

@@ -1300,11 +1304,6 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):

    def _BroadcastChoices( self, predicates ):

        if self._text_ctrl.GetValue() != '':

            self._text_ctrl.SetValue( '' )

        tags = { predicate.GetValue() for predicate in predicates }

        if len( tags ) > 0:

@@ -1312,6 +1311,8 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):

            self._chosen_tag_callable( tags )

        self._ClearInput()

    def _ParseSearchText( self ):

@@ -1428,14 +1429,14 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):

    def _InitFavouritesList( self ):

        favs_list = ClientGUIListBoxes.ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, min_height = self._list_height )
        favs_list = ClientGUIListBoxes.ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, height_num_chars = self._list_height_num_chars )

        return favs_list

    def _InitSearchResultsList( self ):

        return ClientGUIListBoxes.ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, min_height = self._list_height )
        return ClientGUIListBoxes.ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, height_num_chars = self._list_height_num_chars )

    def _PutAtTopOfMatches( self, matches, predicate ):

@@ -1802,12 +1802,14 @@ class Canvas( wx.Window ):
        self._media_container.SetSize( new_size )

        if HC.PLATFORM_OSX and new_position == self._media_container.GetPosition():
        if new_position == self._media_container.GetPosition():

            self._media_container.Refresh()
            if HC.PLATFORM_OSX:

                self._media_container.Refresh()

        if new_position != self._media_container.GetPosition():
        else:

            self._media_container.SetPosition( new_position )

@@ -1343,7 +1343,7 @@ class DialogManageExportFoldersEdit( ClientGUIDialogs.Dialog ):

        self._query_box = ClientGUICommon.StaticBox( self, 'query to export' )

        self._page_key = HydrusData.GenerateKey()
        self._page_key = 'export folders placeholder'

        predicates = file_search_context.GetPredicates()

@@ -804,7 +804,7 @@ class ListBox( wx.ScrolledWindow ):

    TEXT_X_PADDING = 3

    def __init__( self, parent, min_height = 150 ):
    def __init__( self, parent, height_num_chars = 10 ):

        wx.ScrolledWindow.__init__( self, parent, style = wx.VSCROLL | wx.BORDER_DOUBLE )

@@ -832,7 +832,9 @@ class ListBox( wx.ScrolledWindow ):

        self.SetScrollRate( 0, self._text_y )

        self.SetMinSize( ( 50, min_height ) )
        ( min_width, min_height ) = ClientGUICommon.ConvertTextToPixels( self, ( 16, height_num_chars ) )

        self.SetMinClientSize( ( min_width, min_height ) )

        self.Bind( wx.EVT_PAINT, self.EventPaint )
        self.Bind( wx.EVT_SIZE, self.EventResize )

@@ -1909,7 +1911,7 @@ class ListBoxTagsActiveSearchPredicates( ListBoxTagsPredicates ):

        initial_predicates = []

        ListBoxTagsPredicates.__init__( self, parent, min_height = 100 )
        ListBoxTagsPredicates.__init__( self, parent, height_num_chars = 6 )

        self._page_key = page_key
        self._get_current_predicates_callable = self.GetPredicates

@@ -2543,7 +2545,7 @@ class ListBoxTagsSelection( ListBoxTags ):

    def __init__( self, parent, include_counts = True, collapse_siblings = False ):

        ListBoxTags.__init__( self, parent, min_height = 200 )
        ListBoxTags.__init__( self, parent, height_num_chars = 12 )

        self._sort = HC.options[ 'default_tag_sort' ]

@@ -644,7 +644,12 @@ class BetterListCtrl( wx.ListCtrl, ListCtrlAutoWidthMixin ):
        for ( column_index, value ) in enumerate( display_tuple ):

            self.SetItem( index, column_index, value )
            existing_value = self.GetItem( index, column_index )

            if existing_value != value:

                self.SetItem( index, column_index, value )

@@ -897,6 +902,8 @@ class BetterListCtrl( wx.ListCtrl, ListCtrlAutoWidthMixin ):

        datas = list( self._data_to_indices.keys() )

        sort_data_has_changed = False

        for data in datas:

            ( display_tuple, sort_tuple ) = self._data_to_tuples_func( data )

@@ -905,7 +912,20 @@ class BetterListCtrl( wx.ListCtrl, ListCtrlAutoWidthMixin ):
            index = self._data_to_indices[ data ]

            if data_info != self._indices_to_data_info[ index ]:
            existing_data_info = self._indices_to_data_info[ index ]

            if data_info != existing_data_info:

                if not sort_data_has_changed:

                    ( existing_data, existing_display_tuple, existing_sort_tuple ) = existing_data_info

                    if sort_tuple[ self._sort_column ] != existing_sort_tuple[ self._sort_column ]: # this does not govern secondary sorts, but let's not spam sorts m8

                        sort_data_has_changed = True

                self._indices_to_data_info[ index ] = data_info
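The change detection above can be reduced to a small pure sketch: a row update only flags a resort when the value in the currently sorted column changes, so batches of cosmetic updates do not trigger repeated sorts (all names here are illustrative, not from the codebase):

```python
def update_rows( rows, new_rows, sort_column ):

    # rows: index -> ( display_tuple, sort_tuple )
    # returns True if any row's value in the active sort column changed,
    # telling the caller to resort once after the whole batch

    sort_data_has_changed = False

    for ( index, ( display_tuple, sort_tuple ) ) in new_rows.items():

        existing = rows.get( index )

        if existing != ( display_tuple, sort_tuple ):

            if existing is not None and sort_tuple[ sort_column ] != existing[1][ sort_column ]:

                sort_data_has_changed = True

            rows[ index ] = ( display_tuple, sort_tuple )

    return sort_data_has_changed
```

This is how a watcher's row can flip from 'unknown subject' to a real subject and have the list resort exactly once.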
@@ -915,6 +935,8 @@ class BetterListCtrl( wx.ListCtrl, ListCtrlAutoWidthMixin ):

        wx.QueueEvent( self.GetEventHandler(), ListCtrlEvent( -1 ) )

        return sort_data_has_changed

class BetterListCtrlPanel( wx.Panel ):

File diff suppressed because it is too large

@@ -1683,7 +1683,7 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledWindow ):

    def ClearPageKey( self ):

        self._page_key = HydrusData.GenerateKey()
        self._page_key = 'dead media panel page key'

    def Collect( self, page_key, collect_by = -1 ):

@@ -4110,6 +4110,9 @@ class Thumbnail( Selectable ):

        text_colour_with_alpha = upper_tag_summary_generator.GetTextColour()

        # protip, this renders unicode characters (such as \U0001f50a) in the Supplementary Multilingual Plane incorrectly, wew
        # DeviceContext does render them ok somehow

        gc.SetFont( wx.SystemSettings.GetFont( wx.SYS_DEFAULT_GUI_FONT ), text_colour_with_alpha )

        background_colour_with_alpha = upper_tag_summary_generator.GetBackgroundColour()

@@ -127,9 +127,9 @@ class DialogPageChooser( ClientGUIDialogs.Dialog ):

            button.SetLabelText( 'multiple watcher' )

        elif entry_type == 'page_import_thread_watcher':
        elif entry_type == 'page_import_watcher':

            button.SetLabelText( 'thread watcher' )
            button.SetLabelText( 'watcher' )

        elif entry_type == 'page_import_urls':

@@ -214,9 +214,9 @@ class DialogPageChooser( ClientGUIDialogs.Dialog ):

            self._result = ( 'page', ClientGUIManagement.CreateManagementControllerImportMultipleWatcher() )

        elif entry_type == 'page_import_thread_watcher':
        elif entry_type == 'page_import_watcher':

            self._result = ( 'page', ClientGUIManagement.CreateManagementControllerImportThreadWatcher() )
            self._result = ( 'page', ClientGUIManagement.CreateManagementControllerImportWatcher() )

        elif entry_type == 'page_import_urls':

@@ -269,7 +269,7 @@ class DialogPageChooser( ClientGUIDialogs.Dialog ):

        elif menu_keyword == 'download':

            entries.append( ( 'page_import_urls', None ) )
            entries.append( ( 'page_import_thread_watcher', None ) )
            entries.append( ( 'page_import_watcher', None ) )
            entries.append( ( 'menu', 'gallery' ) )
            entries.append( ( 'page_import_simple_downloader', None ) )

@@ -404,10 +404,10 @@ class Page( wx.SplitterWindow ):
        wx.SplitterWindow.__init__( self, parent )

        self._page_key = HydrusData.GenerateKey()

        self._controller = controller

        self._page_key = self._controller.AcquirePageKey()

        self._management_controller = management_controller

        self._initial_hashes = initial_hashes

@@ -499,6 +499,8 @@ class Page( wx.SplitterWindow ):

        self._management_panel.CleanBeforeDestroy()

        self._controller.ReleasePageKey( self._page_key )

    def EventPreviewUnsplit( self, event ):

@@ -596,6 +598,11 @@ class Page( wx.SplitterWindow ):

        return self._page_key

    def GetPageKeys( self ):

        return { self._page_key }

    def GetPrettyStatus( self ):

        return self._pretty_status

@@ -858,6 +865,8 @@ class PagesNotebook( wx.Notebook ):

        self._controller = controller

        self._page_key = self._controller.AcquirePageKey()

        self._name = name

        self._next_new_page_index = None

@@ -866,8 +875,6 @@ class PagesNotebook( wx.Notebook ):

        self._closed_pages = []

        self._page_key = HydrusData.GenerateKey()

        self._last_last_session_hash = None

        self._controller.sub( self, 'RefreshPageName', 'refresh_page_name' )

@@ -1022,11 +1029,11 @@ class PagesNotebook( wx.Notebook ):

    def _GatherDeadThreadWatchers( self, insertion_page ):
    def _GatherDeadWatchers( self, insertion_page ):

        top_notebook = self._GetTopNotebook()

        gathered_pages = top_notebook.GetGatherPages( 'dead_thread_watchers' )
        gathered_pages = top_notebook.GetGatherPages( 'dead_watchers' )

        self._MovePages( gathered_pages, insertion_page )

@@ -1515,7 +1522,7 @@ class PagesNotebook( wx.Notebook ):

        submenu = wx.Menu()

        ClientGUIMenus.AppendMenuItem( self, submenu, 'dead thread watchers', 'Find all currently open dead thread watchers and move them to this page of pages.', self._GatherDeadThreadWatchers, page )
        ClientGUIMenus.AppendMenuItem( self, submenu, 'dead watchers', 'Find all currently open dead watchers and move them to this page of pages.', self._GatherDeadWatchers, page )

        ClientGUIMenus.AppendMenu( menu, submenu, 'gather on this page of pages' )

@@ -1664,6 +1671,8 @@ class PagesNotebook( wx.Notebook ):

        page.CleanBeforeDestroy()

        self._controller.ReleasePageKey( self._page_key )

    def CloseCurrentPage( self, polite = True ):

@@ -1892,13 +1901,13 @@ class PagesNotebook( wx.Notebook ):

    def GetGatherPages( self, gather_type ):

        if gather_type == 'dead_thread_watchers':
        if gather_type == 'dead_watchers':

            def test( page ):

                management_controller = page.GetManagementController()

                return management_controller.IsDeadThreadWatcher()
                return management_controller.IsDeadWatcher()

        else:

@@ -2022,6 +2031,18 @@ class PagesNotebook( wx.Notebook ):

        return self._page_key

    def GetPageKeys( self ):

        page_keys = { self._page_key }

        for page in self._GetPages():

            page_keys.update( page.GetPageKeys() )

        return page_keys

    def GetPages( self ):

        return self._GetPages()

@@ -2374,16 +2395,16 @@ class PagesNotebook( wx.Notebook ):
        return self.NewPage( management_controller, on_deepest_notebook = on_deepest_notebook )

    def NewPageImportMultipleWatcher( self, thread_url = None, on_deepest_notebook = False ):
    def NewPageImportMultipleWatcher( self, url = None, on_deepest_notebook = False ):

        management_controller = ClientGUIManagement.CreateManagementControllerImportMultipleWatcher( thread_url )
        management_controller = ClientGUIManagement.CreateManagementControllerImportMultipleWatcher( url )

        return self.NewPage( management_controller, on_deepest_notebook = on_deepest_notebook )

    def NewPageImportThreadWatcher( self, thread_url = None, on_deepest_notebook = False ):
    def NewPageImportWatcher( self, url = None, on_deepest_notebook = False ):

        management_controller = ClientGUIManagement.CreateManagementControllerImportThreadWatcher( thread_url )
        management_controller = ClientGUIManagement.CreateManagementControllerImportWatcher( url )

        return self.NewPage( management_controller, on_deepest_notebook = on_deepest_notebook )

@@ -1424,7 +1424,7 @@ class EditContentParserPanel( ClientGUIScrolledPanels.EditPanel ):

        self._content_type.Append( 'tags', HC.CONTENT_TYPE_MAPPINGS )
        self._content_type.Append( 'file hash', HC.CONTENT_TYPE_HASH )
        self._content_type.Append( 'timestamp', HC.CONTENT_TYPE_TIMESTAMP )
        self._content_type.Append( 'thread watcher page title', HC.CONTENT_TYPE_TITLE )
        self._content_type.Append( 'watcher page title', HC.CONTENT_TYPE_TITLE )
        self._content_type.Append( 'veto', HC.CONTENT_TYPE_VETO )

        self._content_type.Bind( wx.EVT_CHOICE, self.EventContentTypeChange )

@@ -1465,7 +1465,7 @@ class EditNetworkContextPanel( ClientGUIScrolledPanels.EditPanel ):

        if limited_types is None:

            limited_types = ( CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_SUBSCRIPTION, CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE )
            limited_types = ( CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_SUBSCRIPTION, CC.NETWORK_CONTEXT_WATCHER_PAGE )

        self._context_type = ClientGUICommon.BetterChoice( self )

@@ -1558,7 +1558,7 @@ class EditNetworkContextPanel( ClientGUIScrolledPanels.EditPanel ):

        self._context_data_services.Disable()
        self._context_data_subscriptions.Disable()

        if context_type in ( CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE ):
        if context_type in ( CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_WATCHER_PAGE ):

            self._context_data_none.SetValue( True )

@@ -2209,6 +2209,9 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):

        self._site_type.Bind( wx.EVT_CHOICE, self.EventSiteChanged )

        self._booru_selector = wx.ListBox( self._query_panel )
        self._booru_selector.Bind( wx.EVT_LISTBOX, self.EventBooruSelected )

        queries_panel = ClientGUIListCtrl.BetterListCtrlPanel( self._query_panel )

        self._queries = ClientGUIListCtrl.BetterListCtrl( queries_panel, 'subscription_queries', 20, 20, [ ( 'query', 20 ), ( 'paused', 8 ), ( 'status', 8 ), ( 'last new file time', 20 ), ( 'last check time', 20 ), ( 'next check time', 20 ), ( 'file velocity', 20 ), ( 'recent delays', 20 ), ( 'urls', 8 ), ( 'file summary', -1 ) ], self._ConvertQueryToListCtrlTuples, delete_key_callback = self._DeleteQuery, activation_callback = self._EditQuery )

@@ -2225,9 +2228,6 @@
|
|||
queries_panel.AddButton( 'check now', self._CheckNow, enabled_check_func = self._ListCtrlCanCheckNow )
|
||||
queries_panel.AddButton( 'reset cache', self._ResetCache, enabled_check_func = self._ListCtrlCanResetCache )
|
||||
|
||||
self._booru_selector = wx.ListBox( self._query_panel )
|
||||
self._booru_selector.Bind( wx.EVT_LISTBOX, self.EventBooruSelected )
|
||||
|
||||
self._checker_options_button = ClientGUICommon.BetterButton( self._query_panel, 'edit check timings', self._EditCheckerOptions )
|
||||
|
||||
#
|
||||
|
@ -2726,7 +2726,14 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
|
||||
boorus = HG.client_controller.Read( 'remote_boorus' )
|
||||
|
||||
for ( name, booru ) in boorus.items(): self._booru_selector.Append( name, booru )
|
||||
names_and_boorus = list( boorus.items() )
|
||||
|
||||
names_and_boorus.sort()
|
||||
|
||||
for ( name, booru ) in names_and_boorus:
|
||||
|
||||
self._booru_selector.Append( name, booru )
|
||||
|
||||
|
||||
self._booru_selector.Select( 0 )
|
||||
|
||||
|
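The booru-selector change above replaces a one-line `for ( name, booru ) in boorus.items()` loop with an explicit sort-then-append, so the listbox shows names alphabetically. The pattern can be sketched independently of wx; `sort_names_and_data` is an illustrative name, not a hydrus function, and sorting on the name alone avoids ever comparing the attached data objects.

```python
def sort_names_and_data(mapping):
    # Turn a { name: data } dict into (name, data) pairs ordered by name,
    # matching the alphabetised booru selector above. Keying the sort on the
    # name means the data objects never need to be comparable themselves.
    names_and_data = list(mapping.items())
    names_and_data.sort(key=lambda pair: pair[0])
    return names_and_data
```

Each `(name, data)` pair can then be appended to the listbox in order, as the diff does with `self._booru_selector.Append( name, booru )`.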
@ -2987,6 +2994,12 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
subscriptions_panel.AddButton( 'retry failures', self.RetryFailures, enabled_check_func = self._CanRetryFailures )
|
||||
subscriptions_panel.AddButton( 'scrub delays', self.ScrubDelays, enabled_check_func = self._CanScrubDelays )
|
||||
subscriptions_panel.AddButton( 'check queries now', self.CheckNow, enabled_check_func = self._CanCheckNow )
|
||||
|
||||
if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
|
||||
|
||||
subscriptions_panel.AddButton( 'compact', self.Compact, enabled_check_func = self._CanCompact )
|
||||
|
||||
|
||||
subscriptions_panel.AddButton( 'reset', self.Reset, enabled_check_func = self._CanReset )
|
||||
|
||||
subscriptions_panel.NewButtonRow()
|
||||
|
@ -3026,6 +3039,13 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
return True in ( subscription.CanCheckNow() for subscription in subscriptions )
|
||||
|
||||
|
||||
def _CanCompact( self ):
|
||||
|
||||
subscriptions = self._subscriptions.GetData( only_selected = True )
|
||||
|
||||
return True in ( subscription.CanCompact() for subscription in subscriptions )
|
||||
|
||||
|
||||
def _CanMerge( self ):
|
||||
|
||||
subscriptions = self._subscriptions.GetData( only_selected = True )
|
||||
|
@ -3329,6 +3349,26 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
self._subscriptions.UpdateDatas( subscriptions )
|
||||
|
||||
|
||||
def Compact( self ):
|
||||
|
||||
message = 'WARNING! EXPERIMENTAL! This will tell all the selected subscriptions to remove any processed urls that are no longer worth keeping around. It helps keep subs clean and snappy on load/save.'
|
||||
|
||||
with ClientGUIDialogs.DialogYesNo( self, message ) as dlg:
|
||||
|
||||
if dlg.ShowModal() == wx.ID_YES:
|
||||
|
||||
subscriptions = self._subscriptions.GetData( only_selected = True )
|
||||
|
||||
for subscription in subscriptions:
|
||||
|
||||
subscription.Compact()
|
||||
|
||||
|
||||
self._subscriptions.UpdateDatas( subscriptions )
|
||||
|
||||
|
||||
|
||||
|
||||
def Delete( self ):
|
||||
|
||||
with ClientGUIDialogs.DialogYesNo( self, 'Remove all selected?' ) as dlg:
|
||||
|
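The `Compact` handler above only confirms and dispatches; the actual trimming lives in `Subscription.Compact`, which is not shown in this diff. As a hedged sketch of the idea the warning message describes (dropping processed urls that are no longer worth keeping so load/save stays snappy), one simple policy is to keep only the most recent entries; the function name and the `keep_most_recent` cutoff here are assumptions, not the client's real logic.

```python
def compact_url_cache(urls, keep_most_recent=250):
    # Hypothetical sketch: discard old processed urls, keeping only the most
    # recent entries. 'keep_most_recent' is an illustrative threshold; the
    # real Subscription.Compact logic is defined elsewhere in the client.
    if len(urls) <= keep_most_recent:
        return list(urls)
    return list(urls[-keep_most_recent:])
```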
@ -3777,7 +3817,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
|
||||
def SetCheckerOptions( self ):
|
||||
|
||||
checker_options = ClientImportOptions.CheckerOptions( intended_files_per_check = 5, never_faster_than = 86400, never_slower_than = 90 * 86400, death_file_velocity = ( 1, 90 * 86400 ) )
|
||||
checker_options = ClientDefaults.GetDefaultCheckerOptions( 'artist subscription' )
|
||||
|
||||
with ClientGUITopLevelWindows.DialogEdit( self, 'edit check timings' ) as dlg:
|
||||
|
||||
|
@ -4155,6 +4195,10 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
|
||||
def _InitialiseNamespaces( self, namespaces ):
|
||||
|
||||
namespaces = list( namespaces )
|
||||
|
||||
namespaces.sort()
|
||||
|
||||
services = HG.client_controller.services_manager.GetServices( HC.TAG_SERVICES, randomised = False )
|
||||
|
||||
if len( services ) > 0:
|
||||
|
|
|
@ -1712,26 +1712,40 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
#
|
||||
|
||||
thread_checker = ClientGUICommon.StaticBox( self, 'thread checker' )
|
||||
watchers = ClientGUICommon.StaticBox( self, 'watchers' )
|
||||
|
||||
self._permit_watchers_to_name_their_pages = wx.CheckBox( thread_checker )
|
||||
self._permit_watchers_to_name_their_pages = wx.CheckBox( watchers )
|
||||
|
||||
self._use_multiple_watcher_for_drag_and_drops = wx.CheckBox( thread_checker )
|
||||
self._use_multiple_watcher_for_drag_and_drops = wx.CheckBox( watchers )
|
||||
|
||||
self._thread_watcher_not_found_page_string = ClientGUICommon.NoneableTextCtrl( thread_checker, none_phrase = 'do not show' )
|
||||
self._thread_watcher_dead_page_string = ClientGUICommon.NoneableTextCtrl( thread_checker, none_phrase = 'do not show' )
|
||||
self._thread_watcher_paused_page_string = ClientGUICommon.NoneableTextCtrl( thread_checker, none_phrase = 'do not show' )
|
||||
self._thread_watcher_not_found_page_string = ClientGUICommon.NoneableTextCtrl( watchers, none_phrase = 'do not show' )
|
||||
self._thread_watcher_dead_page_string = ClientGUICommon.NoneableTextCtrl( watchers, none_phrase = 'do not show' )
|
||||
self._thread_watcher_paused_page_string = ClientGUICommon.NoneableTextCtrl( watchers, none_phrase = 'do not show' )
|
||||
|
||||
checker_options = self._new_options.GetDefaultThreadCheckerOptions()
|
||||
checker_options = self._new_options.GetDefaultWatcherCheckerOptions()
|
||||
|
||||
self._thread_checker_options = ClientGUITime.EditCheckerOptions( thread_checker, checker_options )
|
||||
self._watcher_checker_options = ClientGUITime.EditCheckerOptions( watchers, checker_options )
|
||||
|
||||
#
|
||||
|
||||
gallery_page_tt = 'Gallery page fetches are heavy requests with unusual fetch-time requirements. It is important they not wait too long, but it is also useful to throttle them:'
|
||||
gallery_page_tt += os.linesep * 2
|
||||
gallery_page_tt += '- So they do not compete with file downloads for bandwidth, leading to very unbalanced 20/4400-type queues.'
|
||||
gallery_page_tt += os.linesep
|
||||
gallery_page_tt += '- So you do not get 1000 items in your queue before realising you did not like that tag anyway.'
|
||||
gallery_page_tt += os.linesep
|
||||
gallery_page_tt += '- To give servers a break (some gallery pages can be CPU-expensive to generate).'
|
||||
gallery_page_tt += os.linesep * 2
|
||||
gallery_page_tt += 'After this fixed wait has occurred, the gallery download job will run like any other network job, except that it will ignore bandwidth limits after thirty seconds to guarantee throughput and to stay synced with the source.'
|
||||
gallery_page_tt += os.linesep * 2
|
||||
gallery_page_tt += 'If you do not understand this stuff, you can just leave it alone.'
|
||||
|
||||
self._gallery_page_wait_period_pages.SetValue( self._new_options.GetInteger( 'gallery_page_wait_period_pages' ) )
|
||||
self._gallery_page_wait_period_pages.SetToolTip( gallery_page_tt )
|
||||
self._gallery_file_limit.SetValue( HC.options[ 'gallery_file_limit' ] )
|
||||
|
||||
self._gallery_page_wait_period_subscriptions.SetValue( self._new_options.GetInteger( 'gallery_page_wait_period_subscriptions' ) )
|
||||
self._gallery_page_wait_period_subscriptions.SetToolTip( gallery_page_tt )
|
||||
self._max_simultaneous_subscriptions.SetValue( self._new_options.GetInteger( 'max_simultaneous_subscriptions' ) )
|
||||
self._process_subs_in_random_order.SetValue( self._new_options.GetBoolean( 'process_subs_in_random_order' ) )
|
||||
|
||||
|
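The gallery-page tooltip above describes a two-stage policy: a fixed per-page wait, then normal network-job behaviour, except that bandwidth limits are ignored once the job has waited thirty seconds. A minimal sketch of that decision, with illustrative parameter names (the real scheduling lives in the network engine):

```python
def gallery_job_may_run(seconds_since_last_page, fixed_wait,
                        seconds_waiting_on_bandwidth, bandwidth_ok):
    # Sketch of the tooltip's policy: honour the fixed per-page wait first,
    # then defer to bandwidth rules, which are overridden after the job has
    # waited thirty seconds to guarantee throughput. Illustrative only.
    if seconds_since_last_page < fixed_wait:
        return False
    if bandwidth_ok:
        return True
    return seconds_waiting_on_bandwidth >= 30
```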
@ -1747,7 +1761,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Fixed time (in seconds) to wait between gallery page fetches:', self._gallery_page_wait_period_pages ) )
|
||||
rows.append( ( 'Additional fixed time (in seconds) to wait between gallery page fetches:', self._gallery_page_wait_period_pages ) )
|
||||
rows.append( ( 'By default, stop searching once this many files are found:', self._gallery_file_limit ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( gallery_downloader, rows )
|
||||
|
@ -1758,7 +1772,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Fixed time (in seconds) to wait between gallery page fetches:', self._gallery_page_wait_period_subscriptions ) )
|
||||
rows.append( ( 'Additional fixed time (in seconds) to wait between gallery page fetches:', self._gallery_page_wait_period_subscriptions ) )
|
||||
rows.append( ( 'Maximum number of subscriptions that can sync simultaneously:', self._max_simultaneous_subscriptions ) )
|
||||
rows.append( ( 'Sync subscriptions in random order:', self._process_subs_in_random_order ) )
|
||||
|
||||
|
@ -1777,10 +1791,10 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
rows.append( ( 'Prepend dead thread checker page names with this:', self._thread_watcher_dead_page_string ) )
|
||||
rows.append( ( 'Prepend paused thread checker page names with this:', self._thread_watcher_paused_page_string ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( thread_checker, rows )
|
||||
gridbox = ClientGUICommon.WrapInGrid( watchers, rows )
|
||||
|
||||
thread_checker.Add( gridbox, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
|
||||
thread_checker.Add( self._thread_checker_options, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
watchers.Add( gridbox, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
|
||||
watchers.Add( self._watcher_checker_options, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
#
|
||||
|
||||
|
@ -1788,7 +1802,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
vbox.Add( gallery_downloader, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( subscriptions, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( thread_checker, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( watchers, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
self.SetSizer( vbox )
|
||||
|
||||
|
@ -1810,7 +1824,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
self._new_options.SetNoneableString( 'thread_watcher_dead_page_string', self._thread_watcher_dead_page_string.GetValue() )
|
||||
self._new_options.SetNoneableString( 'thread_watcher_paused_page_string', self._thread_watcher_paused_page_string.GetValue() )
|
||||
|
||||
self._new_options.SetDefaultThreadCheckerOptions( self._thread_checker_options.GetValue() )
|
||||
self._new_options.SetDefaultWatcherCheckerOptions( self._watcher_checker_options.GetValue() )
|
||||
|
||||
|
||||
|
||||
|
@ -1898,7 +1912,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
gallery_identifiers = []
|
||||
|
||||
for site_type in [ HC.SITE_TYPE_DEFAULT, HC.SITE_TYPE_DEVIANT_ART, HC.SITE_TYPE_HENTAI_FOUNDRY, HC.SITE_TYPE_NEWGROUNDS, HC.SITE_TYPE_PIXIV, HC.SITE_TYPE_TUMBLR, HC.SITE_TYPE_THREAD_WATCHER ]:
|
||||
for site_type in [ HC.SITE_TYPE_DEFAULT, HC.SITE_TYPE_DEVIANT_ART, HC.SITE_TYPE_HENTAI_FOUNDRY, HC.SITE_TYPE_NEWGROUNDS, HC.SITE_TYPE_PIXIV, HC.SITE_TYPE_TUMBLR, HC.SITE_TYPE_WATCHER ]:
|
||||
|
||||
gallery_identifiers.append( ClientDownloading.GalleryIdentifier( site_type ) )
|
||||
|
||||
|
@ -2460,6 +2474,13 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
temp_path_override = None
|
||||
|
||||
else:
|
||||
|
||||
if not HydrusPaths.DirectoryIsWritable( temp_path_override ):
|
||||
|
||||
raise HydrusExceptions.VetoException( 'The temporary path override either did not exist or was not writable! Please change it or fix its permissions!' )
|
||||
|
||||
|
||||
|
||||
self._new_options.SetNoneableString( 'temp_path_override', temp_path_override )
|
||||
|
||||
|
|
|
@ -19,20 +19,128 @@ class EditCheckerOptions( ClientGUIScrolledPanels.EditPanel ):
|
|||
|
||||
help_hbox = ClientGUICommon.WrapInText( help_button, self, 'help for this panel -->', wx.Colour( 0, 0, 255 ) )
|
||||
|
||||
import ClientDefaults
|
||||
|
||||
defaults_panel = ClientGUICommon.StaticBox( self, 'reasonable defaults' )
|
||||
|
||||
defaults_1 = ClientGUICommon.BetterButton( defaults_panel, 'thread', self.SetValue, ClientDefaults.GetDefaultCheckerOptions( 'thread' ) )
|
||||
defaults_2 = ClientGUICommon.BetterButton( defaults_panel, 'slow thread', self.SetValue, ClientDefaults.GetDefaultCheckerOptions( 'slow thread' ) )
|
||||
defaults_3 = ClientGUICommon.BetterButton( defaults_panel, 'faster tag subscription', self.SetValue, ClientDefaults.GetDefaultCheckerOptions( 'fast tag subscription' ) )
|
||||
defaults_4 = ClientGUICommon.BetterButton( defaults_panel, 'medium tag/artist subscription', self.SetValue, ClientDefaults.GetDefaultCheckerOptions( 'artist subscription' ) )
|
||||
defaults_5 = ClientGUICommon.BetterButton( defaults_panel, 'slower tag subscription', self.SetValue, ClientDefaults.GetDefaultCheckerOptions( 'slow tag subscription' ) )
|
||||
|
||||
#
|
||||
|
||||
# add statictext or whatever that will update on any updates above to say 'given velocity of blah and last check at blah, next check in 5 mins'
|
||||
# or indeed this could just take the seed cache and last check of the caller, if there is one
|
||||
# this would be more useful to the user, to know 'right, on ok, it'll refresh in 30 mins'
|
||||
# this is actually more complicated--it also needs last check time to calc a fresh file velocity based on new death_file_velocity
|
||||
|
||||
self._intended_files_per_check = wx.SpinCtrl( self, min = 1, max = 1000 )
|
||||
|
||||
self._never_faster_than = TimeDeltaCtrl( self, min = 30, days = True, hours = True, minutes = True, seconds = True )
|
||||
|
||||
self._never_slower_than = TimeDeltaCtrl( self, min = 600, days = True, hours = True, minutes = True )
|
||||
#
|
||||
|
||||
self._death_file_velocity = VelocityCtrl( self, min_time_delta = 60, days = True, hours = True, minutes = True, per_phrase = 'in', unit = 'files' )
|
||||
|
||||
self._flat_check_period_checkbox = wx.CheckBox( self )
|
||||
|
||||
#
|
||||
|
||||
self._reactive_check_panel = ClientGUICommon.StaticBox( self, 'reactive checking' )
|
||||
|
||||
self._intended_files_per_check = wx.SpinCtrl( self._reactive_check_panel, min = 1, max = 1000 )
|
||||
|
||||
self._never_faster_than = TimeDeltaCtrl( self._reactive_check_panel, min = 30, days = True, hours = True, minutes = True, seconds = True )
|
||||
|
||||
self._never_slower_than = TimeDeltaCtrl( self._reactive_check_panel, min = 600, days = True, hours = True, minutes = True )
|
||||
|
||||
#
|
||||
|
||||
self._static_check_panel = ClientGUICommon.StaticBox( self, 'static checking' )
|
||||
|
||||
self._flat_check_period = TimeDeltaCtrl( self._static_check_panel, min = 180, days = True, hours = True, minutes = True )
|
||||
|
||||
#
|
||||
|
||||
self.SetValue( checker_options )
|
||||
|
||||
#
|
||||
|
||||
defaults_panel.Add( defaults_1, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
defaults_panel.Add( defaults_2, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
defaults_panel.Add( defaults_3, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
defaults_panel.Add( defaults_4, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
defaults_panel.Add( defaults_5, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
#
|
||||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'intended new files per check: ', self._intended_files_per_check ) )
|
||||
rows.append( ( 'never check faster than once per: ', self._never_faster_than ) )
|
||||
rows.append( ( 'never check slower than once per: ', self._never_slower_than ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self._reactive_check_panel, rows )
|
||||
|
||||
self._reactive_check_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
#
|
||||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'check period: ', self._flat_check_period ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self._static_check_panel, rows )
|
||||
|
||||
self._static_check_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
#
|
||||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'stop checking if new files found falls below: ', self._death_file_velocity ) )
|
||||
rows.append( ( 'just check at a static, regular interval: ', self._flat_check_period_checkbox ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self, rows )
|
||||
|
||||
vbox = wx.BoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( help_hbox, CC.FLAGS_BUTTON_SIZER )
|
||||
vbox.Add( defaults_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._reactive_check_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( self._static_check_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
self.SetSizer( vbox )
|
||||
|
||||
#
|
||||
|
||||
self._flat_check_period_checkbox.Bind( wx.EVT_CHECKBOX, self.EventFlatPeriodCheck )
|
||||
|
||||
|
||||
def _UpdateEnabledControls( self ):
|
||||
|
||||
if self._flat_check_period_checkbox.GetValue() == True:
|
||||
|
||||
self._reactive_check_panel.Hide()
|
||||
self._static_check_panel.Show()
|
||||
|
||||
else:
|
||||
|
||||
self._reactive_check_panel.Show()
|
||||
self._static_check_panel.Hide()
|
||||
|
||||
|
||||
self.Layout()
|
||||
|
||||
ClientGUITopLevelWindows.PostSizeChangedEvent( self )
|
||||
|
||||
|
||||
def EventFlatPeriodCheck( self, event ):
|
||||
|
||||
self._UpdateEnabledControls()
|
||||
|
||||
|
||||
def SetValue( self, checker_options ):
|
||||
|
||||
( intended_files_per_check, never_faster_than, never_slower_than, death_file_velocity ) = checker_options.ToTuple()
|
||||
|
||||
self._intended_files_per_check.SetValue( intended_files_per_check )
|
||||
|
@ -40,45 +148,51 @@ class EditCheckerOptions( ClientGUIScrolledPanels.EditPanel ):
|
|||
self._never_slower_than.SetValue( never_slower_than )
|
||||
self._death_file_velocity.SetValue( death_file_velocity )
|
||||
|
||||
#
|
||||
self._flat_check_period.SetValue( never_faster_than )
|
||||
|
||||
rows = []
|
||||
self._flat_check_period_checkbox.SetValue( never_faster_than == never_slower_than )
|
||||
|
||||
rows.append( ( 'intended new files per check: ', self._intended_files_per_check ) )
|
||||
rows.append( ( 'stop checking if new files found falls below: ', self._death_file_velocity ) )
|
||||
rows.append( ( 'never check faster than once per: ', self._never_faster_than ) )
|
||||
rows.append( ( 'never check slower than once per: ', self._never_slower_than ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self, rows )
|
||||
|
||||
vbox = wx.BoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( help_hbox, CC.FLAGS_BUTTON_SIZER )
|
||||
vbox.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self.SetSizer( vbox )
|
||||
self._UpdateEnabledControls()
|
||||
|
||||
|
||||
def _ShowHelp( self ):
|
||||
|
||||
help = 'PROTIP: Do not change anything here unless you understand what it means!'
|
||||
help = 'The intention of this object is to govern how frequently the watcher or subscription checks for new files--and when it should stop completely.'
|
||||
help += os.linesep * 2
|
||||
help += 'After its initialisation check, the checker times future checks so that it will probably find the same specified number of new files each time. When files are being posted frequently, it will check more often. When things are slow, it will slow down as well.'
|
||||
help += 'PROTIP: Do not change anything here unless you understand what it means!'
|
||||
help += os.linesep * 2
|
||||
help += 'For instance, if it were set to try for 5 new files with every check, and at the last check it knew that the last 24 hours had produced 10 new files, it would check again 12 hours later. When that check was done and any new files found, it would then recalculate and repeat the process.'
|
||||
help += 'In general, checkers can and should be set up to check faster or slower based on how fast new files are coming in. This is polite to the server you are talking to and saves you CPU and bandwidth. The rate of new files is called the \'file velocity\' and is based on how many files appeared in a certain period before the _most recent check time_.'
|
||||
help += os.linesep * 2
|
||||
help += 'If the \'file velocity\' drops below a certain amount, the checker considers the source of files dead and will stop checking. If it falls into this state but you think there might have been a rush of new files, hit the \'check now\' button in an attempt to revive the checker. If there are new files, it will start checking again until they drop off once more.'
|
||||
help += 'Once the first check is done and an initial file velocity is established, the time to the next check will be based on what you set for the \'intended files per check\'. If the current file velocity is 10 files per 24 hours, and you set the intended files per check to 5 files, the checker will set the next check time to be 12 hours after the previous check time.'
|
||||
help += os.linesep * 2
|
||||
help += 'After a check is completed, the new file velocity and next check time are calculated, so when files are being posted frequently, it will check more often. When things are slow, it will slow down as well. There are also minimum and maximum check periods to smooth out the bumps.'
|
||||
help += os.linesep * 2
|
||||
help += 'But if you would rather just check at a fixed rate, check the checkbox and you will get a simpler \'static checking\' panel.'
|
||||
help += os.linesep * 2
|
||||
help += 'If the \'file velocity\' drops below a certain amount, the checker considers the source of files dead and will stop checking. If it falls into this state but you think there might have since been a rush of new files, hit the watcher or subscription\'s \'check now\' button in an attempt to revive the checker. If there are new files, it will start checking again until they drop off once more.'
|
||||
help += os.linesep * 2
|
||||
help += 'If you are still not comfortable with how this system works, the \'reasonable defaults\' are good fallbacks. Most of the time, setting some reasonable rules and leaving checkers to do their work is the best way to deal with this stuff, rather than obsessing over the exact perfect values you want for each situation.'
|
||||
|
||||
wx.MessageBox( help )
|
||||
|
||||
|
||||
def GetValue( self ):
|
||||
|
||||
intended_files_per_check = self._intended_files_per_check.GetValue()
|
||||
never_faster_than = self._never_faster_than.GetValue()
|
||||
never_slower_than = self._never_slower_than.GetValue()
|
||||
death_file_velocity = self._death_file_velocity.GetValue()
|
||||
|
||||
intended_files_per_check = self._intended_files_per_check.GetValue()
|
||||
|
||||
if self._flat_check_period_checkbox.GetValue() == True:
|
||||
|
||||
never_faster_than = self._flat_check_period.GetValue()
|
||||
never_slower_than = never_faster_than
|
||||
|
||||
else:
|
||||
|
||||
never_faster_than = self._never_faster_than.GetValue()
|
||||
never_slower_than = self._never_slower_than.GetValue()
|
||||
|
||||
|
||||
return ClientImportOptions.CheckerOptions( intended_files_per_check, never_faster_than, never_slower_than, death_file_velocity )
|
||||
|
||||
|
||||
|
|
|
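The help text above gives the reactive-timing rule concretely: at a file velocity of 10 files per 24 hours and an intended 5 files per check, the next check lands 12 hours later, clamped between the never-faster-than and never-slower-than bounds. A sketch of that arithmetic, assuming the simple proportional rule the help text describes (this is an illustration, not the hydrus implementation):

```python
def next_check_period(intended_files_per_check, files_in_period, period,
                      never_faster_than, never_slower_than):
    # If the last 'period' seconds produced 'files_in_period' new files, aim
    # for a gap expected to yield 'intended_files_per_check' files, clamped
    # to the configured bounds. Zero velocity falls back to the slowest rate.
    if files_in_period == 0:
        return never_slower_than
    raw = period * intended_files_per_check / files_in_period
    return min(never_slower_than, max(never_faster_than, raw))
```

Note how the static-checking mode in `GetValue` above degenerates to the same structure: when the flat checkbox is set, `never_faster_than == never_slower_than` and the clamp pins every result to that one period.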
@ -66,6 +66,13 @@ class CheckerOptions( HydrusSerialisable.SerialisableBase ):
|
|||
( self._intended_files_per_check, self._never_faster_than, self._never_slower_than, self._death_file_velocity ) = serialisable_info
|
||||
|
||||
|
||||
def GetDeathFileVelocityPeriod( self ):
|
||||
|
||||
( death_files_found, death_time_delta ) = self._death_file_velocity
|
||||
|
||||
return death_time_delta
|
||||
|
||||
|
||||
def GetNextCheckTime( self, seed_cache, last_check_time ):
|
||||
|
||||
if len( seed_cache ) == 0:
|
||||
|
@ -845,7 +852,7 @@ class TagImportOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
bad_tags = HydrusTags.SortNumericTags( bad_tags )
|
||||
|
||||
raise HydrusExceptions.VetoException( ', '.join( bad_tags ) + ' are blacklisted!' )
|
||||
raise HydrusExceptions.VetoException( ', '.join( bad_tags ) + ' is blacklisted!' )
|
||||
|
||||
|
||||
|
||||
|
|
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -26,7 +26,7 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
|
|||
self._network_contexts_to_bandwidth_trackers = collections.defaultdict( HydrusNetworking.BandwidthTracker )
|
||||
self._network_contexts_to_bandwidth_rules = collections.defaultdict( HydrusNetworking.BandwidthRules )
|
||||
|
||||
for context_type in [ CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_SUBSCRIPTION, CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE ]:
|
||||
for context_type in [ CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_SUBSCRIPTION, CC.NETWORK_CONTEXT_WATCHER_PAGE ]:
|
||||
|
||||
self._network_contexts_to_bandwidth_rules[ ClientNetworkingContexts.NetworkContext( context_type ) ] = HydrusNetworking.BandwidthRules()
|
||||
|
||||
|
|
|
@ -110,7 +110,7 @@ class NetworkContext( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
def IsEphemeral( self ):
|
||||
|
||||
return self.context_type in ( CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE )
|
||||
return self.context_type in ( CC.NETWORK_CONTEXT_DOWNLOADER_PAGE, CC.NETWORK_CONTEXT_WATCHER_PAGE )
|
||||
|
||||
|
||||
def GetSummary( self ):
|
||||
|
|
|
@ -427,19 +427,30 @@ class NetworkJob( object ):
|
|||
|
||||
else:
|
||||
|
||||
waiting_duration = self.engine.bandwidth_manager.GetWaitingEstimate( self._network_contexts )
|
||||
|
||||
if waiting_duration < 2:
|
||||
if self._bandwidth_manual_override_delayed_timestamp is not None and not HydrusData.TimeHasPassed( self._bandwidth_manual_override_delayed_timestamp ):
|
||||
|
||||
self._status_text = u'bandwidth free imminently\u2026'
|
||||
waiting_str = HydrusData.ConvertTimestampToPrettyPending( self._bandwidth_manual_override_delayed_timestamp )
|
||||
|
||||
self._status_text = u'overriding bandwidth ' + waiting_str + u'\u2026'
|
||||
|
||||
waiting_duration = HydrusData.GetNow() - self._bandwidth_manual_override_delayed_timestamp
|
||||
|
||||
else:
|
||||
|
||||
pending_timestamp = HydrusData.GetNow() + waiting_duration
|
||||
waiting_duration = self.engine.bandwidth_manager.GetWaitingEstimate( self._network_contexts )
|
||||
|
||||
waiting_str = HydrusData.ConvertTimestampToPrettyPending( pending_timestamp )
|
||||
|
||||
self._status_text = u'bandwidth free in ' + waiting_str + u'\u2026'
|
||||
if waiting_duration < 2:
|
||||
|
||||
self._status_text = u'bandwidth free imminently\u2026'
|
||||
|
||||
else:
|
||||
|
||||
pending_timestamp = HydrusData.GetNow() + waiting_duration
|
||||
|
||||
waiting_str = HydrusData.ConvertTimestampToPrettyPending( pending_timestamp )
|
||||
|
||||
self._status_text = u'bandwidth free in ' + waiting_str + u'\u2026'
|
||||
|
||||
|
||||
|
||||
if waiting_duration > 1200:
|
||||
|
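The restructured branch above adds a manual-override case in front of the old waiting-estimate logic. The selection order can be sketched as a pure function; the pretty time strings are passed in here because hydrus builds them from timestamps via `HydrusData.ConvertTimestampToPrettyPending`, and the function name is illustrative.

```python
def bandwidth_status(waiting_duration, pretty_time, override_pretty_time=None):
    # Mirrors the branch structure above: an unexpired manual override wins,
    # otherwise short waits say 'imminently' and longer ones show a countdown.
    if override_pretty_time is not None:
        return u'overriding bandwidth ' + override_pretty_time + u'\u2026'
    if waiting_duration < 2:
        return u'bandwidth free imminently\u2026'
    return u'bandwidth free in ' + pretty_time + u'\u2026'
```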
@ -1067,9 +1078,9 @@ class NetworkJobHydrus( NetworkJob ):
|
|||
|
||||
class NetworkJobWatcherPage( NetworkJob ):
|
||||
|
||||
def __init__( self, thread_key, method, url, body = None, referral_url = None, temp_path = None ):
|
||||
def __init__( self, watcher_key, method, url, body = None, referral_url = None, temp_path = None ):
|
||||
|
||||
self._thread_key = thread_key
|
||||
self._watcher_key = watcher_key
|
||||
|
||||
NetworkJob.__init__( self, method, url, body = body, referral_url = referral_url, temp_path = temp_path )
|
||||
|
||||
|
@ -1078,7 +1089,7 @@ class NetworkJobWatcherPage( NetworkJob ):
|
|||
|
||||
network_contexts = NetworkJob._GenerateNetworkContexts( self )
|
||||
|
||||
network_contexts.append( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_THREAD_WATCHER_PAGE, self._thread_key ) )
|
||||
network_contexts.append( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_WATCHER_PAGE, self._watcher_key ) )
|
||||
|
||||
return network_contexts
|
||||
|
||||
|
|
|
@ -419,7 +419,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
self._dictionary[ 'misc' ] = HydrusSerialisable.SerialisableDictionary()
|
||||
|
||||
self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ] = ClientImportOptions.CheckerOptions( intended_files_per_check = 4, never_faster_than = 300, never_slower_than = 86400, death_file_velocity = ( 1, 86400 ) )
|
||||
self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ] = ClientDefaults.GetDefaultCheckerOptions( 'thread' )
|
||||
|
||||
#
|
||||
|
||||
|
@ -675,7 +675,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
site_type = gallery_identifier.GetSiteType()
|
||||
|
||||
if site_type == HC.SITE_TYPE_THREAD_WATCHER:
|
||||
if site_type == HC.SITE_TYPE_WATCHER:
|
||||
|
||||
import ClientImportOptions
|
||||
|
||||
|
@ -741,7 +741,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetDefaultThreadCheckerOptions( self ):
|
||||
def GetDefaultWatcherCheckerOptions( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
|
@ -1028,7 +1028,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def SetDefaultThreadCheckerOptions( self, checker_options ):
|
||||
def SetDefaultWatcherCheckerOptions( self, checker_options ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
|
|
|
@@ -89,7 +89,7 @@ def ConvertParseResultToPrettyString( result ):
         priority = additional_info
 
-        return 'thread watcher page title (priority ' + str( priority ) + '): ' + parsed_text
+        return 'watcher page title (priority ' + str( priority ) + '): ' + parsed_text
 
     elif content_type == HC.CONTENT_TYPE_VETO:
@@ -168,7 +168,7 @@ def ConvertParsableContentToPrettyString( parsable_content, include_veto = False
     elif content_type == HC.CONTENT_TYPE_TITLE:
 
-        pretty_strings.append( 'thread watcher page title' )
+        pretty_strings.append( 'watcher page title' )
 
     elif content_type == HC.CONTENT_TYPE_VETO:
@@ -1,6 +1,8 @@
 import HydrusConstants as HC
+import HydrusData
 import HydrusGlobals as HG
 import HydrusPaths
+import os
 import webbrowser
 
 def DeletePath( path ):
@@ -16,7 +18,7 @@ def DeletePath( path ):
 def GetCurrentTempDir():
 
-    temp_path_override = HG.client_controller.new_options.GetNoneableString( 'temp_path_override' )
+    temp_path_override = GetTempPathOverride()
 
     if temp_path_override is None:
@@ -29,13 +31,13 @@ def GetCurrentTempDir():
 def GetTempDir():
 
-    temp_path_override = HG.client_controller.new_options.GetNoneableString( 'temp_path_override' )
+    temp_path_override = GetTempPathOverride()
 
     return HydrusPaths.GetTempDir( dir = temp_path_override ) # none means default
 
 def GetTempPath( suffix = '' ):
 
-    temp_path_override = HG.client_controller.new_options.GetNoneableString( 'temp_path_override' )
+    temp_path_override = GetTempPathOverride()
 
     return HydrusPaths.GetTempPath( suffix = suffix, dir = temp_path_override )
@@ -56,3 +58,16 @@ def LaunchURLInWebBrowser( url ):
     HydrusPaths.LaunchFile( url, launch_path = web_browser_path )
 
+def GetTempPathOverride():
+
+    temp_path_override = HG.client_controller.new_options.GetNoneableString( 'temp_path_override' )
+
+    if temp_path_override is not None and not os.path.exists( temp_path_override ):
+
+        HydrusData.ShowText( 'The temp path ' + temp_path_override + ' does not exist! Please either create it or change the option!' )
+
+        return None
+
+    return temp_path_override
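The hunk above folds three copies of the temp-path-override lookup into a single GetTempPathOverride helper that also validates the configured path before use. A minimal sketch of the same pattern in plain Python — the function and argument names here are illustrative, not the client's actual API:

```python
import os
import tempfile

def get_temp_path_override( configured_override ):
    """Return the configured temp dir override, or None if it is unset or missing on disk.
    
    Falling back to None lets callers use the platform default temp location,
    which fails softer than hitting a confusing write error later.
    (Illustrative name; the real client also shows a warning to the user here.)"""
    
    if configured_override is not None and not os.path.exists( configured_override ):
        
        return None
        
    
    return configured_override

def get_temp_path( suffix = '', configured_override = None ):
    
    # dir = None makes tempfile fall back to the platform default location
    return tempfile.mkstemp( suffix = suffix, dir = get_temp_path_override( configured_override ) )
```

Centralising the check means a stale override degrades gracefully everywhere at once instead of each call site handling it separately.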
@@ -49,7 +49,7 @@ options = {}
 # Misc
 
 NETWORK_VERSION = 18
-SOFTWARE_VERSION = 307
+SOFTWARE_VERSION = 308
 
 UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -643,7 +643,7 @@ SITE_TYPE_HENTAI_FOUNDRY_TAGS = 12
 SITE_TYPE_PIXIV_ARTIST_ID = 13
 SITE_TYPE_PIXIV_TAG = 14
 SITE_TYPE_DEFAULT = 15
-SITE_TYPE_THREAD_WATCHER = 16
+SITE_TYPE_WATCHER = 16
 
 site_type_string_lookup = {}
@@ -663,7 +663,7 @@ site_type_string_lookup[ SITE_TYPE_PIXIV ] = 'pixiv'
 site_type_string_lookup[ SITE_TYPE_PIXIV_ARTIST_ID ] = 'pixiv artist id'
 site_type_string_lookup[ SITE_TYPE_PIXIV_TAG ] = 'pixiv tag'
 site_type_string_lookup[ SITE_TYPE_TUMBLR ] = 'tumblr'
-site_type_string_lookup[ SITE_TYPE_THREAD_WATCHER ] = 'thread watcher'
+site_type_string_lookup[ SITE_TYPE_WATCHER ] = 'watcher'
 
 TIMESTAMP_TYPE_SOURCE = 0
@@ -45,7 +45,8 @@ class HydrusController( object ):
         self._caches = {}
         self._managers = {}
 
-        self._job_scheduler = None
+        self._fast_job_scheduler = None
+        self._slow_job_scheduler = None
 
         self._call_to_threads = []
         self._long_running_call_to_threads = []
@@ -117,6 +118,18 @@ class HydrusController( object ):
 
+    def _GetAppropriateJobScheduler( self, time_delta ):
+
+        if time_delta < 1.0:
+
+            return self._fast_job_scheduler
+
+        else:
+
+            return self._slow_job_scheduler
+
+
     def _InitDB( self ):
 
         raise NotImplementedError()
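_GetAppropriateJobScheduler routes a job by its delay: anything due in under a second goes to a fast scheduler, everything else to a slow one, so cheap imminent work is never queued behind long timers. A toy sketch of the routing idea — the class and attribute names are made up for illustration, not Hydrus's real scheduler API:

```python
class TwoSpeedScheduler( object ):
    """Illustrative sketch: route jobs to a 'fast' or 'slow' queue by delay."""
    
    FAST_THRESHOLD = 1.0 # seconds; jobs due sooner than this share the fast queue
    
    def __init__( self ):
        
        self.fast_jobs = []
        self.slow_jobs = []
        
    
    def _get_appropriate_queue( self, time_delta ):
        
        if time_delta < self.FAST_THRESHOLD:
            
            return self.fast_jobs
            
        else:
            
            return self.slow_jobs
            
        
    
    def call_later( self, initial_delay, func ):
        
        # record the job on whichever queue matches its urgency
        self._get_appropriate_queue( initial_delay ).append( ( initial_delay, func ) )
```

The payoff is latency isolation: a burst of hour-long repeating jobs cannot delay a 0.1-second UI callback, because the two classes of work never share a queue.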
@@ -209,22 +222,26 @@ class HydrusController( object ):
     def CallLater( self, initial_delay, func, *args, **kwargs ):
 
+        job_scheduler = self._GetAppropriateJobScheduler( initial_delay )
+
         call = HydrusData.Call( func, *args, **kwargs )
 
-        job = HydrusThreading.SchedulableJob( self, self._job_scheduler, initial_delay, call )
+        job = HydrusThreading.SchedulableJob( self, job_scheduler, initial_delay, call )
 
-        self._job_scheduler.AddJob( job )
+        job_scheduler.AddJob( job )
 
         return job
 
 
     def CallRepeating( self, initial_delay, period, func, *args, **kwargs ):
 
+        job_scheduler = self._GetAppropriateJobScheduler( period )
+
         call = HydrusData.Call( func, *args, **kwargs )
 
-        job = HydrusThreading.RepeatingJob( self, self._job_scheduler, initial_delay, period, call )
+        job = HydrusThreading.RepeatingJob( self, job_scheduler, initial_delay, period, call )
 
-        self._job_scheduler.AddJob( job )
+        job_scheduler.AddJob( job )
 
         return job
@@ -309,8 +326,14 @@ class HydrusController( object ):
     def DebugShowScheduledJobs( self ):
 
-        summary = self._job_scheduler.GetPrettyJobSummary()
+        summary = self._fast_job_scheduler.GetPrettyJobSummary()
 
+        HydrusData.ShowText( 'fast scheduler:' )
         HydrusData.ShowText( summary )
+
+        summary = self._slow_job_scheduler.GetPrettyJobSummary()
+
+        HydrusData.ShowText( 'slow scheduler:' )
+        HydrusData.ShowText( summary )
@@ -358,9 +381,11 @@ class HydrusController( object ):
     def InitModel( self ):
 
-        self._job_scheduler = HydrusThreading.JobScheduler( self )
+        self._fast_job_scheduler = HydrusThreading.JobScheduler( self )
+        self._slow_job_scheduler = HydrusThreading.JobScheduler( self )
 
-        self._job_scheduler.start()
+        self._fast_job_scheduler.start()
+        self._slow_job_scheduler.start()
 
         self.db = self._InitDB()
@@ -398,7 +423,8 @@ class HydrusController( object ):
         self.pub( 'memory_maintenance_pulse' )
 
-        self._job_scheduler.ClearOutDead()
+        self._fast_job_scheduler.ClearOutDead()
+        self._slow_job_scheduler.ClearOutDead()
 
 
     def MaintainMemorySlow( self ):
def MaintainMemorySlow( self ):
|
||||
|
@ -469,11 +495,18 @@ class HydrusController( object ):
|
|||
|
||||
|
||||
|
||||
if self._job_scheduler is not None:
|
||||
if self._fast_job_scheduler is not None:
|
||||
|
||||
self._job_scheduler.shutdown()
|
||||
self._fast_job_scheduler.shutdown()
|
||||
|
||||
self._job_scheduler = None
|
||||
self._fast_job_scheduler = None
|
||||
|
||||
|
||||
if self._slow_job_scheduler is not None:
|
||||
|
||||
self._slow_job_scheduler.shutdown()
|
||||
|
||||
self._slow_job_scheduler = None
|
||||
|
||||
|
||||
|
||||
|
|
|
@@ -244,6 +244,11 @@ def DeletePath( path ):
 
+def DirectoryIsWritable( path ):
+
+    if not os.path.exists( path ):
+
+        return False
+
+    try:
+
+        t = tempfile.TemporaryFile( dir = path )
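DirectoryIsWritable probes writability by actually creating a temporary file in the directory, which is more trustworthy than permission-bit checks (os.access can mislead on network shares and some permission setups). A self-contained sketch of the same check, with an illustrative lowercase name rather than the client's own function:

```python
import os
import tempfile

def directory_is_writable( path ):
    """Illustrative sketch: probe a directory by creating a real temp file in it."""
    
    # a missing directory cannot be written to
    if not os.path.exists( path ):
        
        return False
        
    
    try:
        
        # TemporaryFile is deleted automatically when the handle closes
        with tempfile.TemporaryFile( dir = path ):
            
            pass
            
        
        return True
        
    except ( OSError, IOError ):
        
        return False
```

The probe file is created and removed in one step, so the check leaves no residue behind on success.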
@@ -31,7 +31,7 @@ SERIALISABLE_TYPE_GUI_SESSION = 13
 SERIALISABLE_TYPE_PREDICATE = 14
 SERIALISABLE_TYPE_FILE_SEARCH_CONTEXT = 15
 SERIALISABLE_TYPE_EXPORT_FOLDER = 16
-SERIALISABLE_TYPE_THREAD_WATCHER_IMPORT = 17
+SERIALISABLE_TYPE_WATCHER_IMPORT = 17
 SERIALISABLE_TYPE_SIMPLE_DOWNLOADER_IMPORT = 18
 SERIALISABLE_TYPE_IMPORT_FOLDER = 19
 SERIALISABLE_TYPE_GALLERY_IMPORT = 20
@@ -2,6 +2,7 @@ import bisect
 import collections
+import HydrusExceptions
 import Queue
 import random
 import threading
 import time
 import traceback
@@ -381,6 +382,8 @@ class JobScheduler( threading.Thread ):
     def _StartWork( self ):
 
+        jobs_started = 0
+
         while True:
 
             with self._waiting_lock:
@@ -390,6 +393,11 @@ class JobScheduler( threading.Thread ):
                     break
 
+                if jobs_started >= 10: # try to avoid spikes
+
+                    break
+
                 next_job = self._waiting[0]
 
             if next_job.IsDue():
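The jobs_started cap above bounds how many jobs one scheduler pass may launch, so a backlog of simultaneously-due jobs is drained in batches rather than in a single spike. A standalone sketch of that loop over a time-sorted queue — the function and parameter names are invented for illustration:

```python
def start_due_jobs( waiting_jobs, now, max_starts = 10 ):
    """Illustrative sketch: pop and 'start' due jobs from a time-sorted list of
    due-timestamps, capping starts per pass to avoid spikes. Returns the jobs
    started; any leftover due jobs simply wait for the next pass."""
    
    started = []
    
    while len( waiting_jobs ) > 0:
        
        if len( started ) >= max_starts: # try to avoid spikes
            
            break
            
        
        next_job = waiting_jobs[0]
        
        if next_job <= now:
            
            started.append( waiting_jobs.pop( 0 ) )
            
        else:
            
            break # the queue is time-sorted, so the rest are not due either
            
        
    
    return started
```

Because the queue stays sorted, the loop can stop at the first not-yet-due job instead of scanning everything, and the cap turns a thundering herd into a steady trickle.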
@@ -398,6 +406,8 @@ class JobScheduler( threading.Thread ):
 
                 next_job.StartWork()
 
+                jobs_started += 1
+
             else:
 
                 break # all the rest in the queue are not due
@@ -587,9 +597,14 @@ class SchedulableJob( object ):
         self._BootWorker()
 
 
-    def Wake( self ):
+    def Wake( self, next_work_time = None ):
 
-        self._next_work_time = HydrusData.GetNowFloat()
+        if next_work_time is None:
+
+            next_work_time = HydrusData.GetNowFloat()
+
+        self._next_work_time = next_work_time
 
         self._scheduler.WorkTimesHaveChanged()
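Wake now takes an optional next_work_time: None keeps the old behaviour of "work as soon as possible", while a timestamp reschedules the job for a specific moment through the same call. A minimal sketch of that default-argument pattern, using time.time() in place of the client's clock helper (class name invented for illustration):

```python
import time

class SchedulableJobSketch( object ):
    """Illustrative sketch of a job whose wake call can optionally reschedule."""
    
    def __init__( self ):
        
        self._next_work_time = None
        
    
    def wake( self, next_work_time = None ):
        
        # None means 'work as soon as possible'; a timestamp reschedules instead
        if next_work_time is None:
            
            next_work_time = time.time()
            
        
        self._next_work_time = next_work_time
```

One entry point covering both cases keeps callers simple: anything that used to call wake() is untouched, and new callers can defer work without a second method.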
@@ -641,6 +656,11 @@ class RepeatingJob( SchedulableJob ):
     def SetPeriod( self, period ):
 
+        if period > 10.0:
+
+            period += random.random() # smooth out future spikes if ten of these all fire at the same time
+
         self._period = period
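SetPeriod adds up to a second of random jitter to long periods, so repeating jobs created in the same instant drift apart instead of firing together forever. The same idea as a standalone function (the name is illustrative):

```python
import random

def jittered_period( period ):
    """Illustrative sketch: add up to one second of random jitter to long
    periods so many jobs created together do not all fire at the same
    instant on every subsequent cycle."""
    
    if period > 10.0:
        
        period += random.random() # random float in [0.0, 1.0)
        
    
    return period
```

Short periods are left exact, since a one-second shift would be a large relative error there, while for minute-or-hour periods the jitter is negligible to the user but enough to desynchronise the herd.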
@@ -655,7 +655,7 @@ class TestClientDB( unittest.TestCase ):
         #
 
-        management_controller = ClientGUIManagement.CreateManagementControllerImportThreadWatcher()
+        management_controller = ClientGUIManagement.CreateManagementControllerImportWatcher()
 
         page = ClientGUIPages.Page( test_frame, HG.test_controller, management_controller, [] )
@@ -745,7 +745,7 @@ class TestClientDB( unittest.TestCase ):
 
-            self.assertEqual( page_names, [ u'hentai foundry artist', u'import', u'thread watcher', u'simple downloader', u'example tag repo petitions', u'search', u'search', u'files', u'wew lad', u'files' ] )
+            self.assertEqual( page_names, [ u'hentai foundry artist', u'import', u'watcher', u'simple downloader', u'example tag repo petitions', u'search', u'search', u'files', u'wew lad', u'files' ] )
 
         finally:
(binary image files changed: one added, 2.2 KiB; one updated, 2.1 KiB to 2.0 KiB)

test.py
@@ -218,6 +218,11 @@ class Controller( object ):
         self._pubsub.sub( object, method_name, topic )
 
+    def AcquirePageKey( self ):
+
+        return HydrusData.GenerateKey()
+
     def CallBlockingToWx( self, func, *args, **kwargs ):
 
         def wx_code( job_key ):
@@ -387,7 +392,7 @@ class Controller( object ):
         return HG.model_shutdown
 
-    def PageCompletelyDestroyed( self, page_key ):
+    def PageAlive( self, page_key ):
 
         return False
@@ -407,6 +412,11 @@ class Controller( object ):
         pass
 
+    def ReleasePageKey( self, page_key ):
+
+        pass
+
     def ReportDataUsed( self, num_bytes ):
 
         pass