Version 265

This commit is contained in:
Hydrus Network Developer 2017-07-19 16:21:41 -05:00
parent 0f28bf5e9c
commit a1c9416d2d
36 changed files with 1630 additions and 536 deletions

View File

@ -4,7 +4,7 @@ The hydrus network client is an application written for Anon and other internet-
I am continually working on the software and try to put out a new release every Wednesday by 8pm Eastern.
This github repository is currently a weekly sync with my home dev environment, where I work on hydrus by myself. Feel free to fork, but please don't make pull requests at this time. I am also not active on Github, so if you have feedback of any sort, please email me, post on my 8chan board, or message me on tumblr or twitter.
This github repository is currently a weekly sync with my home dev environment, where I work on hydrus by myself. Feel free to fork, but please don't make pull requests at this time. I am also not active on Github, so if you have feedback of any sort, please email me, post on my 8chan board, or message me on tumblr or twitter or the discord.
The client can do quite a lot! Please check out the help inside the release or [here](http://hydrusnetwork.github.io/hydrus/help), which includes a comprehensive getting started guide.

View File

@ -8,6 +8,40 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 265</h3></li>
<ul>
<li>the bandwidth engine now recognises individual thread watcher threads as a network context that can inherit default bandwidth rules</li>
<li>tweaked default bandwidth rules and reset existing rules to this new default</li>
<li>review all bandwidth frame now has a time delta button to choose how the network contexts are filtered</li>
<li>review all bandwidth frame now updates itself every 20 seconds or so</li>
<li>review all bandwidth frame now has a 'delete history' button</li>
<li>review all bandwidth frame now shows if services have specific rules</li>
<li>review all bandwidth frame now has an 'edit default rules' button that lets you select and set rules for default network contexts</li>
<li>review network context bandwidth frame now has a bar chart to show historical usage!</li>
<li>the bar chart is optional, depending on matplotlib availability</li>
<li>review network context bandwidth frame now lists current bandwidth rules and current usage. it says whether these are default or specific rules</li>
<li>review network context bandwidth frame now has a button to edit/clear specific rules</li>
<li>rows of bandwidth rules and current usage, where presented in ui, are now ordered in ascending time delta</li>
<li>misc bandwidth code improvements</li>
<li>client file imports are now bundled into their own job object that generates cpu-expensive file metadata outside of the main file and database locks. file imports are now much less laggy and should generally make the ui feel much less blocked</li>
<li>removed the database 'rebalance files' menu entry</li>
<li>removed the 'client files location' page from options</li>
<li>db client_files rebalance will no longer occur in idle or shutdown time</li>
<li>(this stuff is now handled in the migrate database dialog)</li>
<li>'migrate database' now uses a dialog, meaning you cannot interact with the rest of the program while it is open</li>
<li>migrate database now has file location editing verbs--add, remove, +/- weight, rebalance_now. thumbnail location and portable db migration will be added next week</li>
<li>fleshed out the backup guide in the getting started help, including updates to reflect the new internal process</li>
<li>the client now saves the 'last session' gui session before running a database backup</li>
<li>the shutdown maintenance yes/no dialog will now auto-no after 15 seconds</li>
<li>gave status bar tabs a bit more space for their text (some window managers were cutting them off)</li>
<li>tumblr api lookups are now https</li>
<li>tumblr files uploaded pre-2013 will no longer receive the 68. subdomain stripping, as they are not supported at the media.tumblr.com domain (much like 'raw' urls)</li>
<li>pages will now not 'start' their download queues or thread checkers or whatever data checking loops they have until their initial media results are loaded</li>
<li>key events started from an autocomplete entry but consumed by a higher window (typically F5 or F9/ctrl+t for refresh or new page at the main gui level) will no longer be duplicated</li>
<li>fixed a shutdown issue with network job controls that could break a clean shutdown in some circumstances</li>
<li>if the user attempts to create more than 128 pages, the client will now instead complain with a popup message. Due to OS-based gui handle limits, more than this many pages increasingly risks a crash</li>
<li>if the client has more than 128 pages both open and waiting in the undo menu, it will destroy the 'closed' ones</li>
</ul>
<li><h3>version 264</h3></li>
<ul>
<li>converted the page of images downloader to the new network engine</li>

View File

@ -6,6 +6,7 @@
</head>
<body>
<div class="content">
<p><b class="warning">DRAFT</b></p>
<h3>the hydrus database</h3>
<p>A hydrus client consists of three components:</p>
<ol>
@ -22,11 +23,11 @@
<li>
<b>your media files</b>
<p>All of your jpegs and webms and so on (and their thumbnails) are stored in a single complicated directory that is by default at <i>install_dir/db/client_files</i>. All the files are named by their hash and stored in efficient hash-based subdirectories. In general, it is not navigable by humans, but it works very well for the fast access to a giant pool of files that the client needs to manage your media.</p>
<p>Thumbnails tend to be fetched dozens at a time, so it is, again, ideal if they are stored on an SSD. Your regular media files--which on many clients total hundreds of GB--are usually fetched one at a time for human consumption and do not benefit from the expensive low-latency of an SSD. They are best stored on a cheap HDD, and, if desired, also work well when stored across a network file system.</p>
<p>Thumbnails tend to be fetched dozens at a time, so it is, again, ideal if they are stored on an SSD. Your regular media files--which on many clients total hundreds of GB--are usually fetched one at a time for human consumption and do not benefit from the expensive low-latency of an SSD. They are best stored on a cheap HDD, and, if desired, also work well across a network file system.</p>
</li>
</ol>
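The hash-named, hash-bucketed layout described above can be sketched as follows. This is an illustration only: the hash algorithm and the exact subdirectory scheme (here an 'f' prefix plus the first two hex digits) are assumptions, not a guaranteed match for the client's real layout.

```python
import hashlib
import os


def sketch_storage_path(db_dir, file_bytes, ext):
    # name the file by its hash and bucket it into a small hash-prefixed
    # subdirectory, so no single folder holds the whole collection
    h = hashlib.sha256(file_bytes).hexdigest()
    return os.path.join(db_dir, 'client_files', 'f' + h[:2], h + ext)
```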
<h3>these components can be put on different drives</h3>
<p>Although an initial install will keep these parts together, it is possible to run the database on a fast drive but keep your files in cheap slow storage. And if you have a very large collection, you can even spread your files across multiple drives. It is not very technically difficult, but I do not recommend it for new users.</p>
<p>Although an initial install will keep these parts together, it is possible to run the database on a fast drive but keep your media in cheap slow storage. And if you have a very large collection, you can even spread your files across multiple drives. It is not very technically difficult, but I do not recommend it for new users.</p>
<h3>pulling them apart</h3>
<p><b class="warning">As always, I recommend creating a backup before you try any of this, just in case it goes wrong.</b></p>
<p>If you have multiple drives and would like to spread your database across them, please do not move the folders around yourself--the database has an internal 'knowledge' of where it thinks its thumbnail and file folders are, and if you move them while it is closed, it will throw 'missing path' errors as soon as it boots. The internal hydrus logic of relative and absolute paths is not always obvious, so it is easy to make mistakes, even if you think you know what you are doing. Instead, please do it through the gui:</p>

View File

@ -46,10 +46,22 @@
<p>Unless the update specifically disables or reconfigures something, all your files and tags and settings will be remembered after the update.</p>
<h3>backing up</h3>
<p>You <i>do</i> backup, right? <i>Right</i>?</p>
<p>I run a backup every week so that if my computer blows up, I'll at worst have lost a few days' work. Before I did this, I once lost an entire drive with tens of thousands of files, and it didn't feel great at all. I only push backups so hard so you might avoid what I felt. ;_;</p>
<p>I use <a href="http://www.abstractspoon.com/tdl_resources.html">ToDoList</a> to remind me of my jobs for the day, including backup tasks, and <a href="http://sourceforge.net/projects/freefilesync/">FreeFileSync</a> to actually mirror over to an external usb drive.</p>
<p>If you want to backup hydrus, you can either go <i>database->create database backup</i> or just shut the client down and copy the entire install directory somewhere.</p>
<p>I recommend you do it before you update, just in case there is a problem with my code that breaks your database. If that happens, please <a href="contact.html">contact me</a>, describing the problem, and revert to the functioning older version. I'll get on any problems like that immediately.</p>
<p>I run a backup every week so that if my computer blows up or anything else awful happens, I'll at worst have lost a few days' work. Before I did this, I once lost an entire drive with tens of thousands of files, and it sucked. I encourage backups so you might avoid what I felt. ;_;</p>
<p>I use <a href="http://www.abstractspoon.com/tdl_resources.html">ToDoList</a> to remind me of my jobs for the day, including backup tasks, and <a href="http://sourceforge.net/projects/freefilesync/">FreeFileSync</a> to actually mirror over to an external usb drive. I recommend both highly.</p>
<p>By default, hydrus stores all your user data in one location, so backing up is simple:</p>
<ul>
<li>
<p><h3>the easy way - inside the client</h3></p>
<p>Go <i>database->set up a database backup location</i> in the client. This will tell the client where you want your backup to be stored. A fresh, empty directory on a different drive is ideal.</p>
<p>Once you have your location set up, you can thereafter hit <i>database->update database backup</i>. It will lock everything and mirror your files, showing its progress in a popup message. The first time you make this backup, it may take a little while (as it will have to fully copy your database and all its files), but after that, it will only have to copy new or altered files and should only ever take a couple of minutes.</p>
</li>
<li>
<p><h3>the powerful way - using an external program</h3></p>
<p>If you are more comfortable copying directories around yourself, would like to integrate hydrus into a broader backup scheme you already run, or you are an advanced user with a complicated hydrus install that you have migrated across multiple drives, then you need to backup two things: the client*.db files and your client_files directory(ies). By default, they are all stored in install_dir/db. The .db files contain your settings and metadata like tags, while the client_files subdirs store your actual media and its thumbnails. If everything is still under install_dir/db, then it is usually easiest to just backup the whole install dir, keeping a functional 'portable' copy of your install that you can restore without trouble. Make sure you keep the .db files together--they are not interchangeable and mostly useless on their own!</p>
<p>Shut the client down while you run the backup, obviously.</p>
</li>
</ul>
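For the external-program route, a minimal mirror script might look like this. It assumes the default layout described above (everything under install_dir/db) and is only a sketch: a proper sync tool like FreeFileSync copies deltas instead of re-mirroring, and as the guide says, the client must be shut down first.

```python
import os
import shutil


def backup_hydrus_db(db_dir, backup_dir):
    # copy the client*.db files and mirror the client_files directory;
    # run this only while the client is shut down
    os.makedirs(backup_dir, exist_ok=True)
    for name in os.listdir(db_dir):
        src = os.path.join(db_dir, name)
        dst = os.path.join(backup_dir, name)
        if name.startswith('client') and name.endswith('.db'):
            shutil.copy2(src, dst)
        elif name == 'client_files' and os.path.isdir(src):
            if os.path.isdir(dst):
                shutil.rmtree(dst)  # crude full re-mirror; a real tool copies deltas
            shutil.copytree(src, dst)
```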
<p>I recommend you always backup before you update, just in case there is a problem with my code that breaks your database. If that happens, please <a href="contact.html">contact me</a>, describing the problem, and revert to the functioning older version. I'll get on any problems like that immediately.</p>
<p class="right"><a href="getting_started_files.html">Let's import some files! ----></a></p>
</div>
</body>

View File

@ -22,7 +22,7 @@
<li><h3>starting out</h3></li>
<ul>
<li><a href="introduction.html">introduction and statement of principles</a></li>
<li><a href="getting_started_installing.html">installing and updating</a></li>
<li><a href="getting_started_installing.html">installing, updating and backing up</a></li>
<li><a href="getting_started_files.html">getting started with files</a></li>
<li><a href="getting_started_tags.html">getting started with tags</a></li>
<li><a href="getting_started_ratings.html">getting started with ratings</a></li>

View File

@ -972,12 +972,49 @@ class ClientFilesManager( object ):
def ImportFile( self, *args, **kwargs ):
def ImportFile( self, file_import_job ):
with self._lock:
file_import_job.GenerateHashAndStatus()
hash = file_import_job.GetHash()
if file_import_job.IsNewToDB():
return self._controller.WriteSynchronous( 'import_file', *args, **kwargs )
file_import_job.GenerateInfo()
( good_to_import, reason ) = file_import_job.IsGoodToImport()
if good_to_import:
with self._lock:
( temp_path, thumbnail ) = file_import_job.GetTempPathAndThumbnail()
mime = file_import_job.GetMime()
self.LocklessAddFile( hash, mime, temp_path )
if thumbnail is not None:
self.LocklessAddFullSizeThumbnail( hash, thumbnail )
import_status = self._controller.WriteSynchronous( 'import_file', file_import_job )
else:
raise Exception( reason )
else:
file_import_job.PubsubContentUpdates()
import_status = file_import_job.GetPreImportStatus()
return ( import_status, hash )
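The ImportFile refactor above follows a common pattern: do the cpu-expensive work (hashing, thumbnailing) with no lock held, then take the lock only for the cheap mutation. A minimal sketch of that pattern, using illustrative names rather than the real hydrus API:

```python
import hashlib
import threading


class FileImportJobSketch(object):

    def __init__(self, file_bytes):
        self._file_bytes = file_bytes
        self.hash = None

    def generate_info(self):
        # expensive: runs with no lock held
        self.hash = hashlib.sha256(self._file_bytes).hexdigest()


class FilesManagerSketch(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._files = {}

    def import_file(self, job):
        job.generate_info()      # outside the lock: hashing and similar work
        with self._lock:         # inside the lock: just the cheap mutation
            self._files[job.hash] = job._file_bytes
        return job.hash
```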
def LocklessGetFilePath( self, hash, mime = None ):
@ -1043,7 +1080,7 @@ class ClientFilesManager( object ):
return os.path.exists( path )
def Rebalance( self, partial = True, stop_time = None ):
def Rebalance( self, stop_time = None ):
if self._bad_error_occured:
@ -1060,26 +1097,13 @@ class ClientFilesManager( object ):
text = 'Moving \'' + prefix + '\' from ' + overweight_location + ' to ' + underweight_location
if partial:
HydrusData.Print( text )
else:
self._controller.pub( 'splash_set_status_text', text )
HydrusData.ShowText( text )
HydrusData.Print( text )
# these two lines can cause a deadlock because the db sometimes calls stuff in here.
self._controller.Write( 'relocate_client_files', prefix, overweight_location, underweight_location )
self._Reinit()
if partial:
break
if stop_time is not None and HydrusData.TimeHasPassed( stop_time ):
return
@ -1096,26 +1120,13 @@ class ClientFilesManager( object ):
text = 'Recovering \'' + prefix + '\' from ' + recoverable_location + ' to ' + correct_location
if partial:
HydrusData.Print( text )
else:
self._controller.pub( 'splash_set_status_text', text )
HydrusData.ShowText( text )
HydrusData.Print( text )
recoverable_path = os.path.join( recoverable_location, prefix )
correct_path = os.path.join( correct_location, prefix )
HydrusPaths.MergeTree( recoverable_path, correct_path )
if partial:
break
if stop_time is not None and HydrusData.TimeHasPassed( stop_time ):
return
@ -1125,11 +1136,6 @@ class ClientFilesManager( object ):
if not partial:
HydrusData.ShowText( 'All folders balanced!' )
def RegenerateResizedThumbnail( self, hash ):
@ -1139,6 +1145,14 @@ class ClientFilesManager( object ):
def RebalanceWorkToDo( self ):
with self._lock:
return self._GetRebalanceTuple() is not None
def RegenerateThumbnails( self, only_do_missing = False ):
with self._lock:

View File

@ -251,6 +251,7 @@ NETWORK_CONTEXT_DOMAIN = 2
NETWORK_CONTEXT_DOWNLOADER = 3
NETWORK_CONTEXT_DOWNLOADER_QUERY = 4
NETWORK_CONTEXT_SUBSCRIPTION = 5
NETWORK_CONTEXT_THREAD_WATCHER_THREAD = 6
network_context_type_string_lookup = {}
@ -260,6 +261,7 @@ network_context_type_string_lookup[ NETWORK_CONTEXT_DOMAIN ] = 'web domain'
network_context_type_string_lookup[ NETWORK_CONTEXT_DOWNLOADER ] = 'downloader'
network_context_type_string_lookup[ NETWORK_CONTEXT_DOWNLOADER_QUERY ] = 'downloader query instance'
network_context_type_string_lookup[ NETWORK_CONTEXT_SUBSCRIPTION ] = 'subscription'
network_context_type_string_lookup[ NETWORK_CONTEXT_THREAD_WATCHER_THREAD ] = 'thread watcher thread instance'
network_context_type_description_lookup = {}
@ -269,6 +271,7 @@ network_context_type_description_lookup[ NETWORK_CONTEXT_DOMAIN ] = 'Network tra
network_context_type_description_lookup[ NETWORK_CONTEXT_DOWNLOADER ] = 'Network traffic going through this downloader.'
network_context_type_description_lookup[ NETWORK_CONTEXT_DOWNLOADER_QUERY ] = 'Network traffic going through this single downloader query (you probably shouldn\'t be able to see this!)'
network_context_type_description_lookup[ NETWORK_CONTEXT_SUBSCRIPTION ] = 'Network traffic going through this subscription.'
network_context_type_description_lookup[ NETWORK_CONTEXT_THREAD_WATCHER_THREAD ] = 'Network traffic going through this single thread watch (you probably shouldn\'t be able to see this!)'
SHORTCUT_MODIFIER_CTRL = 0
SHORTCUT_MODIFIER_ALT = 1
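The changelog entry about thread watcher threads inheriting default bandwidth rules suggests a two-level lookup: specific rules for one concrete context, falling back to defaults keyed on the context type. A sketch under that assumption (the names below are not the real hydrus API):

```python
from collections import namedtuple

# a context pairs a type constant with instance data, e.g. one thread's url
NetworkContextSketch = namedtuple('NetworkContextSketch', ['context_type', 'context_data'])


def resolve_rules(context, specific_rules, default_rules_by_type):
    # a concrete context (e.g. one thread watcher thread) gets its own
    # rules if any are set, otherwise the defaults for its context type
    if context in specific_rules:
        return specific_rules[context]
    return default_rules_by_type.get(context.context_type)
```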

View File

@ -67,48 +67,6 @@ class Controller( HydrusController.HydrusController ):
return ClientDB.DB( self, self.db_dir, 'client', no_wal = self._no_wal )
def BackupDatabase( self ):
path = self._new_options.GetNoneableString( 'backup_path' )
if path is None:
wx.MessageBox( 'No backup path is set!' )
return
if not os.path.exists( path ):
wx.MessageBox( 'The backup path does not exist--creating it now.' )
HydrusPaths.MakeSureDirectoryExists( path )
client_db_path = os.path.join( path, 'client.db' )
if os.path.exists( client_db_path ):
action = 'Update the existing'
else:
action = 'Create a new'
text = action + ' backup at "' + path + '"?'
text += os.linesep * 2
text += 'The database will be locked while the backup occurs, which may lock up your gui as well.'
with ClientGUIDialogs.DialogYesNo( self._gui, text ) as dlg_yn:
if dlg_yn.ShowModal() == wx.ID_YES:
self.Write( 'backup', path )
def CallBlockingToWx( self, func, *args, **kwargs ):
def wx_code( job_key ):
@ -414,8 +372,6 @@ class Controller( HydrusController.HydrusController ):
stop_time = HydrusData.GetNow() + ( self._options[ 'idle_shutdown_max_minutes' ] * 60 )
self.client_files_manager.Rebalance( partial = False, stop_time = stop_time )
self.MaintainDB( stop_time = stop_time )
if not self._options[ 'pause_repo_sync' ]:
@ -459,15 +415,19 @@ class Controller( HydrusController.HydrusController ):
if idle_shutdown_action == CC.IDLE_ON_SHUTDOWN_ASK_FIRST:
text = 'Is now a good time for the client to do up to ' + HydrusData.ConvertIntToPrettyString( idle_shutdown_max_minutes ) + ' minutes\' maintenance work?'
text = 'Is now a good time for the client to do up to ' + HydrusData.ConvertIntToPrettyString( idle_shutdown_max_minutes ) + ' minutes\' maintenance work? (Will auto-no in 15 seconds)'
with ClientGUIDialogs.DialogYesNo( self._splash, text, title = 'Maintenance is due' ) as dlg_yn:
call_later = wx.CallLater( 15000, dlg_yn.EndModal, wx.ID_NO )
if dlg_yn.ShowModal() == wx.ID_YES:
HG.do_idle_shutdown_work = True
call_later.Stop()
else:
@ -523,7 +483,7 @@ class Controller( HydrusController.HydrusController ):
def GetGUI( self ):
return self._gui
return self.gui
def GetOptions( self ):
@ -538,9 +498,9 @@ class Controller( HydrusController.HydrusController ):
def GoodTimeToDoForegroundWork( self ):
if self._gui:
if self.gui:
return not self._gui.CurrentlyBusy()
return not self.gui.CurrentlyBusy()
else:
@ -679,7 +639,7 @@ class Controller( HydrusController.HydrusController ):
def wx_code_gui():
self._gui = ClientGUI.FrameGUI( self )
self.gui = ClientGUI.FrameGUI( self )
# this is because of some bug in wx C++ that doesn't add these by default
wx.richtext.RichTextBuffer.AddHandler( wx.richtext.RichTextHTMLHandler() )
@ -711,7 +671,6 @@ class Controller( HydrusController.HydrusController ):
self._daemons.append( HydrusThreading.DAEMONForegroundWorker( self, 'MaintainTrash', ClientDaemons.DAEMONMaintainTrash, init_wait = 120 ) )
self._daemons.append( HydrusThreading.DAEMONForegroundWorker( self, 'SynchroniseRepositories', ClientDaemons.DAEMONSynchroniseRepositories, ( 'notify_restart_repo_sync_daemon', 'notify_new_permissions' ), period = 4 * 3600, pre_call_wait = 1 ) )
self._daemons.append( HydrusThreading.DAEMONBackgroundWorker( self, 'RebalanceClientFiles', ClientDaemons.DAEMONRebalanceClientFiles, period = 3600 ) )
self._daemons.append( HydrusThreading.DAEMONBackgroundWorker( self, 'UPnP', ClientDaemons.DAEMONUPnP, ( 'notify_new_upnp_mappings', ), init_wait = 120, pre_call_wait = 6 ) )
@ -854,9 +813,9 @@ class Controller( HydrusController.HydrusController ):
def PageCompletelyDestroyed( self, page_key ):
if self._gui:
if self.gui:
return self._gui.PageCompletelyDestroyed( page_key )
return self.gui.PageCompletelyDestroyed( page_key )
else:
@ -866,9 +825,9 @@ class Controller( HydrusController.HydrusController ):
def PageClosedButNotDestroyed( self, page_key ):
if self._gui:
if self.gui:
return self._gui.PageClosedButNotDestroyed( page_key )
return self.gui.PageClosedButNotDestroyed( page_key )
else:
@ -988,7 +947,7 @@ class Controller( HydrusController.HydrusController ):
restore_intro = ''
with wx.DirDialog( self._gui, 'Select backup location.' ) as dlg:
with wx.DirDialog( self.gui, 'Select backup location.' ) as dlg:
if dlg.ShowModal() == wx.ID_OK:
@ -1000,13 +959,13 @@ class Controller( HydrusController.HydrusController ):
text += os.linesep * 2
text += 'The gui will shut down, and then it will take a while to complete the restore. Once it is done, the client will restart.'
with ClientGUIDialogs.DialogYesNo( self._gui, text ) as dlg_yn:
with ClientGUIDialogs.DialogYesNo( self.gui, text ) as dlg_yn:
if dlg_yn.ShowModal() == wx.ID_YES:
def THREADRestart():
wx.CallAfter( self._gui.Exit )
wx.CallAfter( self.gui.Exit )
while not self.db.LoopIsFinished():

View File

@ -20,6 +20,7 @@ import HydrusFileHandling
import HydrusGlobals as HG
import HydrusImageHandling
import HydrusNetwork
import HydrusNetworking
import HydrusPaths
import HydrusSerialisable
import HydrusTagArchive
@ -4812,6 +4813,15 @@ class DB( HydrusDB.HydrusDB ):
return ( CC.STATUS_NEW, None )
def _GetHashStatus( self, hash ):
hash_id = self._GetHashId( hash )
( status, hash ) = self._GetHashIdStatus( hash_id )
return status
def _GetHydrusSessions( self ):
now = HydrusData.GetNow()
@ -6230,87 +6240,27 @@ class DB( HydrusDB.HydrusDB ):
def _ImportFile( self, temp_path, import_file_options = None, override_deleted = False ):
def _ImportFile( self, file_import_job ):
if import_file_options is None:
import_file_options = ClientDefaults.GetDefaultImportFileOptions()
( archive, exclude_deleted_files, min_size, min_resolution ) = import_file_options.ToTuple()
HydrusImageHandling.ConvertToPngIfBmp( temp_path )
hash = HydrusFileHandling.GetHashFromPath( temp_path )
hash = file_import_job.GetHash()
hash_id = self._GetHashId( hash )
( status, status_hash ) = self._GetHashIdStatus( hash_id )
if status == CC.STATUS_DELETED:
if status != CC.STATUS_REDUNDANT:
if override_deleted or not exclude_deleted_files:
status = CC.STATUS_NEW
if status == CC.STATUS_REDUNDANT:
if archive:
self._ArchiveFiles( ( hash_id, ) )
self.pub_content_updates_after_commit( { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ARCHIVE, set( ( hash, ) ) ) ] } )
elif status == CC.STATUS_NEW:
( size, mime, width, height, duration, num_frames, num_words ) = HydrusFileHandling.GetFileInfo( temp_path )
if width is not None and height is not None:
if min_resolution is not None:
( min_x, min_y ) = min_resolution
if width < min_x or height < min_y:
raise Exception( 'Resolution too small' )
if min_size is not None:
if size < min_size:
raise Exception( 'File too small' )
( size, mime, width, height, duration, num_frames, num_words ) = file_import_job.GetFileInfo()
timestamp = HydrusData.GetNow()
client_files_manager = self._controller.client_files_manager
phashes = file_import_job.GetPHashes()
if mime in HC.MIMES_WITH_THUMBNAILS:
thumbnail = HydrusFileHandling.GenerateThumbnail( temp_path )
# lockless because this db call is made by the locked client files manager
client_files_manager.LocklessAddFullSizeThumbnail( hash, thumbnail )
if mime in HC.MIMES_WE_CAN_PHASH:
phashes = ClientImageHandling.GenerateShapePerceptualHashes( temp_path )
if phashes is not None:
self._CacheSimilarFilesAssociatePHashes( hash_id, phashes )
# lockless because this db call is made by the locked client files manager
client_files_manager.LocklessAddFile( hash, mime, temp_path )
self._AddFilesInfo( [ ( hash_id, size, mime, width, height, duration, num_frames, num_words ) ], overwrite = True )
self._AddFiles( self._local_file_service_id, [ ( hash_id, timestamp ) ] )
@ -6321,10 +6271,14 @@ class DB( HydrusDB.HydrusDB ):
self.pub_content_updates_after_commit( { CC.LOCAL_FILE_SERVICE_KEY : [ content_update ] } )
( md5, sha1, sha512 ) = HydrusFileHandling.GetExtraHashesFromPath( temp_path )
( md5, sha1, sha512 ) = file_import_job.GetExtraHashes()
self._c.execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
import_file_options = file_import_job.GetImportFileOptions()
( archive, exclude_deleted_files, min_size, min_resolution ) = import_file_options.ToTuple()
if archive:
self._ArchiveFiles( ( hash_id, ) )
@ -6362,7 +6316,7 @@ class DB( HydrusDB.HydrusDB ):
return ( status, hash )
return status
def _ImportUpdate( self, update_network_string, update_hash, mime ):
@ -7885,6 +7839,7 @@ class DB( HydrusDB.HydrusDB ):
elif action == 'file_query_ids': result = self._GetHashIdsFromQuery( *args, **kwargs )
elif action == 'file_system_predicates': result = self._GetFileSystemPredicates( *args, **kwargs )
elif action == 'filter_hashes': result = self._FilterHashes( *args, **kwargs )
elif action == 'hash_status': result = self._GetHashStatus( *args, **kwargs )
elif action == 'hydrus_sessions': result = self._GetHydrusSessions( *args, **kwargs )
elif action == 'imageboards': result = self._GetYAMLDump( YAML_DUMP_ID_IMAGEBOARD, *args, **kwargs )
elif action == 'is_an_orphan': result = self._IsAnOrphan( *args, **kwargs )
@ -9663,6 +9618,17 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'ANALYZE urls;' )
if version == 264:
default_bandwidth_manager = ClientDefaults.GetDefaultBandwidthManager()
bandwidth_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_BANDWIDTH_MANAGER )
bandwidth_manager._network_contexts_to_bandwidth_rules = dict( default_bandwidth_manager._network_contexts_to_bandwidth_rules )
self._SetJSONDump( bandwidth_manager )
self._controller.pub( 'splash_set_title_text', 'updated db to v' + str( version + 1 ) )
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )

View File

@ -1,3 +1,5 @@
import ClientData
import ClientImporting
import ClientThreading
import HydrusConstants as HC
import HydrusData
@ -112,7 +114,16 @@ def DAEMONDownloadFiles( controller ):
controller.WaitUntilPubSubsEmpty()
client_files_manager.ImportFile( temp_path, override_deleted = True )
automatic_archive = False
exclude_deleted = False # this is the important part here
min_size = None
min_resolution = None
import_file_options = ClientData.ImportFileOptions( automatic_archive = automatic_archive, exclude_deleted = exclude_deleted, min_size = min_size, min_resolution = min_resolution )
file_import_job = ClientImporting.FileImportJob( temp_path, import_file_options )
client_files_manager.ImportFile( file_import_job )
successful_hashes.add( hash )
@ -231,10 +242,6 @@ def DAEMONMaintainTrash( controller ):
def DAEMONRebalanceClientFiles( controller ):
controller.client_files_manager.Rebalance()
def DAEMONSaveDirtyObjects( controller ):
controller.SaveDirtyObjects()

View File

@ -1408,6 +1408,21 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
def RemoveClientFilesLocation( self, location ):
with self._lock:
if len( self._dictionary[ 'client_files_locations_ideal_weights' ] ) < 2:
raise Exception( 'Cannot remove any more files locations!' )
portable_location = HydrusPaths.ConvertAbsPathToPortablePath( location )
self._dictionary[ 'client_files_locations_ideal_weights' ] = [ ( l, w ) for ( l, w ) in self._dictionary[ 'client_files_locations_ideal_weights' ] if l != portable_location ]
def SetBoolean( self, name, value ):
with self._lock:
@ -1416,6 +1431,19 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
def SetClientFilesLocation( self, location, weight ):
with self._lock:
portable_location = HydrusPaths.ConvertAbsPathToPortablePath( location )
weight = float( weight )
self._dictionary[ 'client_files_locations_ideal_weights' ] = [ ( l, w ) for ( l, w ) in self._dictionary[ 'client_files_locations_ideal_weights' ] if l != portable_location ]
self._dictionary[ 'client_files_locations_ideal_weights' ].append( ( portable_location, weight ) )
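SetClientFilesLocation above uses a replace-then-append scheme on the (portable_location, weight) list. The helper below restates it standalone; the function name is illustrative:

```python
def set_location_weight(weights, portable_location, weight):
    # drop any existing entry for this location, then append the new one,
    # mirroring the list comprehension in the diff above
    weights = [(l, w) for (l, w) in weights if l != portable_location]
    weights.append((portable_location, float(weight)))
    return weights
```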
def SetClientFilesLocationsToIdealWeights( self, locations_to_weights, resized_thumbnail_override, full_size_thumbnail_override ):
with self._lock:

View File

@ -20,7 +20,7 @@ def GetDefaultBandwidthManager():
rules = HydrusNetworking.BandwidthRules()
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 5 ) # stop accidental spam
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60, 120 ) # smooth out heavy usage. db prob needs a break
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 60, 120 ) # smooth out heavy usage/bugspam. db and gui prob need a break
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 10 * GB ) # check your inbox lad
@ -40,6 +40,8 @@ def GetDefaultBandwidthManager():
rules = HydrusNetworking.BandwidthRules()
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 86400, 50 ) # don't sync a giant db in one day
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 64 * MB ) # don't sync a giant db in one day
bandwidth_manager.SetRules( ClientNetworking.NetworkContext( CC.NETWORK_CONTEXT_HYDRUS ), rules )
@ -54,9 +56,9 @@ def GetDefaultBandwidthManager():
rules = HydrusNetworking.BandwidthRules()
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 600, 60 ) # after that first sample of small files, take it easy
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 100 ) # after that first sample of small files, take it easy
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 600, 256 * MB ) # after that first sample of big files, take it easy
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
bandwidth_manager.SetRules( ClientNetworking.NetworkContext( CC.NETWORK_CONTEXT_DOWNLOADER_QUERY ), rules )
@ -64,7 +66,7 @@ def GetDefaultBandwidthManager():
rules = HydrusNetworking.BandwidthRules()
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 5, 1 ) # be extremely polite
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 86400, 200 ) # catch up on a big sub in little chunks every day
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 256 * MB ) # catch up on a big sub in little chunks every day
@ -72,6 +74,16 @@ def GetDefaultBandwidthManager():
#
rules = HydrusNetworking.BandwidthRules()
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 300, 100 ) # after that first sample of small files, take it easy
rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 128 * MB ) # after that first sample of big files, take it easy
bandwidth_manager.SetRules( ClientNetworking.NetworkContext( CC.NETWORK_CONTEXT_THREAD_WATCHER_THREAD ), rules )
#
return bandwidth_manager
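The AddRule calls above take (bandwidth_type, time_delta in seconds, max allowed). A toy tracker showing how such rules might gate requests; AddRule's signature matches the calls in the diff, but the usage tracking and the constant values are assumptions:

```python
import time

BANDWIDTH_TYPE_DATA = 0
BANDWIDTH_TYPE_REQUESTS = 1


class BandwidthRulesSketch(object):

    def __init__(self):
        self._rules = []
        self._usage = []  # (timestamp, num_bytes) per completed request

    def AddRule(self, bandwidth_type, time_delta, max_allowed):
        self._rules.append((bandwidth_type, time_delta, max_allowed))

    def RecordRequest(self, num_bytes, now=None):
        self._usage.append((now if now is not None else time.time(), num_bytes))

    def CanStartRequest(self, now=None):
        now = now if now is not None else time.time()
        for (bandwidth_type, time_delta, max_allowed) in self._rules:
            recent = [n for (t, n) in self._usage if t > now - time_delta]
            # REQUESTS rules count requests; DATA rules count bytes
            used = len(recent) if bandwidth_type == BANDWIDTH_TYPE_REQUESTS else sum(recent)
            if used >= max_allowed:
                return False
        return True
```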
def GetClientDefaultOptions():

View File

@ -228,190 +228,6 @@ def GetYoutubeFormats( youtube_url ):
return info
def THREADDownloadURL( job_key, url, url_string ):
job_key.SetVariable( 'popup_title', url_string )
job_key.SetVariable( 'popup_text_1', 'initialising' )
( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
try:
response = ClientNetworking.RequestsGet( url, stream = True )
with open( temp_path, 'wb' ) as f:
ClientNetworking.StreamResponseToFile( job_key, response, f )
job_key.SetVariable( 'popup_text_1', 'importing' )
client_files_manager = HG.client_controller.client_files_manager
( result, hash ) = client_files_manager.ImportFile( temp_path )
except HydrusExceptions.CancelledException:
return
except HydrusExceptions.NetworkException:
job_key.Cancel()
raise
finally:
HydrusPaths.CleanUpTempPath( os_file_handle, temp_path )
if result in ( CC.STATUS_SUCCESSFUL, CC.STATUS_REDUNDANT ):
if result == CC.STATUS_SUCCESSFUL:
job_key.SetVariable( 'popup_text_1', 'successful!' )
else:
job_key.SetVariable( 'popup_text_1', 'was already in the database!' )
job_key.SetVariable( 'popup_files', { hash } )
elif result == CC.STATUS_DELETED:
job_key.SetVariable( 'popup_text_1', 'had already been deleted!' )
job_key.Finish()
def THREADDownloadURLs( job_key, urls, title ):
job_key.SetVariable( 'popup_title', title )
job_key.SetVariable( 'popup_text_1', 'initialising' )
num_successful = 0
num_redundant = 0
num_deleted = 0
num_failed = 0
successful_hashes = set()
for ( i, url ) in enumerate( urls ):
( i_paused, should_quit ) = job_key.WaitIfNeeded()
if should_quit:
break
job_key.SetVariable( 'popup_text_1', HydrusData.ConvertValueRangeToPrettyString( i + 1, len( urls ) ) )
job_key.SetVariable( 'popup_gauge_1', ( i + 1, len( urls ) ) )
( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
try:
try:
response = ClientNetworking.RequestsGet( url, stream = True )
with open( temp_path, 'wb' ) as f:
ClientNetworking.StreamResponseToFile( job_key, response, f )
except HydrusExceptions.CancelledException:
return
except HydrusExceptions.NetworkException:
job_key.Cancel()
raise
try:
job_key.SetVariable( 'popup_text_2', 'importing' )
client_files_manager = HG.client_controller.client_files_manager
( result, hash ) = client_files_manager.ImportFile( temp_path )
except Exception as e:
job_key.DeleteVariable( 'popup_text_2' )
HydrusData.Print( url + ' failed to import!' )
HydrusData.PrintException( e )
num_failed += 1
continue
finally:
HydrusPaths.CleanUpTempPath( os_file_handle, temp_path )
if result in ( CC.STATUS_SUCCESSFUL, CC.STATUS_REDUNDANT ):
if result == CC.STATUS_SUCCESSFUL:
num_successful += 1
else:
num_redundant += 1
successful_hashes.add( hash )
elif result == CC.STATUS_DELETED:
num_deleted += 1
text_components = []
if num_successful > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_successful ) + ' successful' )
if num_redundant > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_redundant ) + ' already in db' )
if num_deleted > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_deleted ) + ' deleted' )
if num_failed > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_failed ) + ' failed (errors written to log)' )
job_key.SetVariable( 'popup_text_1', ', '.join( text_components ) )
if len( successful_hashes ) > 0:
job_key.SetVariable( 'popup_files', successful_hashes )
job_key.DeleteVariable( 'popup_gauge_1' )
job_key.DeleteVariable( 'popup_text_2' )
job_key.DeleteVariable( 'popup_gauge_2' )
job_key.Finish()
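The popup summary assembled at the end of `THREADDownloadURLs` can be restated standalone: only nonzero tallies contribute, joined with commas. A small sketch of that logic (counts are illustrative):

```python
def summarize( num_successful, num_redundant, num_deleted, num_failed ):
    # mirror the text_components logic above: skip zero counters
    parts = []
    if num_successful > 0:
        parts.append( str( num_successful ) + ' successful' )
    if num_redundant > 0:
        parts.append( str( num_redundant ) + ' already in db' )
    if num_deleted > 0:
        parts.append( str( num_deleted ) + ' deleted' )
    if num_failed > 0:
        parts.append( str( num_failed ) + ' failed (errors written to log)' )
    return ', '.join( parts )

print( summarize( 3, 1, 0, 0 ) )  # 3 successful, 1 already in db
```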
def Parse4chanPostScreen( html ):
soup = GetSoup( html )
@@ -1796,7 +1612,7 @@ class GalleryTumblr( Gallery ):
username = query
return 'http://' + username + '.tumblr.com/api/read/json?start=' + str( page_index * 50 ) + '&num=50'
return 'https://' + username + '.tumblr.com/api/read/json?start=' + str( page_index * 50 ) + '&num=50'
def _ParseGalleryPage( self, data, url_base ):
@@ -1890,12 +1706,22 @@ class GalleryTumblr( Gallery ):
url = photo[ 'photo-url-1280' ]
if raw_url_available:
url = ConvertRegularToRawURL( url )
# some urls are given in the form:
# https://68.media.tumblr.com/tumblr_m5yb5m2O6A1rso2eyo1_540.jpg
# which is missing the hex key in the middle
# these urls are unavailable as raws from the main media server
# these seem to all be the pre-2013 files, but we'll double-check just in case anyway
unusual_hexless_url = url.count( '/' ) == 3
url = Remove68Subdomain( url )
if not unusual_hexless_url:
if raw_url_available:
url = ConvertRegularToRawURL( url )
url = Remove68Subdomain( url )
url = ClientData.ConvertHTTPToHTTPS( url )
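The comment block above notes that pre-2013 tumblr URLs lack the hex key directory, which the new code detects by counting slashes. A sketch of that check, with a hypothetical stand-in for `Remove68Subdomain` (the real helper's behavior is assumed here):

```python
def remove_68_subdomain( url ):
    # hypothetical stand-in: swap the '68.media' CDN host for the plain media host
    return url.replace( '//68.media.tumblr.com', '//media.tumblr.com' )

url = 'https://68.media.tumblr.com/tumblr_m5yb5m2O6A1rso2eyo1_540.jpg'

# a keyed url has a hex directory between host and filename, giving four
# slashes; the old hexless form has only three and cannot be fetched as a raw
unusual_hexless_url = url.count( '/' ) == 3

print( unusual_hexless_url )  # True
```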


@@ -72,10 +72,10 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
self.SetDropTarget( ClientDragDrop.FileDropTarget( self.ImportFiles, self.ImportURL ) )
bandwidth_width = ClientData.ConvertTextToPixelWidth( self, 7 )
idle_width = ClientData.ConvertTextToPixelWidth( self, 4 )
system_busy_width = ClientData.ConvertTextToPixelWidth( self, 11 )
db_width = ClientData.ConvertTextToPixelWidth( self, 12 )
bandwidth_width = ClientData.ConvertTextToPixelWidth( self, 9 )
idle_width = ClientData.ConvertTextToPixelWidth( self, 6 )
system_busy_width = ClientData.ConvertTextToPixelWidth( self, 13 )
db_width = ClientData.ConvertTextToPixelWidth( self, 14 )
self._statusbar = self.CreateStatusBar()
self._statusbar.SetFieldsCount( 5 )
@@ -531,6 +531,51 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
def _BackupDatabase( self ):
path = self._new_options.GetNoneableString( 'backup_path' )
if path is None:
wx.MessageBox( 'No backup path is set!' )
return
if not os.path.exists( path ):
wx.MessageBox( 'The backup path does not exist--creating it now.' )
HydrusPaths.MakeSureDirectoryExists( path )
client_db_path = os.path.join( path, 'client.db' )
if os.path.exists( client_db_path ):
action = 'Update the existing'
else:
action = 'Create a new'
text = action + ' backup at "' + path + '"?'
text += os.linesep * 2
text += 'The database will be locked while the backup occurs, which may lock up your gui as well.'
with ClientGUIDialogs.DialogYesNo( self, text ) as dlg_yn:
if dlg_yn.ShowModal() == wx.ID_YES:
self._SaveGUISession( 'last session' )
# session save causes a db read in the menu refresh, so let's put this off just a bit
wx.CallLater( 1500, self._controller.Write, 'backup', path )
def _BackupService( self, service_key ):
def do_it():
@@ -1201,7 +1246,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
else:
ClientGUIMenus.AppendMenuItem( self, menu, 'update database backup', 'Back the database up to an external location.', self._controller.BackupDatabase )
ClientGUIMenus.AppendMenuItem( self, menu, 'update database backup', 'Back the database up to an external location.', self._BackupDatabase )
ClientGUIMenus.AppendMenuItem( self, menu, 'change database backup location', 'Choose a path to back the database up to.', self._SetupBackupPath )
@@ -1219,7 +1264,6 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
ClientGUIMenus.AppendMenuItem( self, submenu, 'vacuum', 'Defrag the database by completely rebuilding it.', self._VacuumDatabase )
ClientGUIMenus.AppendMenuItem( self, submenu, 'analyze', 'Optimise slow queries by running statistical analyses on the database.', self._AnalyzeDatabase )
ClientGUIMenus.AppendMenuItem( self, submenu, 'rebalance file storage', 'Move your files around your chosen storage directories until they satisfy the weights you have set in the options.', self._RebalanceClientFiles )
ClientGUIMenus.AppendMenuItem( self, submenu, 'clear orphans', 'Clear out surplus files that have found their way into the file structure.', self._ClearOrphans )
ClientGUIMenus.AppendMenu( menu, submenu, 'maintain' )
@@ -1946,11 +1990,14 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
def _MigrateDatabase( self ):
frame = ClientGUITopLevelWindows.FrameThatTakesScrollablePanel( self, 'migrate database' )
panel = ClientGUIScrolledPanelsReview.MigrateDatabasePanel( frame, self._controller )
frame.SetPanel( panel )
with ClientGUITopLevelWindows.DialogNullipotent( self, 'migrate database' ) as dlg:
panel = ClientGUIScrolledPanelsReview.MigrateDatabasePanel( dlg, self._controller )
dlg.SetPanel( panel )
dlg.ShowModal()
def _ModifyAccount( self, service_key ):
@@ -2017,6 +2064,18 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
def _NewPage( self, page_name, management_controller, initial_hashes = None, forced_insertion_index = None ):
if self._notebook.GetPageCount() + len( self._closed_pages ) >= 128:
self._DeleteAllClosedPages()
if self._notebook.GetPageCount() >= 128:
HydrusData.ShowText( 'The client cannot have more than 128 pages open! For system stability reasons, please close some now!' )
return
self._controller.ResetIdleTimer()
self._controller.ResetPageChangeTimer()
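The 128-page guard in `_NewPage` above first purges closed-but-recoverable pages, since they count toward the cap, and only then refuses a new page. A standalone restatement of that check (the cap value is taken from the code; the helper names are illustrative):

```python
MAX_PAGES = 128

def may_open_new_page( num_open_pages, num_closed_pages ):
    # closed pages held for 'undo close' still consume resources, so they are
    # cleared first; afterwards only truly open pages can block a new one
    if num_open_pages + num_closed_pages >= MAX_PAGES:
        num_closed_pages = 0  # stands in for _DeleteAllClosedPages
    return num_open_pages < MAX_PAGES

print( may_open_new_page( 100, 30 ) )  # True: clearing closed pages frees room
print( may_open_new_page( 128, 0 ) )   # False: hard cap reached
```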
@@ -2273,21 +2332,6 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
return shortcut_processed
def _RebalanceClientFiles( self ):
text = 'This will move your files around your storage directories until they satisfy the weights you have set in the options. It will also recover any folders that are in the wrong place. Use this if you have recently changed your file storage locations and want to hurry any transfers you have set up, or if you are recovering a complicated backup.'
text += os.linesep * 2
text += 'The operation will lock file access and the database. Popup messages will report its progress.'
with ClientGUIDialogs.DialogYesNo( self, text ) as dlg:
if dlg.ShowModal() == wx.ID_YES:
self._controller.CallToThread( self._controller.client_files_manager.Rebalance, partial = False )
def _Refresh( self ):
page = self._notebook.GetCurrentPage()
@@ -2631,7 +2675,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
if dlg_yn_2.ShowModal() == wx.ID_YES:
self._controller.BackupDatabase()
self._BackupDatabase()


@@ -349,7 +349,8 @@ class AutoCompleteDropdown( wx.Panel ):
else:
self._dropdown_list.ProcessEvent( event ) # this typically skips the event, letting the text ctrl take it
# Don't say processevent here--it duplicates the event processing at higher levels, leading to 2 x F9, for instance
self._dropdown_list.EventCharHook( event ) # this typically skips the event, letting the text ctrl take it
else:


@@ -488,17 +488,23 @@ class NetworkJobControl( wx.Panel ):
def ClearNetworkJob( self ):
self._Update()
self._network_job = None
self._move_hide_timer.Start( 250, wx.TIMER_CONTINUOUS )
if self:
self._Update()
self._network_job = None
self._move_hide_timer.Start( 250, wx.TIMER_CONTINUOUS )
def SetNetworkJob( self, network_job ):
self._network_job = network_job
self._download_started = False
if self:
self._network_job = network_job
self._download_started = False
def TIMEREventUpdate( self, event ):


@@ -14,6 +14,7 @@ import ClientGUICollapsible
import ClientGUIListBoxes
import ClientGUIPredicates
import ClientGUITopLevelWindows
import ClientImporting
import ClientThreading
import collections
import gc
@@ -3374,7 +3375,7 @@ class DialogSelectYoutubeURL( Dialog ):
job_key = ClientThreading.JobKey( pausable = True, cancellable = True )
HG.client_controller.CallToThread( ClientDownloading.THREADDownloadURL, job_key, url, url_string )
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURL, job_key, url, url_string )
HG.client_controller.pub( 'message', job_key )


@@ -705,11 +705,25 @@ class ManagementPanel( wx.lib.scrolledpanel.ScrolledPanel ):
sizer.AddF( tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )
def CleanBeforeDestroy( self ): pass
def CleanBeforeDestroy( self ):
pass
def SetSearchFocus( self, page_key ): pass
def SetSearchFocus( self, page_key ):
pass
def TestAbleToClose( self ): pass
def Start( self ):
pass
def TestAbleToClose( self ):
pass
class ManagementPanelDuplicateFilter( ManagementPanel ):
@@ -1453,8 +1467,6 @@ class ManagementPanelImporterGallery( ManagementPanelImporter ):
self._UpdateStatus()
self._gallery_import.Start( self._page_key )
def _SeedCache( self ):
@@ -1725,6 +1737,11 @@ class ManagementPanelImporterGallery( ManagementPanelImporter ):
def Start( self ):
self._gallery_import.Start( self._page_key )
management_panel_types_to_classes[ MANAGEMENT_TYPE_IMPORT_GALLERY ] = ManagementPanelImporterGallery
class ManagementPanelImporterHDD( ManagementPanelImporter ):
@@ -1775,8 +1792,6 @@ class ManagementPanelImporterHDD( ManagementPanelImporter ):
self._UpdateStatus()
self._hdd_import.Start( self._page_key )
def _SeedCache( self ):
@@ -1859,6 +1874,11 @@ class ManagementPanelImporterHDD( ManagementPanelImporter ):
self._UpdateStatus()
def Start( self ):
self._hdd_import.Start( self._page_key )
def TestAbleToClose( self ):
( ( overall_status, ( overall_value, overall_range ) ), paused ) = self._hdd_import.GetStatus()
@@ -2003,8 +2023,6 @@ class ManagementPanelImporterPageOfImages( ManagementPanelImporter ):
self._UpdateStatus()
self._page_of_images_import.Start( self._page_key )
def _SeedCache( self ):
@@ -2228,6 +2246,11 @@ class ManagementPanelImporterPageOfImages( ManagementPanelImporter ):
if page_key == self._page_key: self._page_url_input.SetFocus()
def Start( self ):
self._page_of_images_import.Start( self._page_key )
management_panel_types_to_classes[ MANAGEMENT_TYPE_IMPORT_PAGE_OF_IMAGES ] = ManagementPanelImporterPageOfImages
class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):
@@ -2356,11 +2379,6 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):
self._thread_times_to_check.SetValue( times_to_check )
self._thread_check_period.SetValue( check_period )
if self._thread_watcher_import.HasThread():
self._thread_watcher_import.Start( self._page_key )
self._UpdateStatus()
@@ -2572,6 +2590,14 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):
def Start( self ):
if self._thread_watcher_import.HasThread():
self._thread_watcher_import.Start( self._page_key )
def TestAbleToClose( self ):
if self._thread_watcher_import.HasThread():
@@ -2666,8 +2692,6 @@ class ManagementPanelImporterURLs( ManagementPanelImporter ):
self._UpdateStatus()
self._urls_import.Start( self._page_key )
HG.client_controller.sub( self, 'SetURLInput', 'set_page_url_input' )
@@ -2837,6 +2861,11 @@ class ManagementPanelImporterURLs( ManagementPanelImporter ):
def Start( self ):
self._urls_import.Start( self._page_key )
management_panel_types_to_classes[ MANAGEMENT_TYPE_IMPORT_URLS ] = ManagementPanelImporterURLs
class ManagementPanelPetitions( ManagementPanel ):
@@ -2958,8 +2987,6 @@ class ManagementPanelPetitions( ManagementPanel ):
self.SetSizer( vbox )
wx.CallAfter( self._FetchNumPetitions )
self._controller.sub( self, 'RefreshQuery', 'refresh_query' )
@@ -3325,6 +3352,11 @@ class ManagementPanelPetitions( ManagementPanel ):
if page_key == self._page_key: self._DrawCurrentPetition()
def Start( self ):
wx.CallAfter( self._FetchNumPetitions )
management_panel_types_to_classes[ MANAGEMENT_TYPE_PETITIONS ] = ManagementPanelPetitions
class ManagementPanelQuery( ManagementPanel ):
@@ -3365,11 +3397,6 @@ class ManagementPanelQuery( ManagementPanel ):
self.SetSizer( vbox )
if len( initial_predicates ) > 0 and not file_search_context.IsComplete():
wx.CallAfter( self._DoQuery )
self._controller.sub( self, 'AddMediaResultsFromQuery', 'add_media_results_from_query' )
self._controller.sub( self, 'SearchImmediately', 'notify_search_immediately' )
self._controller.sub( self, 'ShowQuery', 'file_query_done' )
@@ -3519,5 +3546,17 @@ class ManagementPanelQuery( ManagementPanel ):
self._controller.pub( 'swap_media_panel', self._page_key, panel )
def Start( self ):
file_search_context = self._management_controller.GetVariable( 'file_search_context' )
initial_predicates = file_search_context.GetPredicates()
if len( initial_predicates ) > 0 and not file_search_context.IsComplete():
wx.CallAfter( self._DoQuery )
management_panel_types_to_classes[ MANAGEMENT_TYPE_QUERY ] = ManagementPanelQuery


@@ -0,0 +1,58 @@
import wx
import matplotlib
matplotlib.use( 'WXagg' )
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg
from matplotlib.figure import Figure
class BarChartBandwidthHistory( FigureCanvasWxAgg ):
def __init__( self, parent, monthly_usage ):
divisor = 1.0
unit = 'B'
highest_usage = max( ( m[1] for m in monthly_usage ) )
lines = [ ( 1073741824.0, 'GB' ), ( 1048576.0, 'MB' ), ( 1024.0, 'KB' ) ]
for ( line_divisor, line_unit ) in lines:
if highest_usage > line_divisor:
divisor = line_divisor
unit = line_unit
break
monthly_usage = [ ( month_str, month_value / divisor ) for ( month_str, month_value ) in monthly_usage ]
( r, g, b ) = wx.SystemSettings.GetColour( wx.SYS_COLOUR_FRAMEBK ).Get()
facecolor = '#' + chr( r ).encode( 'hex' ) + chr( g ).encode( 'hex' ) + chr( b ).encode( 'hex' )
figure = Figure( figsize = ( 6.4, 4.8 ), dpi = 80, facecolor = facecolor, edgecolor = facecolor )
axes = figure.add_subplot( 111 )
x_indices = range( len( monthly_usage ) )
axes.bar( x_indices, [ i[1] for i in monthly_usage ] )
axes.set_xticks( x_indices )
axes.set_xticklabels( [ i[0] for i in monthly_usage ] )
axes.grid( True )
axes.set_axisbelow( True )
axes.set_ylabel( '(' + unit + ')' )
axes.set_title( 'historical usage' )
FigureCanvasWxAgg.__init__( self, parent, -1, figure )
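The divisor/unit selection at the top of this new bar-chart class picks the largest unit that the peak monthly usage exceeds, falling back to plain bytes. The same loop, restated standalone for clarity:

```python
def pick_unit( highest_usage ):
    # walk from the largest unit down; first one the peak exceeds wins
    for ( divisor, unit ) in ( ( 1073741824.0, 'GB' ), ( 1048576.0, 'MB' ), ( 1024.0, 'KB' ) ):
        if highest_usage > divisor:
            return ( divisor, unit )
    return ( 1.0, 'B' )  # tiny usage stays in bytes

print( pick_unit( 5 * 1048576 ) )  # ( 1048576.0, 'MB' )
print( pick_unit( 512 ) )          # ( 1.0, 'B' )
```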


@@ -1627,9 +1627,10 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledWindow ):
def Sort( self, page_key, sort_by = None ):
if page_key == self._page_key: ClientMedia.ListeningMediaList.Sort( self, sort_by )
HG.client_controller.pub( 'sorted_media_pulse', self._page_key, self._sorted_media )
if page_key == self._page_key:
ClientMedia.ListeningMediaList.Sort( self, sort_by )
class MediaPanelLoading( MediaPanel ):
@@ -1666,13 +1667,13 @@ class MediaPanelLoading( MediaPanel ):
return []
def SetNumQueryResults( self, page_key, current, max ):
def SetNumQueryResults( self, page_key, num_current, num_max ):
if page_key == self._page_key:
self._current = current
self._current = num_current
self._max = max
self._max = num_max
self._PublishSelectionChange()
@@ -2200,8 +2201,6 @@ class MediaPanelThumbnails( MediaPanel ):
self._PublishSelectionChange()
HG.client_controller.pub( 'sorted_media_pulse', self._page_key, self._sorted_media )
self.Refresh()


@@ -77,6 +77,8 @@ class Page( wx.SplitterWindow ):
self._initialised = True
self._management_panel.Start()
def _SetPrettyStatus( self, status ):
@@ -254,6 +256,8 @@ class Page( wx.SplitterWindow ):
self._initialised = True
self._initial_hashes = []
self._management_panel.Start()
def SetPrettyStatus( self, page_key, status ):


@@ -137,6 +137,26 @@ class EditAccountTypePanel( ClientGUIScrolledPanels.EditPanel ):
return HydrusNetwork.AccountType.GenerateAccountTypeFromParameters( self._account_type_key, title, permissions, bandwidth_rules )
class EditBandwidthRulesPanel( ClientGUIScrolledPanels.EditPanel ):
def __init__( self, parent, bandwidth_rules ):
ClientGUIScrolledPanels.EditPanel.__init__( self, parent )
self._bandwidth_rules_ctrl = ClientGUIControls.BandwidthRulesCtrl( self, bandwidth_rules )
vbox = wx.BoxSizer( wx.VERTICAL )
vbox.AddF( self._bandwidth_rules_ctrl, CC.FLAGS_EXPAND_BOTH_WAYS )
self.SetSizer( vbox )
def GetValue( self ):
return self._bandwidth_rules_ctrl.GetValue()
class EditDuplicateActionOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
def __init__( self, parent, duplicate_action, duplicate_action_options ):


@@ -1230,7 +1230,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._listbook.AddPage( 'default tag import options', 'default tag import options', self._DefaultTagImportOptionsPanel( self._listbook, self._new_options ) )
self._listbook.AddPage( 'colours', 'colours', self._ColoursPanel( self._listbook ) )
self._listbook.AddPage( 'sort/collect', 'sort/collect', self._SortCollectPanel( self._listbook ) )
self._listbook.AddPage( 'file storage locations', 'file storage locations', self._ClientFilesPanel( self._listbook ) )
#self._listbook.AddPage( 'file storage locations', 'file storage locations', self._ClientFilesPanel( self._listbook ) )
self._listbook.AddPage( 'downloading', 'downloading', self._DownloadingPanel( self._listbook, self._new_options ) )
self._listbook.AddPage( 'tags', 'tags', self._TagsPanel( self._listbook, self._new_options ) )


@@ -4,8 +4,10 @@ import ClientGUICommon
import ClientGUIDialogs
import ClientGUIFrames
import ClientGUIScrolledPanels
import ClientGUIScrolledPanelsEdit
import ClientGUIPanels
import ClientGUITopLevelWindows
import ClientNetworking
import ClientTags
import ClientThreading
import collections
@@ -19,6 +21,17 @@ import traceback
import webbrowser
import wx
try:
import ClientGUIMatPlotLib
MATPLOTLIB_OK = True
except ImportError:
MATPLOTLIB_OK = False
class AdvancedContentUpdatePanel( ClientGUIScrolledPanels.ReviewPanel ):
COPY = 0
@@ -236,7 +249,10 @@ class AdvancedContentUpdatePanel( ClientGUIScrolledPanels.ReviewPanel ):
with ClientGUIDialogs.DialogYesNo( self, 'Are you sure?' ) as dlg:
if dlg.ShowModal() != wx.ID_YES: return
if dlg.ShowModal() != wx.ID_YES:
return
if action == self.COPY:
@@ -286,35 +302,108 @@ class ReviewAllBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
ClientGUIScrolledPanels.ReviewPanel.__init__( self, parent )
self._bandwidths = ClientGUICommon.SaneListCtrlForSingleObject( self, 360, [ ( 'context', -1 ), ( 'context type', 100 ), ( 'current usage', 100 ), ( 'past 24 hours', 100 ), ( 'this month', 100 ) ], activation_callback = self.ShowNetworkContext )
self._history_time_delta_threshold = ClientGUICommon.TimeDeltaButton( self, days = True, hours = True, minutes = True, seconds = True )
self._history_time_delta_threshold.Bind( ClientGUICommon.EVT_TIME_DELTA, self.EventTimeDeltaChanged )
self._bandwidths.SetMinSize( ( 640, 360 ) )
self._history_time_delta_none = wx.CheckBox( self, label = 'show all' )
self._history_time_delta_none.Bind( wx.EVT_CHECKBOX, self.EventTimeDeltaChanged )
# a button/checkbox to say 'show only those with data in the past 30 days'
# a button to say 'delete all record of this context'
self._bandwidths = ClientGUICommon.SaneListCtrlForSingleObject( self, 360, [ ( 'name', -1 ), ( 'type', 100 ), ( 'current usage', 100 ), ( 'past 24 hours', 100 ), ( 'this month', 100 ), ( 'has specific rules', 120 ) ], activation_callback = self.ShowNetworkContext )
self._bandwidths.SetMinSize( ( 740, 360 ) )
self._edit_default_bandwidth_rules_button = ClientGUICommon.BetterButton( self, 'edit default bandwidth rules', self._EditDefaultBandwidthRules )
default_rules_help_button = ClientGUICommon.BetterBitmapButton( self, CC.GlobalBMPs.help, self._ShowDefaultRulesHelp )
default_rules_help_button.SetToolTipString( 'Show help regarding default bandwidth rules.' )
self._delete_record_button = ClientGUICommon.BetterButton( self, 'delete selected history', self._DeleteNetworkContexts )
#
for ( network_context, bandwidth_tracker ) in self._controller.network_engine.bandwidth_manager.GetNetworkContextsAndBandwidthTrackersForUser():
( display_tuple, sort_tuple ) = self._GetTuples( network_context, bandwidth_tracker )
self._bandwidths.Append( display_tuple, sort_tuple, network_context )
self._history_time_delta_threshold.SetValue( 86400 * 30 )
self._bandwidths.SortListItems( 0 )
self._update_timer = wx.Timer( self )
self.Bind( wx.EVT_TIMER, self.TIMEREventUpdate )
self._Update()
#
hbox = wx.BoxSizer( wx.HORIZONTAL )
hbox.AddF( ClientGUICommon.BetterStaticText( self, 'Show network contexts with usage in the past: ' ), CC.FLAGS_VCENTER )
hbox.AddF( self._history_time_delta_threshold, CC.FLAGS_EXPAND_BOTH_WAYS )
hbox.AddF( self._history_time_delta_none, CC.FLAGS_VCENTER )
vbox = wx.BoxSizer( wx.VERTICAL )
button_hbox = wx.BoxSizer( wx.HORIZONTAL )
button_hbox.AddF( self._edit_default_bandwidth_rules_button, CC.FLAGS_VCENTER )
button_hbox.AddF( default_rules_help_button, CC.FLAGS_VCENTER )
button_hbox.AddF( self._delete_record_button, CC.FLAGS_VCENTER )
vbox.AddF( hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
vbox.AddF( self._bandwidths, CC.FLAGS_EXPAND_BOTH_WAYS )
vbox.AddF( button_hbox, CC.FLAGS_BUTTON_SIZER )
self.SetSizer( vbox )
def _DeleteNetworkContexts( self ):
selected_network_contexts = self._bandwidths.GetObjects( only_selected = True )
if len( selected_network_contexts ) > 0:
with ClientGUIDialogs.DialogYesNo( self, 'Are you sure? This will delete all bandwidth records for the selected network contexts.' ) as dlg:
if dlg.ShowModal() == wx.ID_YES:
self._controller.network_engine.bandwidth_manager.DeleteHistory( selected_network_contexts )
self._Update()
def _EditDefaultBandwidthRules( self ):
network_contexts_and_bandwidth_rules = self._controller.network_engine.bandwidth_manager.GetDefaultRules()
choice_tuples = [ ( network_context.ToUnicode() + ' (' + str( len( bandwidth_rules.GetRules() ) ) + ' rules)', ( network_context, bandwidth_rules ) ) for ( network_context, bandwidth_rules ) in network_contexts_and_bandwidth_rules ]
with ClientGUIDialogs.DialogSelectFromList( self, 'select network context', choice_tuples ) as dlg:
if dlg.ShowModal() == wx.ID_OK:
( network_context, bandwidth_rules ) = dlg.GetChoice()
with ClientGUITopLevelWindows.DialogEdit( self, 'edit bandwidth rules for ' + network_context.ToUnicode() ) as dlg_2:
panel = ClientGUIScrolledPanelsEdit.EditBandwidthRulesPanel( dlg_2, bandwidth_rules )
dlg_2.SetPanel( panel )
if dlg_2.ShowModal() == wx.ID_OK:
bandwidth_rules = panel.GetValue()
self._controller.network_engine.bandwidth_manager.SetRules( network_context, bandwidth_rules )
def _GetTuples( self, network_context, bandwidth_tracker ):
has_rules = not self._controller.network_engine.bandwidth_manager.UsesDefaultRules( network_context )
sortable_network_context = ( network_context.context_type, network_context.context_data )
sortable_context_type = CC.network_context_type_string_lookup[ network_context.context_type ]
current_usage = bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_DATA, 1 )
@@ -336,7 +425,88 @@ class ReviewAllBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
pretty_day_usage = HydrusData.ConvertIntToBytes( day_usage )
pretty_month_usage = HydrusData.ConvertIntToBytes( month_usage )
return ( ( pretty_network_context, pretty_context_type, pretty_current_usage, pretty_day_usage, pretty_month_usage ), ( sortable_network_context, sortable_context_type, current_usage, day_usage, month_usage ) )
if has_rules:
pretty_has_rules = 'yes'
else:
pretty_has_rules = ''
return ( ( pretty_network_context, pretty_context_type, pretty_current_usage, pretty_day_usage, pretty_month_usage, pretty_has_rules ), ( sortable_network_context, sortable_context_type, current_usage, day_usage, month_usage, has_rules ) )
def _ShowDefaultRulesHelp( self ):
help = 'Network requests act in multiple contexts. Most use the \'global\' and \'web domain\' network contexts, but a hydrus server request, for instance, will add its own service-specific context, and a subscription will add both itself and its downloader.'
help += os.linesep * 2
help += 'If a network context does not have some specific rules set up, it will use its respective default, which may or may not have rules of its own. If you want to set general policy, like "Never download more than 1GB/day from any individual website," or "Limit the entire client to 2MB/s," do it through \'global\' and these defaults.'
help += os.linesep * 2
help += 'All contexts\' rules are consulted and have to pass before a request can do work. If you set a 200KB/s limit on a website domain and a 50KB/s limit on global, your download will only ever run at 50KB/s. To make sense, network contexts with broader scope should have more lenient rules.'
help += os.linesep * 2
help += 'There are two special \'instance\' contexts, for downloaders and threads. These represent individual queries, either a single gallery search or a single watched thread. It is useful to set rules for these so your searches will gather a fast initial sample of results in the first few minutes--so you can make sure you are happy with them--but otherwise trickle the rest in over time. This keeps your CPU and other bandwidth limits less hammered and helps to avoid accidental downloads of many thousands of small bad files or a few hundred gigantic files all in one go.'
help += os.linesep * 2
help += 'If you do not understand what is going on here, you can safely leave it alone. The default settings make for a _reasonable_ and polite profile that will not accidentally cause you to download way too much in one go or piss off servers by being too aggressive. The simplest way of throttling your client is by editing the rules for the global context.'
wx.MessageBox( help )
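The help text above says all applicable contexts' rules are consulted and must pass, so the effective speed is whatever the most restrictive context allows. A toy illustration of that minimum-wins behavior (the numbers and dict shape are illustrative, not hydrus's real accounting):

```python
def effective_rate( contexts_to_limits ):
    # a request only proceeds as fast as its most restrictive context allows
    return min( contexts_to_limits.values() )

limits = {
    'global' : 50 * 1024,       # 50KB/s cap on the whole client
    'web domain' : 200 * 1024,  # 200KB/s cap on this particular website
}

print( effective_rate( limits ) )  # 51200, i.e. the 50KB/s global cap wins
```

This is why, as the help notes, broader-scope contexts should carry the more lenient rules: a tight global cap silently overrides every per-domain allowance.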
def _Update( self ):
( sort_col, sort_asc ) = self._bandwidths.GetSortState()
selected_network_contexts = self._bandwidths.GetObjects( only_selected = True )
self._bandwidths.DeleteAllItems()
if self._history_time_delta_none.GetValue() == True:
history_time_delta_threshold = None
else:
history_time_delta_threshold = self._history_time_delta_threshold.GetValue()
network_contexts_and_bandwidth_trackers = self._controller.network_engine.bandwidth_manager.GetNetworkContextsAndBandwidthTrackersForUser( history_time_delta_threshold )
for ( index, ( network_context, bandwidth_tracker ) ) in enumerate( network_contexts_and_bandwidth_trackers ):
( display_tuple, sort_tuple ) = self._GetTuples( network_context, bandwidth_tracker )
self._bandwidths.Append( display_tuple, sort_tuple, network_context )
if network_context in selected_network_contexts:
self._bandwidths.Select( index )
self._bandwidths.SortListItems( sort_col, sort_asc )
timer_duration_s = max( len( network_contexts_and_bandwidth_trackers ), 20 )
self._update_timer.Start( 1000 * timer_duration_s, wx.TIMER_ONE_SHOT )
def EventTimeDeltaChanged( self, event ):
if self._history_time_delta_none.GetValue() == True:
self._history_time_delta_threshold.Disable()
else:
self._history_time_delta_threshold.Enable()
self._Update()
def TIMEREventUpdate( self, event ):
self._Update()
def ShowNetworkContext( self ):
@@ -361,14 +531,19 @@ class ReviewNetworkContextBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
self._network_context = network_context
self._bandwidth_rules = self._controller.network_engine.bandwidth_manager.GetRules( self._network_context )
self._bandwidth_tracker = self._controller.network_engine.bandwidth_manager.GetTracker( self._network_context )
self._last_fetched_rule_rows = set()
#
info_panel = ClientGUICommon.StaticBox( self, 'description' )
description = CC.network_context_type_description_lookup[ self._network_context.context_type ]
self._name = ClientGUICommon.BetterStaticText( info_panel, label = self._network_context.ToUnicode() )
self._description = ClientGUICommon.BetterStaticText( info_panel, label = CC.network_context_type_description_lookup[ self._network_context.context_type ] )
self._description = ClientGUICommon.BetterStaticText( info_panel, label = description )
#
@@ -382,6 +557,17 @@ class ReviewNetworkContextBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
#
rules_panel = ClientGUICommon.StaticBox( self, 'rules' )
self._uses_default_rules_st = ClientGUICommon.BetterStaticText( rules_panel, style = wx.ALIGN_CENTER )
self._rules_rows_panel = wx.Panel( rules_panel )
self._use_default_rules_button = ClientGUICommon.BetterButton( rules_panel, 'use default rules', self._UseDefaultRules )
self._edit_rules_button = ClientGUICommon.BetterButton( rules_panel, 'edit rules', self._EditRules )
#
self._time_delta_usage_time_delta.SetValue( 86400 )
for bandwidth_type in ( HC.BANDWIDTH_TYPE_DATA, HC.BANDWIDTH_TYPE_REQUESTS ):
@@ -391,11 +577,23 @@ class ReviewNetworkContextBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
self._time_delta_usage_bandwidth_type.SelectClientData( HC.BANDWIDTH_TYPE_DATA )
# usage this month (with dropdown to select previous months for all months on record)
monthly_usage = self._bandwidth_tracker.GetMonthlyDataUsage()
# rules panel
# a way to show how much the current rules are used up--see review services for how this is already done
# button to edit rules for this domain
if len( monthly_usage ) > 0:
if MATPLOTLIB_OK:
self._barchart_canvas = ClientGUIMatPlotLib.BarChartBandwidthHistory( usage_panel, monthly_usage )
else:
self._barchart_canvas = ClientGUICommon.BetterStaticText( usage_panel, 'Could not find matplotlib, so cannot display bar chart here.' )
else:
self._barchart_canvas = ClientGUICommon.BetterStaticText( usage_panel, 'No usage yet, so no usage history to show.' )
#
@@ -413,27 +611,64 @@ class ReviewNetworkContextBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
usage_panel.AddF( self._current_usage_st, CC.FLAGS_EXPAND_PERPENDICULAR )
usage_panel.AddF( hbox, CC.FLAGS_EXPAND_PERPENDICULAR )
usage_panel.AddF( self._barchart_canvas, CC.FLAGS_EXPAND_BOTH_WAYS )
#
hbox = wx.BoxSizer( wx.HORIZONTAL )
hbox.AddF( self._edit_rules_button, CC.FLAGS_SIZER_VCENTER )
hbox.AddF( self._use_default_rules_button, CC.FLAGS_SIZER_VCENTER )
rules_panel.AddF( self._uses_default_rules_st, CC.FLAGS_EXPAND_PERPENDICULAR )
rules_panel.AddF( self._rules_rows_panel, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
rules_panel.AddF( hbox, CC.FLAGS_BUTTON_SIZER )
#
vbox = wx.BoxSizer( wx.VERTICAL )
vbox.AddF( info_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
vbox.AddF( usage_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
vbox.AddF( usage_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
vbox.AddF( rules_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
self.SetSizer( vbox )
min_width = ClientData.ConvertTextToPixelWidth( self, 60 )
self.SetMinSize( ( min_width, -1 ) )
#
self._rules_rows_panel.Bind( wx.EVT_TIMER, self.TIMEREventUpdateRules )
self._rules_timer = wx.Timer( self._rules_rows_panel )
self._rules_timer.Start( 5000, wx.TIMER_CONTINUOUS )
self.Bind( wx.EVT_TIMER, self.TIMEREventUpdate )
self._move_hide_timer = wx.Timer( self )
self._timer = wx.Timer( self )
self._move_hide_timer.Start( 250, wx.TIMER_CONTINUOUS )
self._timer.Start( 1000, wx.TIMER_CONTINUOUS )
self._UpdateRules()
self._Update()
def _EditRules( self ):
with ClientGUITopLevelWindows.DialogEdit( self, 'edit bandwidth rules for ' + self._network_context.ToUnicode() ) as dlg:
panel = ClientGUIScrolledPanelsEdit.EditBandwidthRulesPanel( dlg, self._bandwidth_rules )
dlg.SetPanel( panel )
if dlg.ShowModal() == wx.ID_OK:
self._bandwidth_rules = panel.GetValue()
self._controller.network_engine.bandwidth_manager.SetRules( self._network_context, self._bandwidth_rules )
self._UpdateRules()
def _Update( self ):
@@ -465,11 +700,105 @@ class ReviewNetworkContextBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
self._time_delta_usage_st.SetLabelText( pretty_time_delta_usage )
def _UpdateRules( self ):
changes_made = False
if self._network_context.IsDefault() or self._network_context == ClientNetworking.GLOBAL_NETWORK_CONTEXT:
if self._use_default_rules_button.IsShown():
self._uses_default_rules_st.Hide()
self._use_default_rules_button.Hide()
changes_made = True
else:
if self._controller.network_engine.bandwidth_manager.UsesDefaultRules( self._network_context ):
self._uses_default_rules_st.SetLabelText( 'uses default rules' )
self._edit_rules_button.SetLabel( 'set specific rules' )
if self._use_default_rules_button.IsShown():
self._use_default_rules_button.Hide()
changes_made = True
else:
self._uses_default_rules_st.SetLabelText( 'has its own rules' )
self._edit_rules_button.SetLabel( 'edit rules' )
if not self._use_default_rules_button.IsShown():
self._use_default_rules_button.Show()
changes_made = True
rule_rows = self._bandwidth_rules.GetUsageStringsAndGaugeTuples( self._bandwidth_tracker, threshold = 0 )
if rule_rows != self._last_fetched_rule_rows:
self._last_fetched_rule_rows = rule_rows
self._rules_rows_panel.DestroyChildren()
vbox = wx.BoxSizer( wx.VERTICAL )
for ( status, ( v, r ) ) in rule_rows:
tg = ClientGUICommon.TextAndGauge( self._rules_rows_panel )
tg.SetValue( status, v, r )
vbox.AddF( tg, CC.FLAGS_EXPAND_PERPENDICULAR )
self._rules_rows_panel.SetSizer( vbox )
changes_made = True
if changes_made:
self.Layout()
ClientGUITopLevelWindows.PostSizeChangedEvent( self )
def _UseDefaultRules( self ):
with ClientGUIDialogs.DialogYesNo( self, 'Are you sure you want to revert to using the default rules for this context?' ) as dlg:
if dlg.ShowModal() == wx.ID_YES:
self._controller.network_engine.bandwidth_manager.DeleteRules( self._network_context )
self._UpdateRules()
def TIMEREventUpdate( self, event ):
self._Update()
def TIMEREventUpdateRules( self, event ):
self._UpdateRules()
class ReviewServicesPanel( ClientGUIScrolledPanels.ReviewPanel ):
def __init__( self, parent, controller ):
@@ -638,6 +967,8 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
self._controller = controller
self._new_options = self._controller.GetNewOptions()
ClientGUIScrolledPanels.ReviewPanel.__init__( self, parent )
menu_items = []
@@ -650,37 +981,29 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
#
info_panel = ClientGUICommon.StaticBox( self, 'current paths' )
self._refresh_button = ClientGUICommon.BetterBitmapButton( info_panel, CC.GlobalBMPs.refresh, self._Update )
info_panel = ClientGUICommon.StaticBox( self, 'locations' )
self._current_install_path_st = ClientGUICommon.BetterStaticText( info_panel )
self._current_db_path_st = ClientGUICommon.BetterStaticText( info_panel )
self._current_media_paths_st = ClientGUICommon.BetterStaticText( info_panel )
self._current_media_locations_listctrl = ClientGUICommon.SaneListCtrl( info_panel, 120, [ ( 'location', -1 ), ( 'portable?', 70 ), ( 'weight', 60 ), ( 'ideal usage', 160 ), ( 'current usage', 160 ) ] )
self._current_media_locations_listctrl = ClientGUICommon.SaneListCtrl( info_panel, 120, [ ( 'location', -1 ), ( 'portable?', 70 ), ( 'weight and ideal usage', 200 ), ( 'current usage', 200 ) ] )
self._add_path_button = ClientGUICommon.BetterButton( info_panel, 'add location', self._AddPath )
self._remove_path_button = ClientGUICommon.BetterButton( info_panel, 'empty/remove location', self._RemovePaths )
self._increase_weight_button = ClientGUICommon.BetterButton( info_panel, 'increase weight', self._IncreaseWeight )
self._decrease_weight_button = ClientGUICommon.BetterButton( info_panel, 'decrease weight', self._DecreaseWeight )
self._rebalance_button = ClientGUICommon.BetterButton( info_panel, 'move files now', self._Rebalance )
self._rebalance_status_st = ClientGUICommon.BetterStaticText( info_panel, style = wx.ALIGN_CENTER )
# ways to:
# increase/decrease ideal weight
# force rebalance now
# add new path
# remove existing path
# set/clear thumb locations
# move whole db and portable paths (requires shutdown and user shortcut command line yes/no warning)
# move the db and all portable client_files locations (provides warning about the shortcut and lets you copy the new location)
# this will require a shutdown
# rebalance files, listctrl
# location | portable yes/no | weight | ideal percent
# every change here, if valid, is saved immediately
# store all resized thumbs in sep location
# store all full_size thumbs in sep location
# do rebalance now button, only enabled if there is work to do
# should report to a stoppable job_key panel or something. text, gauge, stop button
#
help_hbox = wx.BoxSizer( wx.HORIZONTAL )
@@ -694,11 +1017,20 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
#
info_panel.AddF( self._refresh_button, CC.FLAGS_LONE_BUTTON )
hbox = wx.BoxSizer( wx.HORIZONTAL )
hbox.AddF( self._add_path_button, CC.FLAGS_VCENTER )
hbox.AddF( self._remove_path_button, CC.FLAGS_VCENTER )
hbox.AddF( self._increase_weight_button, CC.FLAGS_VCENTER )
hbox.AddF( self._decrease_weight_button, CC.FLAGS_VCENTER )
hbox.AddF( self._rebalance_button, CC.FLAGS_VCENTER )
info_panel.AddF( self._current_install_path_st, CC.FLAGS_EXPAND_PERPENDICULAR )
info_panel.AddF( self._current_db_path_st, CC.FLAGS_EXPAND_PERPENDICULAR )
info_panel.AddF( self._current_media_paths_st, CC.FLAGS_EXPAND_PERPENDICULAR )
info_panel.AddF( self._current_media_locations_listctrl, CC.FLAGS_EXPAND_PERPENDICULAR )
info_panel.AddF( hbox, CC.FLAGS_BUTTON_SIZER )
info_panel.AddF( self._rebalance_status_st, CC.FLAGS_EXPAND_PERPENDICULAR )
#
@@ -716,11 +1048,72 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
self._Update()
def _AddPath( self ):
( locations_to_ideal_weights, resized_thumbnail_override, full_size_thumbnail_override ) = self._new_options.GetClientFilesLocationsToIdealWeights()
with wx.DirDialog( self, 'Select the location' ) as dlg:
if dlg.ShowModal() == wx.ID_OK:
path = HydrusData.ToUnicode( dlg.GetPath() )
if path in locations_to_ideal_weights:
wx.MessageBox( 'You already have that location entered!' )
return
self._new_options.SetClientFilesLocation( path, 1 )
self._Update()
def _AdjustWeight( self, amount ):
( locations_to_ideal_weights, resized_thumbnail_override, full_size_thumbnail_override ) = self._new_options.GetClientFilesLocationsToIdealWeights()
adjustees = set()
for ( location, portable, gumpf, gumpf ) in self._current_media_locations_listctrl.GetSelectedClientData():
if location in locations_to_ideal_weights:
adjustees.add( location )
if len( adjustees ) > 0:
for location in adjustees:
current_weight = locations_to_ideal_weights[ location ]
new_amount = current_weight + amount
if new_amount > 0:
self._new_options.SetClientFilesLocation( location, new_amount )
self._Update()
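The guard in _AdjustWeight keeps a location's weight from dropping to zero or below. The same logic as a plain function (a sketch; the real panel writes each change back through SetClientFilesLocation and then refreshes):

```python
def adjust_weights(locations_to_weights, selected_locations, amount):
    # only apply an adjustment when the resulting weight stays positive,
    # so a location can never be weighted down to nothing by this path
    for location in selected_locations:
        if location in locations_to_weights:
            new_amount = locations_to_weights[location] + amount
            if new_amount > 0:
                locations_to_weights[location] = new_amount
    return locations_to_weights
```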
def _DecreaseWeight( self ):
self._AdjustWeight( -1 )
def _GenerateCurrentMediaTuples( self ):
# ideal
( locations_to_ideal_weights, resized_thumbnail_override, full_size_thumbnail_override ) = self._controller.GetNewOptions().GetClientFilesLocationsToIdealWeights()
( locations_to_ideal_weights, resized_thumbnail_override, full_size_thumbnail_override ) = self._new_options.GetClientFilesLocationsToIdealWeights()
# current
@@ -796,7 +1189,7 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
ideal_weight = locations_to_ideal_weights[ location ]
pretty_ideal_weight = str( ideal_weight )
pretty_ideal_weight = str( int( ideal_weight ) )
else:
@@ -880,15 +1273,79 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
pretty_ideal_usage = ','.join( usages )
display_tuple = ( pretty_location, pretty_portable, pretty_ideal_weight, pretty_ideal_usage, pretty_current_usage )
sort_tuple = ( location, portable, ideal_weight, ideal_usage, current_usage )
display_tuple = ( pretty_location, pretty_portable, pretty_ideal_weight + ': ' + pretty_ideal_usage, pretty_current_usage )
sort_tuple = ( location, portable, ( ideal_weight, ideal_usage ), current_usage )
tuples.append( ( display_tuple, sort_tuple ) )
tuples.append( ( location, display_tuple, sort_tuple ) )
return tuples
def _IncreaseWeight( self ):
self._AdjustWeight( 1 )
def _Rebalance( self ):
# replace this with the job_key dialog next week
def do_it():
wx.CallAfter( wx.MessageBox, 'rebalance started - this notification will improve soon!' )
self._controller.client_files_manager.Rebalance()
wx.CallAfter( wx.MessageBox, 'rebalance finished' )
wx.CallAfter( self._Update )
self._controller.CallToThread( do_it )
def _RemovePaths( self ):
( locations_to_ideal_weights, resized_thumbnail_override, full_size_thumbnail_override ) = self._new_options.GetClientFilesLocationsToIdealWeights()
removees = set()
for ( location, portable, gumpf, gumpf ) in self._current_media_locations_listctrl.GetSelectedClientData():
if location in locations_to_ideal_weights:
removees.add( location )
# eventually have a check and veto if there is not enough space on the destination partition
if len( removees ) == 0:
wx.MessageBox( 'Please select some locations with weight.' )
elif len( removees ) == len( locations_to_ideal_weights ):
wx.MessageBox( 'You cannot empty every single location--please add a new place for the files to be moved to and then try again.' )
else:
with ClientGUIDialogs.DialogYesNo( self, 'Are you sure? This will schedule all the selected locations to have all their current files removed.' ) as dlg:
if dlg.ShowModal() == wx.ID_YES:
for location in removees:
self._new_options.RemoveClientFilesLocation( location )
self._Update()
def _Update( self ):
approx_total_db_size = self._controller.db.GetApproxTotalFileSize()
@@ -904,11 +1361,31 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
self._current_media_paths_st.SetLabelText( 'media (totalling about ' + HydrusData.ConvertIntToBytes( approx_total_client_files ) + '):' )
selected_locations = { l[0] for l in self._current_media_locations_listctrl.GetSelectedClientData() }
self._current_media_locations_listctrl.DeleteAllItems()
for ( display_tuple, sort_tuple ) in self._GenerateCurrentMediaTuples():
for ( i, ( location, display_tuple, sort_tuple ) ) in enumerate( self._GenerateCurrentMediaTuples() ):
self._current_media_locations_listctrl.Append( display_tuple, sort_tuple )
if location in selected_locations:
self._current_media_locations_listctrl.Select( i )
if self._controller.client_files_manager.RebalanceWorkToDo():
self._rebalance_button.Enable()
self._rebalance_status_st.SetLabelText( 'files need to be moved' )
else:
self._rebalance_button.Disable()
self._rebalance_status_st.SetLabelText( 'all files are in their ideal locations' )

@@ -4,6 +4,7 @@ import ClientData
import ClientDefaults
import ClientDownloading
import ClientFiles
import ClientImageHandling
import ClientNetworking
import ClientThreading
import collections
@@ -11,6 +12,7 @@ import HydrusConstants as HC
import HydrusData
import HydrusExceptions
import HydrusFileHandling
import HydrusImageHandling
import HydrusGlobals as HG
import HydrusPaths
import HydrusSerialisable
@@ -26,6 +28,350 @@ import urlparse
import wx
import HydrusThreading
def THREADDownloadURL( job_key, url, url_string ):
job_key.SetVariable( 'popup_title', url_string )
job_key.SetVariable( 'popup_text_1', 'initialising' )
( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
try:
response = ClientNetworking.RequestsGet( url, stream = True )
with open( temp_path, 'wb' ) as f:
ClientNetworking.StreamResponseToFile( job_key, response, f )
job_key.SetVariable( 'popup_text_1', 'importing' )
file_import_job = FileImportJob( temp_path )
client_files_manager = HG.client_controller.client_files_manager
( result, hash ) = client_files_manager.ImportFile( file_import_job )
except HydrusExceptions.CancelledException:
return
except HydrusExceptions.NetworkException:
job_key.Cancel()
raise
finally:
HydrusPaths.CleanUpTempPath( os_file_handle, temp_path )
if result in ( CC.STATUS_SUCCESSFUL, CC.STATUS_REDUNDANT ):
if result == CC.STATUS_SUCCESSFUL:
job_key.SetVariable( 'popup_text_1', 'successful!' )
else:
job_key.SetVariable( 'popup_text_1', 'was already in the database!' )
job_key.SetVariable( 'popup_files', { hash } )
elif result == CC.STATUS_DELETED:
job_key.SetVariable( 'popup_text_1', 'had already been deleted!' )
job_key.Finish()
def THREADDownloadURLs( job_key, urls, title ):
job_key.SetVariable( 'popup_title', title )
job_key.SetVariable( 'popup_text_1', 'initialising' )
num_successful = 0
num_redundant = 0
num_deleted = 0
num_failed = 0
successful_hashes = set()
for ( i, url ) in enumerate( urls ):
( i_paused, should_quit ) = job_key.WaitIfNeeded()
if should_quit:
break
job_key.SetVariable( 'popup_text_1', HydrusData.ConvertValueRangeToPrettyString( i + 1, len( urls ) ) )
job_key.SetVariable( 'popup_gauge_1', ( i + 1, len( urls ) ) )
( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
try:
try:
response = ClientNetworking.RequestsGet( url, stream = True )
with open( temp_path, 'wb' ) as f:
ClientNetworking.StreamResponseToFile( job_key, response, f )
except HydrusExceptions.CancelledException:
return
except HydrusExceptions.NetworkException:
job_key.Cancel()
raise
try:
job_key.SetVariable( 'popup_text_2', 'importing' )
file_import_job = FileImportJob( temp_path )
client_files_manager = HG.client_controller.client_files_manager
( result, hash ) = client_files_manager.ImportFile( file_import_job )
except Exception as e:
job_key.DeleteVariable( 'popup_text_2' )
HydrusData.Print( url + ' failed to import!' )
HydrusData.PrintException( e )
num_failed += 1
continue
finally:
HydrusPaths.CleanUpTempPath( os_file_handle, temp_path )
if result in ( CC.STATUS_SUCCESSFUL, CC.STATUS_REDUNDANT ):
if result == CC.STATUS_SUCCESSFUL:
num_successful += 1
else:
num_redundant += 1
successful_hashes.add( hash )
elif result == CC.STATUS_DELETED:
num_deleted += 1
text_components = []
if num_successful > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_successful ) + ' successful' )
if num_redundant > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_redundant ) + ' already in db' )
if num_deleted > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_deleted ) + ' deleted' )
if num_failed > 0:
text_components.append( HydrusData.ConvertIntToPrettyString( num_failed ) + ' failed (errors written to log)' )
job_key.SetVariable( 'popup_text_1', ', '.join( text_components ) )
if len( successful_hashes ) > 0:
job_key.SetVariable( 'popup_files', successful_hashes )
job_key.DeleteVariable( 'popup_gauge_1' )
job_key.DeleteVariable( 'popup_text_2' )
job_key.DeleteVariable( 'popup_gauge_2' )
job_key.Finish()
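The summary text assembled at the end of THREADDownloadURLs can be sketched as a standalone helper (a simplified sketch; the real code formats the counts with HydrusData.ConvertIntToPrettyString, plain str() stands in here):

```python
def build_summary(num_successful, num_redundant, num_deleted, num_failed):
    # mirror the text_components logic: only mention non-zero outcomes,
    # then join them into one status line
    parts = []
    if num_successful > 0:
        parts.append(str(num_successful) + ' successful')
    if num_redundant > 0:
        parts.append(str(num_redundant) + ' already in db')
    if num_deleted > 0:
        parts.append(str(num_deleted) + ' deleted')
    if num_failed > 0:
        parts.append(str(num_failed) + ' failed (errors written to log)')
    return ', '.join(parts)
```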
class FileImportJob( object ):
def __init__( self, temp_path, import_file_options = None ):
if import_file_options is None:
import_file_options = ClientDefaults.GetDefaultImportFileOptions()
self._temp_path = temp_path
self._import_file_options = import_file_options
self._hash = None
self._pre_import_status = None
self._file_info = None
self._thumbnail = None
self._phashes = None
self._extra_hashes = None
def GetExtraHashes( self ):
return self._extra_hashes
def GetImportFileOptions( self ):
return self._import_file_options
def GetFileInfo( self ):
return self._file_info
def GetHash( self ):
return self._hash
def GetMime( self ):
( size, mime, width, height, duration, num_frames, num_words ) = self._file_info
return mime
def GetPreImportStatus( self ):
return self._pre_import_status
def GetPHashes( self ):
return self._phashes
def GetTempPathAndThumbnail( self ):
return ( self._temp_path, self._thumbnail )
def PubsubContentUpdates( self ):
if self._pre_import_status == CC.STATUS_REDUNDANT:
( automatic_archive, exclude_deleted, min_size, min_resolution ) = self._import_file_options.ToTuple()
if automatic_archive:
service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ARCHIVE, set( ( self._hash, ) ) ) ] }
HG.client_controller.Write( 'content_updates', service_keys_to_content_updates )
def IsGoodToImport( self ):
( automatic_archive, exclude_deleted, min_size, min_resolution ) = self._import_file_options.ToTuple()
( size, mime, width, height, duration, num_frames, num_words ) = self._file_info
if width is not None and height is not None:
if min_resolution is not None:
( min_x, min_y ) = min_resolution
if width < min_x or height < min_y:
return ( False, 'Resolution too small.' )
if min_size is not None:
if size < min_size:
return ( False, 'File too small.' )
return ( True, 'File looks good.' )
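The checks in IsGoodToImport can be exercised in isolation; this sketch keeps the same return shape and order of checks but takes plain arguments instead of the job's unpacked option tuples:

```python
def is_good_to_import(size, width, height, min_size=None, min_resolution=None):
    # same order as FileImportJob.IsGoodToImport: resolution first, then size
    if width is not None and height is not None and min_resolution is not None:
        (min_x, min_y) = min_resolution
        if width < min_x or height < min_y:
            return (False, 'Resolution too small.')
    if min_size is not None and size < min_size:
        return (False, 'File too small.')
    return (True, 'File looks good.')
```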
def IsNewToDB( self ):
if self._pre_import_status == CC.STATUS_NEW:
return True
if self._pre_import_status == CC.STATUS_DELETED:
( automatic_archive, exclude_deleted, min_size, min_resolution ) = self._import_file_options.ToTuple()
if not exclude_deleted:
return True
return False
def GenerateHashAndStatus( self ):
HydrusImageHandling.ConvertToPngIfBmp( self._temp_path )
self._hash = HydrusFileHandling.GetHashFromPath( self._temp_path )
self._pre_import_status = HG.client_controller.Read( 'hash_status', self._hash )
def GenerateInfo( self ):
self._file_info = HydrusFileHandling.GetFileInfo( self._temp_path )
( size, mime, width, height, duration, num_frames, num_words ) = self._file_info
if mime in HC.MIMES_WITH_THUMBNAILS:
self._thumbnail = HydrusFileHandling.GenerateThumbnail( self._temp_path, mime = mime )
if mime in HC.MIMES_WE_CAN_PHASH:
self._phashes = ClientImageHandling.GenerateShapePerceptualHashes( self._temp_path )
self._extra_hashes = HydrusFileHandling.GetExtraHashesFromPath( self._temp_path )
class GalleryImport( HydrusSerialisable.SerialisableBase ):
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_GALLERY_IMPORT
@@ -214,9 +560,11 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
gallery.GetFile( temp_path, url, report_hooks = [ self._file_download_hook ] )
file_import_job = FileImportJob( temp_path, self._import_file_options )
client_files_manager = HG.client_controller.client_files_manager
( status, hash ) = client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
( status, hash ) = client_files_manager.ImportFile( file_import_job )
finally:
@@ -726,9 +1074,11 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
raise Exception( 'File failed to copy--see log for error.' )
file_import_job = FileImportJob( temp_path, self._import_file_options )
client_files_manager = HG.client_controller.client_files_manager
( status, hash ) = client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
( status, hash ) = client_files_manager.ImportFile( file_import_job )
finally:
@@ -1171,9 +1521,11 @@ class ImportFolder( HydrusSerialisable.SerialisableBaseNamed ):
raise Exception( 'File failed to copy--see log for error.' )
file_import_job = FileImportJob( temp_path, self._import_file_options )
client_files_manager = HG.client_controller.client_files_manager
( status, hash ) = client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
( status, hash ) = client_files_manager.ImportFile( file_import_job )
finally:
@@ -1480,7 +1832,9 @@ class PageOfImagesImport( HydrusSerialisable.SerialisableBase ):
else:
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
file_import_job = FileImportJob( temp_path, self._import_file_options )
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( file_import_job )
self._urls_cache.UpdateSeedStatus( file_url, status )
@@ -2576,9 +2930,9 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
job_key.SetVariable( 'popup_text_1', x_out_of_y + 'importing file' )
client_files_manager = HG.client_controller.client_files_manager
file_import_job = FileImportJob( temp_path, self._import_file_options )
( status, hash ) = client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( file_import_job )
service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( hash, ( url, ) ) ) ] }
@@ -2970,6 +3324,8 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
self._watcher_status = 'ready to start'
self._seed_cache_status = ( 'initialising', ( 0, 1 ) )
self._thread_key = HydrusData.GenerateKey()
self._lock = threading.Lock()
@@ -3067,7 +3423,7 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
try:
network_job = ClientNetworking.NetworkJob( 'GET', file_url, temp_path = temp_path )
network_job = ClientNetworking.NetworkJobThreadWatcher( self._thread_key, 'GET', file_url, temp_path = temp_path )
with self._lock:
@@ -3114,7 +3470,9 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
else:
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
file_import_job = FileImportJob( temp_path, self._import_file_options )
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( file_import_job )
self._urls_cache.UpdateSeedStatus( file_url, status )
@@ -3199,7 +3557,7 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
json_url = ClientDownloading.GetImageboardThreadJSONURL( self._thread_url )
network_job = ClientNetworking.NetworkJob( 'GET', json_url )
network_job = ClientNetworking.NetworkJobThreadWatcher( self._thread_key, 'GET', json_url )
with self._lock:
@@ -3292,6 +3650,8 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
watcher_status = HydrusData.ToUnicode( e )
HydrusData.PrintException( e )
with self._lock:
@@ -3637,7 +3997,9 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
else:
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( temp_path, import_file_options = self._import_file_options )
file_import_job = FileImportJob( temp_path, self._import_file_options )
( status, hash ) = HG.client_controller.client_files_manager.ImportFile( file_import_job )
self._urls_cache.UpdateSeedStatus( file_url, status )

@@ -1106,7 +1106,7 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
self._network_contexts_to_bandwidth_trackers = collections.defaultdict( HydrusNetworking.BandwidthTracker )
self._network_contexts_to_bandwidth_rules = collections.defaultdict( HydrusNetworking.BandwidthRules )
for context_type in [ CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_DOWNLOADER, CC.NETWORK_CONTEXT_DOWNLOADER_QUERY, CC.NETWORK_CONTEXT_SUBSCRIPTION ]:
for context_type in [ CC.NETWORK_CONTEXT_GLOBAL, CC.NETWORK_CONTEXT_HYDRUS, CC.NETWORK_CONTEXT_DOMAIN, CC.NETWORK_CONTEXT_DOWNLOADER, CC.NETWORK_CONTEXT_DOWNLOADER_QUERY, CC.NETWORK_CONTEXT_SUBSCRIPTION, CC.NETWORK_CONTEXT_THREAD_WATCHER_THREAD ]:
self._network_contexts_to_bandwidth_rules[ NetworkContext( context_type ) ] = HydrusNetworking.BandwidthRules()
@@ -1124,8 +1124,8 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
def _GetSerialisableInfo( self ):
# note this discards downloader_query instances, which have page_key-specific identifiers and are temporary, not meant to be hung onto forever, and are generally invisible to the user
all_serialisable_trackers = [ ( network_context.GetSerialisableTuple(), tracker.GetSerialisableTuple() ) for ( network_context, tracker ) in self._network_contexts_to_bandwidth_trackers.items() if network_context.context_type != CC.NETWORK_CONTEXT_DOWNLOADER_QUERY ]
# note this discards ephemeral network contexts, which have page_key-specific identifiers and are temporary, not meant to be hung onto forever, and are generally invisible to the user
all_serialisable_trackers = [ ( network_context.GetSerialisableTuple(), tracker.GetSerialisableTuple() ) for ( network_context, tracker ) in self._network_contexts_to_bandwidth_trackers.items() if not network_context.IsEphemeral() ]
all_serialisable_rules = [ ( network_context.GetSerialisableTuple(), rules.GetSerialisableTuple() ) for ( network_context, rules ) in self._network_contexts_to_bandwidth_rules.items() ]
return ( all_serialisable_trackers, all_serialisable_rules )
@@ -1201,7 +1201,7 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
with self._lock:
if network_context.content_data is None:
if network_context.context_data is None:
return # can't delete 'default' network contexts
@@ -1217,6 +1217,46 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
def DeleteHistory( self, network_contexts ):
with self._lock:
for network_context in network_contexts:
if network_context in self._network_contexts_to_bandwidth_trackers:
del self._network_contexts_to_bandwidth_trackers[ network_context ]
if network_context == GLOBAL_NETWORK_CONTEXT:
# just to reset it, so we have a 0 global context at all times
self._network_contexts_to_bandwidth_trackers[ GLOBAL_NETWORK_CONTEXT ] = HydrusNetworking.BandwidthTracker()
self._SetDirty()
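DeleteHistory's special-casing of the global context can be sketched with plain dictionaries, with the tracker class stubbed out:

```python
GLOBAL_CONTEXT = 'global'

class TrackerStub(object):
    # stand-in for HydrusNetworking.BandwidthTracker
    pass

def delete_history(trackers, contexts):
    # drop each context's tracker, but immediately re-seed the global one
    # so a zeroed global tracker exists at all times
    for context in contexts:
        if context in trackers:
            del trackers[context]
        if context == GLOBAL_CONTEXT:
            trackers[GLOBAL_CONTEXT] = TrackerStub()
```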
def GetDefaultRules( self ):
with self._lock:
result = []
for ( network_context, bandwidth_rules ) in self._network_contexts_to_bandwidth_rules.items():
if network_context.IsDefault():
result.append( ( network_context, bandwidth_rules ) )
return result
def GetEstimateInfo( self, network_contexts ):
with self._lock:
@@ -1233,7 +1273,7 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
def GetNetworkContextsAndBandwidthTrackersForUser( self, history_time_delta_threshold = 86400 * 30 ):
def GetNetworkContextsAndBandwidthTrackersForUser( self, history_time_delta_threshold = None ):
with self._lock:
@@ -1241,26 +1281,46 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
for ( network_context, bandwidth_tracker ) in self._network_contexts_to_bandwidth_trackers.items():
if network_context.context_type == CC.NETWORK_CONTEXT_DOWNLOADER_QUERY: # user doesn't want these
if network_context.IsDefault() or network_context.IsEphemeral():
continue
if bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_REQUESTS, history_time_delta_threshold ) > 0:
if network_context != GLOBAL_NETWORK_CONTEXT and history_time_delta_threshold is not None:
result.append( ( network_context, bandwidth_tracker ) )
if bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_REQUESTS, history_time_delta_threshold ) == 0:
continue
result.append( ( network_context, bandwidth_tracker ) )
return result
def GetRules( self, network_context ):
with self._lock:
return self._GetRules( network_context )
def GetTracker( self, network_context ):
with self._lock:
return self._network_contexts_to_bandwidth_trackers[ network_context ]
if network_context in self._network_contexts_to_bandwidth_trackers:
return self._network_contexts_to_bandwidth_trackers[ network_context ]
else:
return HydrusNetworking.BandwidthTracker()
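The membership check added to GetTracker matters because the trackers live in a collections.defaultdict: a plain indexed read would insert an empty tracker as a side effect. A minimal sketch of the pattern:

```python
import collections

class TrackerStub(object):
    # stand-in for HydrusNetworking.BandwidthTracker
    pass

trackers = collections.defaultdict(TrackerStub)

def get_tracker(network_context):
    # indexing the defaultdict directly would create and store a new entry;
    # checking membership first keeps the read side-effect free
    if network_context in trackers:
        return trackers[network_context]
    return TrackerStub()

get_tracker('example.com')
```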
@@ -1310,7 +1370,27 @@ class NetworkBandwidthManager( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._network_contexts_to_bandwidth_rules[ network_context ] = bandwidth_rules
if len( bandwidth_rules.GetRules() ) == 0:
if network_context in self._network_contexts_to_bandwidth_rules:
del self._network_contexts_to_bandwidth_rules[ network_context ]
else:
self._network_contexts_to_bandwidth_rules[ network_context ] = bandwidth_rules
self._SetDirty()
def UsesDefaultRules( self, network_context ):
with self._lock:
return network_context not in self._network_contexts_to_bandwidth_rules
@@ -1377,11 +1457,28 @@ class NetworkContext( HydrusSerialisable.SerialisableBase ):
def IsDefault( self ):
return self.context_data is None and self.context_type != CC.NETWORK_CONTEXT_GLOBAL
def IsEphemeral( self ):
return self.context_type in ( CC.NETWORK_CONTEXT_DOWNLOADER_QUERY, CC.NETWORK_CONTEXT_THREAD_WATCHER_THREAD )
def ToUnicode( self ):
if self.context_data is None:
return CC.network_context_type_string_lookup[ self.context_type ] + ' domain'
if self.context_type == CC.NETWORK_CONTEXT_GLOBAL:
return 'global'
else:
return CC.network_context_type_string_lookup[ self.context_type ] + ' default'
else:
@@ -2165,6 +2262,29 @@ class NetworkJobHydrus( NetworkJob ):
return network_contexts
class NetworkJobThreadWatcher( NetworkJob ):
def __init__( self, thread_key, method, url, body = None, temp_path = None, for_login = False ):
self._thread_key = thread_key
NetworkJob.__init__( self, method, url, body, temp_path = temp_path, for_login = for_login )
def _GenerateNetworkContexts( self ):
network_contexts = NetworkJob._GenerateNetworkContexts( self )
network_contexts.append( NetworkContext( CC.NETWORK_CONTEXT_THREAD_WATCHER_THREAD, self._thread_key ) )
return network_contexts
def _GetSessionNetworkContext( self ):
return self._network_contexts[-2] # the domain one
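The context stack a NetworkJobThreadWatcher ends up with can be sketched with tuples standing in for NetworkContext objects (the context names and example values here are illustrative, not the real constants):

```python
def generate_thread_watcher_contexts(base_contexts, thread_key):
    # the subclass appends a per-thread context after whatever the base
    # NetworkJob generated, so the domain context sits second from the end
    contexts = list(base_contexts)
    contexts.append(('thread_watcher_thread', thread_key))
    return contexts

contexts = generate_thread_watcher_contexts(
    [('global', None), ('domain', 'example.com')], 'thread_key_123')
session_context = contexts[-2]  # the domain one, as in _GetSessionNetworkContext
```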
class NetworkLoginManager( HydrusSerialisable.SerialisableBase ):
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_LOGIN_MANAGER

@@ -1591,7 +1591,7 @@ class ServiceIPFS( ServiceRemote ):
if len( urls ) > 0:
HG.client_controller.CallToThread( ClientDownloading.THREADDownloadURLs, job_key, urls, multihash )
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURLs, job_key, urls, multihash )
@@ -1615,7 +1615,7 @@ class ServiceIPFS( ServiceRemote ):
url = url_tree[3]
HG.client_controller.CallToThread( ClientDownloading.THREADDownloadURL, job_key, url, multihash )
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURL, job_key, url, multihash )
else:

@@ -49,7 +49,7 @@ options = {}
# Misc
NETWORK_VERSION = 18
SOFTWARE_VERSION = 264
SOFTWARE_VERSION = 265
UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -66,9 +66,12 @@ def SaveThumbnailToStream( pil_image, dimensions, f ):
pil_image.save( f, 'JPEG', quality = 92 )
def GenerateThumbnail( path, dimensions = HC.UNSCALED_THUMBNAIL_DIMENSIONS ):
def GenerateThumbnail( path, dimensions = HC.UNSCALED_THUMBNAIL_DIMENSIONS, mime = None ):
mime = GetMime( path )
if mime is None:
mime = GetMime( path )
if mime in ( HC.IMAGE_JPEG, HC.IMAGE_PNG, HC.IMAGE_GIF ):
@@ -176,7 +179,10 @@ def GetFileInfo( path ):
size = os.path.getsize( path )
if size == 0: raise HydrusExceptions.SizeException( 'File is of zero length!' )
if size == 0:
raise HydrusExceptions.SizeException( 'File is of zero length!' )
mime = GetMime( path )
@@ -232,7 +238,10 @@ def GetHashFromPath( path ):
with open( path, 'rb' ) as f:
for block in HydrusPaths.ReadFileLikeAsBlocks( f ): h.update( block )
for block in HydrusPaths.ReadFileLikeAsBlocks( f ):
h.update( block )
return h.digest()
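The pattern in this hunk — streaming a file through a hash object in fixed-size blocks — can be sketched in plain Python (hashlib and the 64KB block size are assumptions; hydrus's ReadFileLikeAsBlocks helper is not reproduced here):

```python
import hashlib
import io

def hash_file_like(f, block_size=65536):
    # Read the file-like object in fixed-size blocks so large files
    # never have to be held in memory all at once.
    h = hashlib.sha256()
    
    for block in iter(lambda: f.read(block_size), b''):
        h.update(block)
    
    return h.digest()

# The digest of a streamed read matches a one-shot hash of the same bytes.
data = b'hello world' * 10000
assert hash_file_like(io.BytesIO(data)) == hashlib.sha256(data).digest()
```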


@@ -16,7 +16,10 @@ import HydrusPaths
 def ConvertToPngIfBmp( path ):
     
-    with open( path, 'rb' ) as f: header = f.read( 2 )
+    with open( path, 'rb' ) as f:
+        
+        header = f.read( 2 )
     
     if header == 'BM':
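Checking the first two bytes against 'BM' is standard magic-number sniffing; a small standalone sketch (the signature table here is an illustrative subset, not hydrus's actual mime-detection code):

```python
# A few well-known file signatures (illustrative subset).
SIGNATURES = [
    (b'BM', 'bmp'),
    (b'\x89PNG', 'png'),
    (b'GIF8', 'gif'),
    (b'\xff\xd8', 'jpeg'),
]

def sniff(first_bytes):
    # Compare the leading bytes against each known signature in turn.
    for magic, name in SIGNATURES:
        if first_bytes.startswith(magic):
            return name
    
    return None

assert sniff(b'BM\x9a\x00') == 'bmp'
assert sniff(b'\x89PNG\r\n') == 'png'
assert sniff(b'\x00\x00\x00\x00') is None
```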


@@ -164,7 +164,16 @@ class BandwidthRules( HydrusSerialisable.SerialisableBase ):
     rows = []
     
-    for ( bandwidth_type, time_delta, max_allowed ) in self._rules:
+    rules_sorted = list( self._rules )
+    
+    def key( ( bandwidth_type, time_delta, max_allowed ) ):
+        
+        return time_delta
+    
+    rules_sorted.sort( key = key )
+    
+    for ( bandwidth_type, time_delta, max_allowed ) in rules_sorted:
         
         time_is_less_than_threshold = time_delta is not None and time_delta <= threshold
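Sorting rule tuples by their time delta can be sketched in modern Python (the tuple-unpacking parameter in the Py2 code above was removed in Py3, and a None delta needs an explicit key there since None no longer compares to ints):

```python
# Each rule is (bandwidth_type, time_delta, max_allowed); a time_delta of
# None means 'all time'. Sort 'all time' rules first, then shorter windows.
rules = [
    ('data', 86400, 1024),
    ('data', None, 8192),
    ('requests', 60, 10),
]

def key(rule):
    bandwidth_type, time_delta, max_allowed = rule
    # (False, 0) sorts before (True, n), so None deltas come first,
    # matching Python 2's None-sorts-before-numbers behaviour.
    return (time_delta is not None, time_delta or 0)

rules_sorted = sorted(rules, key=key)

assert [r[1] for r in rules_sorted] == [None, 60, 86400]
```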
@@ -482,6 +491,29 @@ class BandwidthTracker( HydrusSerialisable.SerialisableBase ):
+    def GetMonthlyDataUsage( self ):
+        
+        with self._lock:
+            
+            result = []
+            
+            for ( month_time, usage ) in self._months_bytes.items():
+                
+                month_dt = datetime.datetime.utcfromtimestamp( month_time )
+                
+                ( year, month ) = ( month_dt.year, month_dt.month )
+                
+                date_str = str( year ) + '-' + str( month )
+                
+                result.append( ( date_str, usage ) )
+            
+            result.sort()
+            
+            return result
+    
     def GetUsage( self, bandwidth_type, time_delta ):
         
         with self._lock:
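The month-key formatting above can be reproduced standalone; note that because the month is not zero-padded, a plain string sort puts '2017-10' before '2017-2' — a quirk worth knowing if the output order matters:

```python
import datetime

def month_label(month_time):
    # Convert an epoch timestamp to a 'year-month' label, as in
    # GetMonthlyDataUsage above (the month is not zero-padded).
    dt = datetime.datetime.utcfromtimestamp(month_time)
    return str(dt.year) + '-' + str(dt.month)

# 2017-02-01 and 2017-10-01 00:00 UTC, as epoch seconds.
feb = month_label(1485907200)
oct_ = month_label(1506816000)

assert feb == '2017-2'
assert oct_ == '2017-10'
# Lexicographic sort puts '2017-10' before '2017-2'.
assert sorted([feb, oct_]) == ['2017-10', '2017-2']
```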


@@ -26,7 +26,7 @@ class TestFunctions( unittest.TestCase ):
         fields.append( ( 'recaptcha', CC.FIELD_VERIFICATION_RECAPTCHA, 'reticulating splines' ) )
         fields.append( ( 'resto', CC.FIELD_THREAD_ID, '1000000' ) )
         fields.append( ( 'spoiler/on', CC.FIELD_CHECKBOX, True ) )
-        fields.append( ( 'upfile', CC.FIELD_FILE, ( hash, HC.IMAGE_GIF, TestConstants.tinest_gif ) ) )
+        fields.append( ( 'upfile', CC.FIELD_FILE, ( hash, HC.IMAGE_GIF, TestConstants.tiniest_gif ) ) )
         
         result = ClientGUIManagement.GenerateDumpMultipartFormDataCTAndBody( fields )


@@ -25,11 +25,16 @@ class TestDaemons( unittest.TestCase ):
         try:
             
             HG.test_controller.SetRead( 'hash_status', CC.STATUS_NEW )
             
             HydrusPaths.MakeSureDirectoryExists( test_dir )
             
-            with open( os.path.join( test_dir, '0' ), 'wb' ) as f: f.write( TestConstants.tinest_gif )
-            with open( os.path.join( test_dir, '1' ), 'wb' ) as f: f.write( TestConstants.tinest_gif ) # previously imported
-            with open( os.path.join( test_dir, '2' ), 'wb' ) as f: f.write( TestConstants.tinest_gif )
+            hydrus_png_path = os.path.join( HC.STATIC_DIR, 'hydrus.png' )
+            
+            HydrusPaths.MirrorFile( hydrus_png_path, os.path.join( test_dir, '0' ) )
+            HydrusPaths.MirrorFile( hydrus_png_path, os.path.join( test_dir, '1' ) ) # previously imported
+            HydrusPaths.MirrorFile( hydrus_png_path, os.path.join( test_dir, '2' ) )
             
             with open( os.path.join( test_dir, '3' ), 'wb' ) as f: f.write( 'blarg' ) # broken
             with open( os.path.join( test_dir, '4' ), 'wb' ) as f: f.write( 'blarg' ) # previously failed


@@ -13,7 +13,7 @@ import wx
 DB_DIR = None
 
-tinest_gif = '\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x00\xFF\x00\x2C\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x00\x3B'
+tiniest_gif = '\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x00\xFF\x00\x2C\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x00\x3B'
 
 LOCAL_RATING_LIKE_SERVICE_KEY = HydrusData.GenerateKey()
 LOCAL_RATING_NUMERICAL_SERVICE_KEY = HydrusData.GenerateKey()
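The 26-byte constant renamed above is the well-known "smallest possible GIF": a GIF89a signature, a 1x1 logical screen, one image descriptor, an empty image-data block, and the trailer byte. It can be checked standalone:

```python
# The 26-byte 'tiniest gif', annotated section by section.
tiniest_gif = (
    b'\x47\x49\x46\x38\x39\x61'                   # 'GIF89a' signature
    b'\x01\x00\x01\x00\x00\xFF\x00'               # 1x1 logical screen descriptor
    b'\x2C\x00\x00\x00\x00\x01\x00\x01\x00\x00'   # image descriptor, 1x1 at (0,0)
    b'\x02\x00'                                   # min LZW code size, empty data block
    b'\x3B'                                       # trailer
)

assert tiniest_gif.startswith(b'GIF89a')
assert tiniest_gif.endswith(b'\x3b')
assert len(tiniest_gif) == 26
```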


@@ -70,6 +70,8 @@ class TestClientDB( unittest.TestCase ):
         cls._db = ClientDB.DB( HG.test_controller, TestConstants.DB_DIR, 'client' )
         
+        HG.test_controller.SetRead( 'hash_status', CC.STATUS_NEW )
     
     @classmethod
     def tearDownClass( cls ):
@@ -98,7 +100,13 @@ class TestClientDB( unittest.TestCase ):
         path = os.path.join( HC.STATIC_DIR, 'hydrus.png' )
         
-        self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        self._write( 'import_file', file_import_job )
         
         #
@@ -306,10 +314,16 @@ class TestClientDB( unittest.TestCase ):
         path = os.path.join( HC.STATIC_DIR, 'hydrus.png' )
         
-        ( written_result, written_hash ) = self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        written_result = self._write( 'import_file', file_import_job )
         
         self.assertEqual( written_result, CC.STATUS_SUCCESSFUL )
-        self.assertEqual( written_hash, hash )
+        self.assertEqual( file_import_job.GetHash(), hash )
         
         time.sleep( 1 )
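The pattern repeated through these test edits — build a job object, let it compute its own hash and status, then hand the whole job to the writer instead of a raw path — can be sketched generically (the class below is an illustration, not hydrus's FileImportJob):

```python
import hashlib
import os
import tempfile

class ImportJob:
    # Illustrative stand-in for a two-phase import job: first derive the
    # hash and status from the raw file, then gather the remaining info.
    def __init__(self, path):
        self._path = path
        self._hash = None
        self._info = None
    
    def generate_hash_and_status(self):
        with open(self._path, 'rb') as f:
            self._hash = hashlib.sha256(f.read()).digest()
    
    def generate_info(self):
        self._info = {'size': os.path.getsize(self._path)}
    
    def get_hash(self):
        return self._hash

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'blarg')
    path = f.name

job = ImportJob(path)
job.generate_hash_and_status()
job.generate_info()

assert job.get_hash() == hashlib.sha256(b'blarg').digest()
os.unlink(path)
```

Because the job object carries its own hash, the caller no longer needs the writer to return one — which is exactly why the test assertions change from `written_hash` to `file_import_job.GetHash()`.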
@@ -584,7 +598,13 @@ class TestClientDB( unittest.TestCase ):
         path = os.path.join( HC.STATIC_DIR, 'hydrus.png' )
         
-        self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        self._write( 'import_file', file_import_job )
         
         #
@@ -701,15 +721,29 @@ class TestClientDB( unittest.TestCase ):
         hash = hex_hash.decode( 'hex' )
         
-        ( written_result, written_hash ) = self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        written_result = self._write( 'import_file', file_import_job )
         
         self.assertEqual( written_result, CC.STATUS_SUCCESSFUL )
-        self.assertEqual( written_hash, hash )
+        self.assertEqual( file_import_job.GetHash(), hash )
         
-        ( written_result, written_hash ) = self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        written_result = self._write( 'import_file', file_import_job )
         
         self.assertEqual( written_result, CC.STATUS_REDUNDANT )
-        self.assertEqual( written_hash, hash )
+        self.assertEqual( file_import_job.GetHash(), hash )
         
+        written_hash = file_import_job.GetHash()
+        
         ( media_result, ) = self._read( 'media_results', ( written_hash, ) )
@@ -815,7 +849,13 @@ class TestClientDB( unittest.TestCase ):
         #
         
-        self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        self._write( 'import_file', file_import_job )
         
         #
@@ -848,7 +888,15 @@ class TestClientDB( unittest.TestCase ):
         HC.options[ 'exclude_deleted_files' ] = False
         
-        ( result, hash ) = self._write( 'import_file', path )
+        file_import_job = ClientImporting.FileImportJob( path )
+        
+        file_import_job.GenerateHashAndStatus()
+        file_import_job.GenerateInfo()
+        
+        self._write( 'import_file', file_import_job )
+        
+        hash = file_import_job.GetHash()
         
         #

test.py

@@ -334,12 +334,16 @@ class Controller( object ):
         if name == 'import_file':
             
-            ( path, ) = args
+            ( file_import_job, ) = args
             
-            with open( path, 'rb' ) as f: file = f.read()
-            
-            if file == 'blarg': raise Exception( 'File failed to import for some reason!' )
-            else: return ( CC.STATUS_SUCCESSFUL, '0123456789abcdef'.decode( 'hex' ) )
+            if file_import_job.GetHash().encode( 'hex' ) == 'a593942cb7ea9ffcd8ccf2f0fa23c338e23bfecd9a3e508dfc0bcf07501ead08': # 'blarg' in sha256 hex
+                
+                raise Exception( 'File failed to import for some reason!' )
+                
+            else:
+                
+                return CC.STATUS_SUCCESSFUL
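Comparing a file's hash against a hard-coded digest of known-bad content, as the test controller does above, is easy to reproduce with hashlib (the specific hex string in the diff is not re-verified here; the sketch computes its own digest of b'blarg' instead):

```python
import hashlib

def is_known_bad(content, bad_hex_digests):
    # Hash the content and compare against a set of known-bad hex digests.
    digest = hashlib.sha256(content).hexdigest()
    return digest in bad_hex_digests

bad = {hashlib.sha256(b'blarg').hexdigest()}

assert is_known_bad(b'blarg', bad)
assert not is_known_bad(b'fine content', bad)
# A sha256 hex digest is always 64 characters.
assert len(hashlib.sha256(b'blarg').hexdigest()) == 64
```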