Version 189

This commit is contained in:
Hydrus 2016-01-06 15:17:20 -06:00
parent 518595d11b
commit 90784a000d
29 changed files with 918 additions and 365 deletions

View File

@ -8,6 +8,36 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 189</h3></li>
<ul>
<li>split the big analyze db calls into individual table/index calls and moved them from update code to the normal maintenance routines</li>
<li>on vacuum, both the client and server dbs will now bump their page size up to 4096 if they are running on windows (server vacuum is triggered by running a backup)</li>
<li>vacuum should be a slightly faster operation for both the client and server</li>
<li>boosted the db cache size significantly--we'll see if it makes much of a difference in practice</li>
<li>the way the selection tags control updates its taglist on increases to its media cache is massively sped up. An update on a 5,000-thumbnail-strong page now typically completes in 3ms instead of ~250ms. Large import pages should stream new results much more quickly now</li>
<li>sped up some slow hash calculation code that was lagging a variety of large operations</li>
<li>some hash caching responsibility has been moved around to make it available for the add_media_results comparison, which now typically completes in sub-millisecond time (it was about 16ms before)</li>
<li>some sorted media list index recalculation now works faster</li>
<li>some internal media object hashing is now cached, so sorted list index regeneration is a bit faster</li>
<li>some medialist file counting is now superfast</li>
<li>wrote a new pauser object to break big jobs up more conveniently and reduce gui choking</li>
<li>the repo processing db call now uses this pauser</li>
<li>some copy and mirror directory functions now use this pauser</li>
<li>backup and restore code for the client now skips re-copying files if they share the same last modified date and file size</li>
<li>backup code for the server now skips re-copying files if they share the same last modified date and file size</li>
<li>http CannotSendRequest and BadStatusLine errors will now provoke two reattempts before being raised</li>
<li>socket error 10013 (no access permission, usually due to a firewall) is caught and a nicer error produced</li>
<li>socket error 10054 (remote host reset connection) is caught, and the connection is reattempted twice before being raised</li>
<li>the old giphy API is gone, so I have removed giphy</li>
<li>forced shutdown due to system exit/logoff is handled better</li>
<li>pubsub-related shutdown exceptions are now caught and silenced</li>
<li>an unusual shutdown exception is now caught</li>
<li>fixed a copy subtag menu typo</li>
<li>cleaned some misc hydrus path code</li>
<li>tags that begin with a colon (like ':)' ) will now render correctly in the media canvas background</li>
<li>some misc code cleanup</li>
<li>dropped flvlib since ffmpeg parses flv metadata better anyway</li>
</ul>
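The "pauser object" items above describe breaking big jobs into chunks so the gui doesn't choke. A minimal sketch of what such a pauser could look like, assuming a simple wall-clock work/sleep strategy (the class name and the `period`/`wait_time` parameters here are illustrative, not the actual hydrus API):

```python
import time

class BigJobPauser( object ):
    """Sleeps briefly at intervals so a long loop yields cpu to the gui."""
    
    def __init__( self, period = 0.1, wait_time = 0.01 ):
        
        self._period = period # how long to work before pausing
        self._wait_time = wait_time # how long each pause lasts
        
        self._next_pause = time.time() + self._period
        
    def Pause( self ):
        
        # cheap to call every iteration; only sleeps once the work period has elapsed
        if time.time() > self._next_pause:
            
            time.sleep( self._wait_time )
            
            self._next_pause = time.time() + self._period
```

A big loop then just calls `pauser.Pause()` once per item, as the repo processing and directory mirror calls do.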
<li><h3>version 188</h3></li>
<ul>
<li>if you have custom filters set up, they will now be listed in the normal thumbnail right-click menu, which will expand for them</li>

View File

@ -11,7 +11,7 @@
<h3>what you will need</h3>
<p>You will need to install python 2.7 and a number of python modules. Most of them you can get through pip. I think this will do for most systems:</p>
<ul>
<li>(sudo) pip install beautifulsoup4 flvlib hsaudiotag lxml lz4 mp3play nose numpy pafy Pillow psutil pycrypto PyPDF2 PySocks python-potr PyYAML Send2Trash twisted</li>
<li>(sudo) pip install beautifulsoup4 hsaudiotag lxml lz4 mp3play nose numpy pafy Pillow psutil pycrypto PyPDF2 PySocks python-potr PyYAML Send2Trash twisted</li>
</ul>
<p>You may want to do them one at a time, or in a different order. Your specific system may also need some of them from different sources, and some complicated things will need to be installed separately. The best way to figure it out is just to keep running client.pyw and see what it complains about missing.</p>
<p>I use Ubuntu 14.04, which also requires something like:</p>

View File

@ -62,8 +62,6 @@ class Controller( HydrusController.HydrusController ):
text = 'Are you sure "' + path + '" is the correct directory?'
text += os.linesep * 2
text += 'Everything already in that directory will be deleted before the backup starts.'
text += os.linesep * 2
text += 'The database will be locked while the backup occurs, which may lock up your gui as well.'
with ClientGUIDialogs.DialogYesNo( self._gui, text ) as dlg_yn:
@ -117,8 +115,18 @@ class Controller( HydrusController.HydrusController ):
time.sleep( 0.05 )
if job_key.HasVariable( 'result' ): return job_key.GetVariable( 'result' )
else: raise job_key.GetVariable( 'error' )
if job_key.HasVariable( 'result' ):
return job_key.GetVariable( 'result' )
elif job_key.HasVariable( 'error' ):
raise job_key.GetVariable( 'error' )
else:
raise HydrusExceptions.ShutdownException()
def CheckAlreadyRunning( self ):
@ -337,29 +345,37 @@ class Controller( HydrusController.HydrusController ):
def Exit( self ):
try:
if HydrusGlobals.emergency_exit:
self._gui.TestAbleToClose()
self.ShutdownView()
self.ShutdownModel()
except HydrusExceptions.PermissionException:
else:
return
try:
self._gui.TestAbleToClose()
except HydrusExceptions.PermissionException:
return
try:
try:
self._splash = ClientGUI.FrameSplash( self )
except Exception as e:
HydrusData.Print( 'There was an error trying to start the splash screen!' )
HydrusData.Print( traceback.format_exc() )
self._splash = ClientGUI.FrameSplash( self )
exit_thread = threading.Thread( target = self.THREADExitEverything, name = 'Application Exit Thread' )
except Exception as e:
exit_thread.start()
HydrusData.Print( 'There was an error trying to start the splash screen!' )
HydrusData.Print( traceback.format_exc() )
exit_thread = threading.Thread( target = self.THREADExitEverything, name = 'Application Exit Thread' )
exit_thread.start()
def ForceIdle( self ):
@ -546,6 +562,13 @@ class Controller( HydrusController.HydrusController ):
stale_time_delta = 14 * 86400
stop_time = HydrusData.GetNow() + 120
self.pub( 'splash_set_status_text', 'analyzing' )
self.WriteSynchronous( 'analyze', stale_time_delta, stop_time )
if self._timestamps[ 'last_service_info_cache_fatten' ] == 0:
self._timestamps[ 'last_service_info_cache_fatten' ] = HydrusData.GetNow()
@ -836,55 +859,64 @@ class Controller( HydrusController.HydrusController ):
def ShutdownView( self ):
self.CallBlockingToWx( self._gui.Shutdown )
self.pub( 'splash_set_status_text', 'waiting for daemons to exit' )
self._ShutdownDaemons()
idle_shutdown_action = self._options[ 'idle_shutdown' ]
if idle_shutdown_action in ( CC.IDLE_ON_SHUTDOWN, CC.IDLE_ON_SHUTDOWN_ASK_FIRST ):
if HydrusGlobals.emergency_exit:
self.pub( 'splash_set_status_text', 'running maintenance' )
self._gui.Shutdown()
self.ResetIdleTimer()
HydrusController.HydrusController.ShutdownView( self )
do_it = True
else:
if CC.IDLE_ON_SHUTDOWN_ASK_FIRST:
self.CallBlockingToWx( self._gui.Shutdown )
self.pub( 'splash_set_status_text', 'waiting for daemons to exit' )
self._ShutdownDaemons()
idle_shutdown_action = self._options[ 'idle_shutdown' ]
if idle_shutdown_action in ( CC.IDLE_ON_SHUTDOWN, CC.IDLE_ON_SHUTDOWN_ASK_FIRST ):
if self.ThereIsIdleShutdownWorkDue():
self.pub( 'splash_set_status_text', 'running maintenance' )
self.ResetIdleTimer()
do_it = True
if CC.IDLE_ON_SHUTDOWN_ASK_FIRST:
def wx_code():
if self.ThereIsIdleShutdownWorkDue():
text = 'Is now a good time for the client to do up to ' + HydrusData.ConvertIntToPrettyString( self._options[ 'idle_shutdown_max_minutes' ] ) + ' minutes\' maintenance work?'
with ClientGUIDialogs.DialogYesNo( self._splash, text, title = 'Maintenance is due' ) as dlg_yn:
def wx_code():
if dlg_yn.ShowModal() == wx.ID_YES:
text = 'Is now a good time for the client to do up to ' + HydrusData.ConvertIntToPrettyString( self._options[ 'idle_shutdown_max_minutes' ] ) + ' minutes\' maintenance work?'
with ClientGUIDialogs.DialogYesNo( self._splash, text, title = 'Maintenance is due' ) as dlg_yn:
return True
else:
return False
if dlg_yn.ShowModal() == wx.ID_YES:
return True
else:
return False
do_it = self.CallBlockingToWx( wx_code )
do_it = self.CallBlockingToWx( wx_code )
if do_it:
self.DoIdleShutdownWork()
if do_it:
self.DoIdleShutdownWork()
HydrusController.HydrusController.ShutdownView( self )
HydrusController.HydrusController.ShutdownView( self )
def StartFileQuery( self, query_key, search_context ):
@ -1035,7 +1067,8 @@ class Controller( HydrusController.HydrusController ):
self.ShutdownModel()
except HydrusExceptions.PermissionException as e: pass
except HydrusExceptions.PermissionException: pass
except HydrusExceptions.ShutdownException: pass
except:
traceback.print_exc()

View File

@ -1122,7 +1122,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( update_dir ):
os.mkdir( update_dir )
os.makedirs( update_dir )
if 'first_timestamp' not in info: info[ 'first_timestamp' ] = None
@ -1171,11 +1171,41 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'REPLACE INTO web_sessions ( name, cookies, expiry ) VALUES ( ?, ?, ? );', ( name, cookies, expires ) )
def _AnalyzeAfterUpdate( self ):
def _Analyze( self, stale_time_delta, stop_time ):
self._controller.pub( 'splash_set_status_text', 'analyzing db after update' )
all_names = [ name for ( name, ) in self._c.execute( 'SELECT name FROM sqlite_master;' ) ]
HydrusDB.HydrusDB._AnalyzeAfterUpdate( self )
existing_names_to_timestamps = dict( self._c.execute( 'SELECT name, timestamp FROM analyze_timestamps;' ).fetchall() )
names_to_analyze = [ name for name in all_names if name not in existing_names_to_timestamps or HydrusData.TimeHasPassed( existing_names_to_timestamps[ name ] + stale_time_delta ) ]
random.shuffle( names_to_analyze )
while len( names_to_analyze ) > 0:
name = names_to_analyze.pop()
started = HydrusData.GetNowPrecise()
self._c.execute( 'ANALYZE ' + name + ';' )
self._c.execute( 'REPLACE INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
time_took = HydrusData.GetNowPrecise() - started
HydrusData.Print( 'Analyzed ' + name + ' in ' + HydrusData.ConvertTimeDeltaToPrettyString( time_took ) )
if HydrusData.TimeHasPassed( stop_time ) or not self._controller.CurrentlyIdle():
break
self._c.execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
still_more_to_do = len( names_to_analyze ) > 0
return still_more_to_do
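The `_Analyze` method above works through one table at a time, recording a timestamp per table so only stale tables get re-analyzed on later runs. The same pattern, as a standalone sketch against a plain sqlite connection (simplified; the real method also checks idle state and prints timing):

```python
import random
import sqlite3
import time

def analyze_stale_tables( db, stale_time_delta, stop_time ):
    """ANALYZE any table whose stats are older than stale_time_delta, until stop_time."""
    
    c = db.cursor()
    
    c.execute( 'CREATE TABLE IF NOT EXISTS analyze_timestamps ( name TEXT, timestamp INTEGER );' )
    
    all_names = [ name for ( name, ) in c.execute( "SELECT name FROM sqlite_master WHERE type = 'table';" ) ]
    existing_names_to_timestamps = dict( c.execute( 'SELECT name, timestamp FROM analyze_timestamps;' ) )
    
    names_to_analyze = [ name for name in all_names if name not in existing_names_to_timestamps or existing_names_to_timestamps[ name ] + stale_time_delta < time.time() ]
    
    random.shuffle( names_to_analyze ) # spread the work fairly across interrupted runs
    
    while len( names_to_analyze ) > 0 and time.time() < stop_time:
        
        name = names_to_analyze.pop()
        
        c.execute( 'ANALYZE ' + name + ';' )
        
        c.execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
        c.execute( 'INSERT INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, int( time.time() ) ) )
        
    c.execute( 'ANALYZE sqlite_master;' ) # this reloads the fresh stats into the query planner
    
    return len( names_to_analyze ) > 0 # still more to do?
```

Splitting the job like this is what lets the maintenance routine respect a stop_time instead of blocking on one monolithic ANALYZE.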
def _ArchiveFiles( self, hash_ids ):
@ -1198,24 +1228,51 @@ class DB( HydrusDB.HydrusDB ):
def _Backup( self, path ):
deletee_filenames = os.listdir( path )
job_key = HydrusThreading.JobKey( cancellable = True )
for deletee_filename in deletee_filenames:
job_key.SetVariable( 'popup_title', 'backing up db' )
self._controller.pub( 'message', job_key )
job_key.SetVariable( 'popup_text_1', 'closing db' )
self._c.execute( 'COMMIT' )
self._c.close()
self._db.close()
if not os.path.exists( path ):
deletee_path = os.path.join( path, deletee_filename )
ClientData.DeletePath( deletee_path )
os.makedirs( path )
shutil.copy2( self._db_path, os.path.join( path, 'client.db' ) )
if os.path.exists( self._db_path + '-wal' ): shutil.copy2( self._db_path + '-wal', os.path.join( path, 'client.db-wal' ) )
job_key.SetVariable( 'popup_text_1', 'copying db file' )
shutil.copytree( HC.CLIENT_ARCHIVES_DIR, os.path.join( path, 'client_archives' ) )
shutil.copytree( HC.CLIENT_FILES_DIR, os.path.join( path, 'client_files' ) )
shutil.copytree( HC.CLIENT_THUMBNAILS_DIR, os.path.join( path, 'client_thumbnails' ) )
shutil.copytree( HC.CLIENT_UPDATES_DIR, os.path.join( path, 'client_updates' ) )
shutil.copy2( self._db_path, os.path.join( path, self.DB_NAME + '.db' ) )
HydrusData.ShowText( 'Database backup done!' )
job_key.SetVariable( 'popup_text_1', 'copying archives directory' )
HydrusPaths.MirrorTree( HC.CLIENT_ARCHIVES_DIR, os.path.join( path, 'client_archives' ) )
job_key.SetVariable( 'popup_text_1', 'copying files directory' )
HydrusPaths.MirrorTree( HC.CLIENT_FILES_DIR, os.path.join( path, 'client_files' ) )
job_key.SetVariable( 'popup_text_1', 'copying thumbnails directory' )
HydrusPaths.MirrorTree( HC.CLIENT_THUMBNAILS_DIR, os.path.join( path, 'client_thumbnails' ) )
job_key.SetVariable( 'popup_text_1', 'copying updates directory' )
HydrusPaths.MirrorTree( HC.CLIENT_UPDATES_DIR, os.path.join( path, 'client_updates' ) )
self._InitDBCursor()
self._c.execute( 'BEGIN IMMEDIATE' )
job_key.SetVariable( 'popup_text_1', 'done!' )
job_key.Finish()
def _CheckDBIntegrity( self ):
@ -1412,24 +1469,29 @@ class DB( HydrusDB.HydrusDB ):
HydrusGlobals.is_first_start = True
if not os.path.exists( HC.CLIENT_ARCHIVES_DIR ): os.mkdir( HC.CLIENT_ARCHIVES_DIR )
if not os.path.exists( HC.CLIENT_FILES_DIR ): os.mkdir( HC.CLIENT_FILES_DIR )
if not os.path.exists( HC.CLIENT_THUMBNAILS_DIR ): os.mkdir( HC.CLIENT_THUMBNAILS_DIR )
if not os.path.exists( HC.CLIENT_UPDATES_DIR ): os.mkdir( HC.CLIENT_UPDATES_DIR )
if not os.path.exists( HC.CLIENT_ARCHIVES_DIR ): os.makedirs( HC.CLIENT_ARCHIVES_DIR )
if not os.path.exists( HC.CLIENT_FILES_DIR ): os.makedirs( HC.CLIENT_FILES_DIR )
if not os.path.exists( HC.CLIENT_THUMBNAILS_DIR ): os.makedirs( HC.CLIENT_THUMBNAILS_DIR )
if not os.path.exists( HC.CLIENT_UPDATES_DIR ): os.makedirs( HC.CLIENT_UPDATES_DIR )
for prefix in HydrusData.IterateHexPrefixes():
dir = os.path.join( HC.CLIENT_FILES_DIR, prefix )
if not os.path.exists( dir ): os.mkdir( dir )
if not os.path.exists( dir ): os.makedirs( dir )
dir = os.path.join( HC.CLIENT_THUMBNAILS_DIR, prefix )
if not os.path.exists( dir ): os.mkdir( dir )
if not os.path.exists( dir ): os.makedirs( dir )
self._c.execute( 'PRAGMA auto_vacuum = 0;' ) # none
self._c.execute( 'PRAGMA journal_mode=WAL;' )
self._c.execute( 'PRAGMA journal_mode = WAL;' )
if HC.PLATFORM_WINDOWS:
self._c.execute( 'PRAGMA page_size = 4096;' )
try: self._c.execute( 'BEGIN IMMEDIATE' )
except Exception as e:
@ -1442,6 +1504,8 @@ class DB( HydrusDB.HydrusDB ):
#
self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
self._c.execute( 'CREATE TABLE autocomplete_tags_cache ( file_service_id INTEGER REFERENCES services ( service_id ) ON DELETE CASCADE, tag_service_id INTEGER REFERENCES services ( service_id ) ON DELETE CASCADE, namespace_id INTEGER, tag_id INTEGER, current_count INTEGER, pending_count INTEGER, PRIMARY KEY ( file_service_id, tag_service_id, namespace_id, tag_id ) );' )
self._c.execute( 'CREATE INDEX autocomplete_tags_cache_tag_service_id_namespace_id_tag_id_index ON autocomplete_tags_cache ( tag_service_id, namespace_id, tag_id );' )
@ -4236,6 +4300,8 @@ class DB( HydrusDB.HydrusDB ):
def _ProcessContentUpdatePackage( self, service_key, content_update_package, job_key, only_when_idle ):
pauser = HydrusData.BigJobPauser()
self.pub_after_commit( 'notify_new_pending' )
self.pub_after_commit( 'notify_new_siblings' )
self.pub_after_commit( 'notify_new_parents' )
@ -4258,6 +4324,8 @@ class DB( HydrusDB.HydrusDB ):
for content_update in content_update_package.IterateContentUpdates():
pauser.Pause()
options = self._controller.GetOptions()
if only_when_idle and not self._controller.CurrentlyIdle():
@ -4288,9 +4356,7 @@ class DB( HydrusDB.HydrusDB ):
pending_content_updates.append( content_update )
content_update_weight = len( content_update.GetHashes() )
pending_weight += content_update_weight
pending_weight += content_update.GetWeight()
if pending_weight > 100:
@ -4992,7 +5058,7 @@ class DB( HydrusDB.HydrusDB ):
elif not os.path.exists( full_dest ):
os.mkdir( full_dest )
os.makedirs( full_dest )
portable_dest = HydrusPaths.ConvertAbsPathToPortablePath( dest )
@ -5732,7 +5798,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( dest_dir ):
os.mkdir( dest_dir )
os.makedirs( dest_dir )
source_path = os.path.join( HC.CLIENT_UPDATES_DIR, filename )
@ -6167,6 +6233,11 @@ class DB( HydrusDB.HydrusDB ):
if version == 188:
self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
self._controller.pub( 'splash_set_title_text', 'updating db to v' + str( version + 1 ) )
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@ -6629,6 +6700,7 @@ class DB( HydrusDB.HydrusDB ):
( hta_path, hta ) = self._tag_archives[ archive_name ]
adding = True
self._controller.pub( 'sync_to_tag_archive', hta_path, adding, namespaces, service_key )
@ -6675,6 +6747,24 @@ class DB( HydrusDB.HydrusDB ):
time.sleep( 1 )
self._c.execute( 'PRAGMA journal_mode = TRUNCATE;' )
if HC.PLATFORM_WINDOWS:
ideal_page_size = 4096
else:
ideal_page_size = 1024
( page_size, ) = self._c.execute( 'PRAGMA page_size;' ).fetchone()
if page_size != ideal_page_size:
self._c.execute( 'PRAGMA page_size = ' + str( ideal_page_size ) + ';' )
try:
self._c.execute( 'VACUUM' )
@ -6698,8 +6788,6 @@ class DB( HydrusDB.HydrusDB ):
job_key.SetVariable( 'popup_text_1', prefix + 'cleaning up' )
self._c.execute( 'ANALYZE' )
self._c.execute( 'REPLACE INTO shutdown_timestamps ( shutdown_type, timestamp ) VALUES ( ?, ? );', ( CC.SHUTDOWN_TIMESTAMP_VACUUM, HydrusData.GetNow() ) )
self._c.close()
@ -6717,6 +6805,7 @@ class DB( HydrusDB.HydrusDB ):
def _Write( self, action, *args, **kwargs ):
if action == '4chan_pass': result = self._SetYAMLDump( YAML_DUMP_ID_SINGLE, '4chan_pass', *args, **kwargs )
elif action == 'analyze': result = self._Analyze( *args, **kwargs )
elif action == 'backup': result = self._Backup( *args, **kwargs )
elif action == 'content_update_package':result = self._ProcessContentUpdatePackage( *args, **kwargs )
elif action == 'content_updates':result = self._ProcessContentUpdates( *args, **kwargs )
@ -6770,26 +6859,11 @@ class DB( HydrusDB.HydrusDB ):
def RestoreBackup( self, path ):
deletee_filenames = os.listdir( HC.DB_DIR )
shutil.copy2( os.path.join( path, self.DB_NAME + '.db' ), self._db_path )
for deletee_filename in deletee_filenames:
if deletee_filename.startswith( 'client' ):
deletee_path = os.path.join( HC.DB_DIR, deletee_filename )
ClientData.DeletePath( deletee_path )
shutil.copy2( os.path.join( path, 'client.db' ), self._db_path )
wal_path = os.path.join( path, 'client.db-wal' )
if os.path.exists( wal_path ): shutil.copy2( wal_path, self._db_path + '-wal' )
shutil.copytree( os.path.join( path, 'client_archives' ), HC.CLIENT_ARCHIVES_DIR )
shutil.copytree( os.path.join( path, 'client_files' ), HC.CLIENT_FILES_DIR )
shutil.copytree( os.path.join( path, 'client_thumbnails' ), HC.CLIENT_THUMBNAILS_DIR )
shutil.copytree( os.path.join( path, 'client_updates' ), HC.CLIENT_UPDATES_DIR )
HydrusPaths.MirrorTree( os.path.join( path, 'client_archives' ), HC.CLIENT_ARCHIVES_DIR )
HydrusPaths.MirrorTree( os.path.join( path, 'client_files' ), HC.CLIENT_FILES_DIR )
HydrusPaths.MirrorTree( os.path.join( path, 'client_thumbnails' ), HC.CLIENT_THUMBNAILS_DIR )
HydrusPaths.MirrorTree( os.path.join( path, 'client_updates' ), HC.CLIENT_UPDATES_DIR )

View File

@ -1306,9 +1306,7 @@ class Service( HydrusData.HydrusYAMLBase ):
if os.path.exists( path ):
info = os.lstat( path )
size = info[6]
HydrusPaths.GetPathSize( path )
if size == 0:

View File

@ -129,7 +129,7 @@ def GetExportPath():
if not os.path.exists( path ):
os.mkdir( path )
os.makedirs( path )
@ -499,9 +499,7 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
elif os.path.exists( dest_path ):
dest_info = os.lstat( dest_path )
dest_size = dest_info[6]
dest_size = HydrusPaths.GetPathSize( dest_path )
if dest_size == size:

View File

@ -597,29 +597,32 @@ class FrameGUI( ClientGUICommon.FrameThatResizes ):
def _Exit( self, restart = False ):
if HC.options[ 'confirm_client_exit' ]:
if not HydrusGlobals.emergency_exit:
if restart:
if HC.options[ 'confirm_client_exit' ]:
text = 'Are you sure you want to restart the client? (Will auto-yes in 15 seconds)'
if restart:
text = 'Are you sure you want to restart the client? (Will auto-yes in 15 seconds)'
else:
text = 'Are you sure you want to exit the client? (Will auto-yes in 15 seconds)'
else:
text = 'Are you sure you want to exit the client? (Will auto-yes in 15 seconds)'
with ClientGUIDialogs.DialogYesNo( self, text ) as dlg:
call_later = wx.CallLater( 15000, dlg.EndModal, wx.ID_YES )
if dlg.ShowModal() == wx.ID_NO:
with ClientGUIDialogs.DialogYesNo( self, text ) as dlg:
call_later = wx.CallLater( 15000, dlg.EndModal, wx.ID_YES )
if dlg.ShowModal() == wx.ID_NO:
call_later.Stop()
return
call_later.Stop()
return
call_later.Stop()
@ -842,7 +845,6 @@ class FrameGUI( ClientGUICommon.FrameThatResizes ):
submenu = wx.Menu()
submenu.Append( ClientCaches.MENU_EVENT_ID_TO_ACTION_CACHE.GetPermanentId( 'new_import_booru' ), p( 'Booru' ), p( 'Open a new tab to download files from a booru.' ) )
submenu.Append( ClientCaches.MENU_EVENT_ID_TO_ACTION_CACHE.GetPermanentId( 'new_import_gallery', HC.SITE_TYPE_GIPHY ), p( 'Giphy' ), p( 'Open a new tab to download files from Giphy.' ) )
submenu.Append( ClientCaches.MENU_EVENT_ID_TO_ACTION_CACHE.GetPermanentId( 'new_import_gallery', HC.SITE_TYPE_DEVIANT_ART ), p( 'Deviant Art' ), p( 'Open a new tab to download files from Deviant Art.' ) )
hf_submenu = wx.Menu()
hf_submenu.Append( ClientCaches.MENU_EVENT_ID_TO_ACTION_CACHE.GetPermanentId( 'new_import_gallery', HC.SITE_TYPE_HENTAI_FOUNDRY_ARTIST ), p( 'By Artist' ), p( 'Open a new tab to download files from Hentai Foundry.' ) )
@ -1551,7 +1553,7 @@ class FrameGUI( ClientGUICommon.FrameThatResizes ):
self._controller.pub( 'message', job_key )
if not os.path.exists( HC.CLIENT_THUMBNAILS_DIR ): os.mkdir( HC.CLIENT_THUMBNAILS_DIR )
if not os.path.exists( HC.CLIENT_THUMBNAILS_DIR ): os.makedirs( HC.CLIENT_THUMBNAILS_DIR )
for p in HydrusData.IterateHexPrefixes():
@ -1559,7 +1561,7 @@ class FrameGUI( ClientGUICommon.FrameThatResizes ):
if not os.path.exists( dir ):
os.mkdir( dir )
os.makedirs( dir )
@ -2102,6 +2104,15 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def EventExit( self, event ):
if event.CanVeto():
event.Veto()
else:
HydrusGlobals.emergency_exit = True
self._Exit()
@ -2504,30 +2515,52 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def Shutdown( self ):
self._SaveGUISession( 'last session' )
try:
if HydrusGlobals.emergency_exit:
self._message_manager.Hide()
self._SaveGUISession( 'last session' )
self._message_manager.CleanBeforeDestroy()
except: pass
self.Hide()
for page in [ self._notebook.GetPage( i ) for i in range( self._notebook.GetPageCount() ) ]: page.CleanBeforeDestroy()
page = self._notebook.GetCurrentPage()
if page is not None:
for page in [ self._notebook.GetPage( i ) for i in range( self._notebook.GetPageCount() ) ]: page.CleanBeforeDestroy()
( HC.options[ 'hpos' ], HC.options[ 'vpos' ] ) = page.GetSashPositions()
page = self._notebook.GetCurrentPage()
if page is not None:
( HC.options[ 'hpos' ], HC.options[ 'vpos' ] ) = page.GetSashPositions()
self._controller.Write( 'save_options', HC.options )
self.Destroy()
else:
self._SaveGUISession( 'last session' )
try:
self._message_manager.Hide()
self._message_manager.CleanBeforeDestroy()
except: pass
self.Hide()
for page in [ self._notebook.GetPage( i ) for i in range( self._notebook.GetPageCount() ) ]: page.CleanBeforeDestroy()
page = self._notebook.GetCurrentPage()
if page is not None:
( HC.options[ 'hpos' ], HC.options[ 'vpos' ] ) = page.GetSashPositions()
self._controller.Write( 'save_options', HC.options )
wx.CallAfter( self.Destroy )
self._controller.Write( 'save_options', HC.options )
wx.CallAfter( self.Destroy )
def SyncToTagArchive( self, hta, adding, namespaces, service_key ):

View File

@ -1319,7 +1319,7 @@ class CanvasWithDetails( Canvas ):
for tag in tags_i_want_to_display:
display_string = tag
display_string = HydrusTags.RenderTag( tag )
if tag in pending:

View File

@ -2912,7 +2912,7 @@ class ListBoxTags( ListBox ):
if ':' in selection_string:
sub_selection_string = selection_string.split( ':', 1 )[1]
sub_selection_string = '"' + selection_string.split( ':', 1 )[1]
menu.Append( ClientCaches.MENU_EVENT_ID_TO_ACTION_CACHE.GetTemporaryId( 'copy_sub_terms' ), 'copy ' + sub_selection_string )
@ -3681,71 +3681,145 @@ class ListBoxTagsSelection( ListBoxTags ):
def _RecalcStrings( self ):
def _GetTagString( self, tag ):
siblings_manager = HydrusGlobals.client_controller.GetManager( 'tag_siblings' )
tag_string = HydrusTags.RenderTag( tag )
all_tags = set()
if self._show_current: all_tags.update( ( tag for ( tag, count ) in self._current_tags_to_count.items() if count > 0 ) )
if self._show_deleted: all_tags.update( ( tag for ( tag, count ) in self._deleted_tags_to_count.items() if count > 0 ) )
if self._show_pending: all_tags.update( ( tag for ( tag, count ) in self._pending_tags_to_count.items() if count > 0 ) )
if self._show_petitioned: all_tags.update( ( tag for ( tag, count ) in self._petitioned_tags_to_count.items() if count > 0 ) )
self._ordered_strings = []
self._strings_to_terms = {}
for tag in all_tags:
if self._include_counts:
tag_string = HydrusTags.RenderTag( tag )
if self._show_current and tag in self._current_tags_to_count: tag_string += ' (' + HydrusData.ConvertIntToPrettyString( self._current_tags_to_count[ tag ] ) + ')'
if self._show_pending and tag in self._pending_tags_to_count: tag_string += ' (+' + HydrusData.ConvertIntToPrettyString( self._pending_tags_to_count[ tag ] ) + ')'
if self._show_petitioned and tag in self._petitioned_tags_to_count: tag_string += ' (-' + HydrusData.ConvertIntToPrettyString( self._petitioned_tags_to_count[ tag ] ) + ')'
if self._show_deleted and tag in self._deleted_tags_to_count: tag_string += ' (X' + HydrusData.ConvertIntToPrettyString( self._deleted_tags_to_count[ tag ] ) + ')'
if self._include_counts:
else:
if self._show_pending and tag in self._pending_tags_to_count: tag_string += ' (+)'
if self._show_petitioned and tag in self._petitioned_tags_to_count: tag_string += ' (-)'
if self._show_deleted and tag in self._deleted_tags_to_count: tag_string += ' (X)'
if not self._collapse_siblings:
siblings_manager = HydrusGlobals.client_controller.GetManager( 'tag_siblings' )
sibling = siblings_manager.GetSibling( tag )
if sibling is not None:
if self._show_current and tag in self._current_tags_to_count: tag_string += ' (' + HydrusData.ConvertIntToPrettyString( self._current_tags_to_count[ tag ] ) + ')'
if self._show_pending and tag in self._pending_tags_to_count: tag_string += ' (+' + HydrusData.ConvertIntToPrettyString( self._pending_tags_to_count[ tag ] ) + ')'
if self._show_petitioned and tag in self._petitioned_tags_to_count: tag_string += ' (-' + HydrusData.ConvertIntToPrettyString( self._petitioned_tags_to_count[ tag ] ) + ')'
if self._show_deleted and tag in self._deleted_tags_to_count: tag_string += ' (X' + HydrusData.ConvertIntToPrettyString( self._deleted_tags_to_count[ tag ] ) + ')'
else:
if self._show_pending and tag in self._pending_tags_to_count: tag_string += ' (+)'
if self._show_petitioned and tag in self._petitioned_tags_to_count: tag_string += ' (-)'
if self._show_deleted and tag in self._deleted_tags_to_count: tag_string += ' (X)'
tag_string += ' (will display as ' + HydrusTags.RenderTag( sibling ) + ')'
if not self._collapse_siblings:
return tag_string
def _RecalcStrings( self, limit_to_these_tags = None ):
if limit_to_these_tags is None:
all_tags = set()
if self._show_current: all_tags.update( ( tag for ( tag, count ) in self._current_tags_to_count.items() if count > 0 ) )
if self._show_deleted: all_tags.update( ( tag for ( tag, count ) in self._deleted_tags_to_count.items() if count > 0 ) )
if self._show_pending: all_tags.update( ( tag for ( tag, count ) in self._pending_tags_to_count.items() if count > 0 ) )
if self._show_petitioned: all_tags.update( ( tag for ( tag, count ) in self._petitioned_tags_to_count.items() if count > 0 ) )
self._ordered_strings = []
self._strings_to_terms = {}
for tag in all_tags:
sibling = siblings_manager.GetSibling( tag )
tag_string = self._GetTagString( tag )
if sibling is not None:
self._ordered_strings.append( tag_string )
self._strings_to_terms[ tag_string ] = tag
self._SortTags()
else:
sort_needed = False
terms_to_old_strings = { tag : tag_string for ( tag_string, tag ) in self._strings_to_terms.items() }
for tag in limit_to_these_tags:
tag_string = self._GetTagString( tag )
do_insert = True
if tag in terms_to_old_strings:
tag_string += ' (will display as ' + HydrusTags.RenderTag( sibling ) + ')'
old_tag_string = terms_to_old_strings[ tag ]
if tag_string == old_tag_string:
do_insert = False
else:
self._ordered_strings.remove( old_tag_string )
del self._strings_to_terms[ old_tag_string ]
if do_insert:
self._ordered_strings.append( tag_string )
self._strings_to_terms[ tag_string ] = tag
sort_needed = True
self._ordered_strings.append( tag_string )
self._strings_to_terms[ tag_string ] = tag
if sort_needed:
self._SortTags()
self._SortTags()
def _SortTags( self ):
if self._sort == CC.SORT_BY_LEXICOGRAPHIC_ASC: compare_function = lambda a, b: cmp( a, b )
elif self._sort == CC.SORT_BY_LEXICOGRAPHIC_DESC: compare_function = lambda a, b: cmp( b, a )
if self._sort == CC.SORT_BY_LEXICOGRAPHIC_ASC:
key = None
reverse = False
elif self._sort == CC.SORT_BY_LEXICOGRAPHIC_DESC:
key = None
reverse = True
elif self._sort in ( CC.SORT_BY_INCIDENCE_ASC, CC.SORT_BY_INCIDENCE_DESC ):
tags_to_count = collections.defaultdict( lambda: 0 )
tags_to_count = collections.Counter()
tags_to_count.update( self._current_tags_to_count )
for ( tag, count ) in self._pending_tags_to_count.items(): tags_to_count[ tag ] += count
if self._show_current: tags_to_count.update( self._current_tags_to_count )
if self._show_deleted: tags_to_count.update( self._deleted_tags_to_count )
if self._show_pending: tags_to_count.update( self._pending_tags_to_count )
if self._show_petitioned: tags_to_count.update( self._petitioned_tags_to_count )
if self._sort == CC.SORT_BY_INCIDENCE_ASC: compare_function = lambda a, b: cmp( ( tags_to_count[ self._strings_to_terms[ a ] ], a ), ( tags_to_count[ self._strings_to_terms[ b ] ], b ) )
elif self._sort == CC.SORT_BY_INCIDENCE_DESC: compare_function = lambda a, b: cmp( ( tags_to_count[ self._strings_to_terms[ b ] ], a ), ( tags_to_count[ self._strings_to_terms[ a ] ], b ) )
def key( a ):
return tags_to_count[ self._strings_to_terms[ a ] ]
if self._sort == CC.SORT_BY_INCIDENCE_ASC:
reverse = False
elif self._sort == CC.SORT_BY_INCIDENCE_DESC:
reverse = True
self._ordered_strings.sort( compare_function )
self._ordered_strings.sort( key = key, reverse = reverse )
self._TextsHaveChanged()
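The sort rewrite above moves from `cmp`-style comparison functions to `key`/`reverse` sorting, which is faster (the key is computed once per item) and is the only form python 3 supports. The incidence sort, as a standalone sketch with illustrative names:

```python
import collections

def sort_tags_by_incidence( tag_strings, strings_to_tags, tags_to_count, reverse = False ):
    """Sort display strings by how often their underlying tag occurs."""
    
    def key( s ):
        
        # sort by count, then by the string itself for a stable, predictable order
        return ( tags_to_count[ strings_to_tags[ s ] ], s )
        
    return sorted( tag_strings, key = key, reverse = reverse )
```

Building `tags_to_count` from a `collections.Counter`, as the diff does, also makes the merge of current/deleted/pending/petitioned counts a series of simple `update` calls.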
@ -3774,6 +3848,30 @@ class ListBoxTagsSelection( ListBoxTags ):
self._RecalcStrings()
def IncrementTagsByMedia( self, media ):
( current_tags_to_count, deleted_tags_to_count, pending_tags_to_count, petitioned_tags_to_count ) = ClientData.GetMediasTagCount( media, tag_service_key = self._tag_service_key, collapse_siblings = self._collapse_siblings )
self._current_tags_to_count.update( current_tags_to_count )
self._deleted_tags_to_count.update( deleted_tags_to_count )
self._pending_tags_to_count.update( pending_tags_to_count )
self._petitioned_tags_to_count.update( petitioned_tags_to_count )
tags_changed = set()
if self._show_current: tags_changed.update( current_tags_to_count.keys() )
if self._show_deleted: tags_changed.update( deleted_tags_to_count.keys() )
if self._show_pending: tags_changed.update( pending_tags_to_count.keys() )
if self._show_petitioned: tags_changed.update( petitioned_tags_to_count.keys() )
if len( tags_changed ) > 0:
self._RecalcStrings( tags_changed )
self._last_media.update( media )
def SetTagsByMedia( self, media, force_reload = False ):
media = set( media )
@ -3787,6 +3885,8 @@ class ListBoxTagsSelection( ListBoxTags ):
self._pending_tags_to_count = pending_tags_to_count
self._petitioned_tags_to_count = petitioned_tags_to_count
self._RecalcStrings()
else:
removees = self._last_media.difference( media )
@ -3816,8 +3916,25 @@ class ListBoxTagsSelection( ListBoxTags ):
self._RecalcStrings()
if len( removees ) == 0:
tags_changed = set()
if self._show_current: tags_changed.update( current_tags_to_count.keys() )
if self._show_deleted: tags_changed.update( deleted_tags_to_count.keys() )
if self._show_pending: tags_changed.update( pending_tags_to_count.keys() )
if self._show_petitioned: tags_changed.update( petitioned_tags_to_count.keys() )
if len( tags_changed ) > 0:
self._RecalcStrings( tags_changed )
else:
self._RecalcStrings()
self._last_media = media
@ -3845,6 +3962,7 @@ class ListBoxTagsSelectionManagementPanel( ListBoxTagsSelection ):
self._page_key = page_key
self._predicates_callable = predicates_callable
HydrusGlobals.client_controller.sub( self, 'IncrementTagsByMediaPubsub', 'increment_tags_selection' )
HydrusGlobals.client_controller.sub( self, 'SetTagsByMediaPubsub', 'new_tags_selection' )
HydrusGlobals.client_controller.sub( self, 'ChangeTagRepositoryPubsub', 'change_tag_repository' )
@ -3886,9 +4004,20 @@ class ListBoxTagsSelectionManagementPanel( ListBoxTagsSelection ):
if page_key == self._page_key: self.ChangeTagRepository( service_key )
def IncrementTagsByMediaPubsub( self, page_key, media ):
if page_key == self._page_key:
self.IncrementTagsByMedia( media )
def SetTagsByMediaPubsub( self, page_key, media, force_reload = False ):
if page_key == self._page_key: self.SetTagsByMedia( media, force_reload = force_reload )
if page_key == self._page_key:
self.SetTagsByMedia( media, force_reload = force_reload )
class ListBoxTagsSelectionTagsDialog( ListBoxTagsSelection ):

View File

@ -1730,9 +1730,7 @@ class DialogInputLocalFiles( Dialog ):
break
info = os.lstat( path )
size = info[6]
size = HydrusPaths.GetPathSize( path )
if size == 0:
@ -2792,7 +2790,6 @@ class DialogPageChooser( Dialog ):
elif menu_keyword == 'gallery':
entries.append( ( 'page_import_booru', None ) )
entries.append( ( 'page_import_gallery', HC.SITE_TYPE_GIPHY ) )
entries.append( ( 'page_import_gallery', HC.SITE_TYPE_DEVIANT_ART ) )
entries.append( ( 'menu', 'hentai foundry' ) )
entries.append( ( 'page_import_gallery', HC.SITE_TYPE_NEWGROUNDS ) )

View File

@ -3902,7 +3902,7 @@ class DialogManageOptions( ClientGUIDialogs.Dialog ):
gallery_identifiers = []
for site_type in [ HC.SITE_TYPE_DEFAULT, HC.SITE_TYPE_DEVIANT_ART, HC.SITE_TYPE_GIPHY, HC.SITE_TYPE_HENTAI_FOUNDRY, HC.SITE_TYPE_NEWGROUNDS, HC.SITE_TYPE_PIXIV, HC.SITE_TYPE_TUMBLR ]:
for site_type in [ HC.SITE_TYPE_DEFAULT, HC.SITE_TYPE_DEVIANT_ART, HC.SITE_TYPE_HENTAI_FOUNDRY, HC.SITE_TYPE_NEWGROUNDS, HC.SITE_TYPE_PIXIV, HC.SITE_TYPE_TUMBLR ]:
gallery_identifiers.append( ClientDownloading.GalleryIdentifier( site_type ) )
@ -7127,7 +7127,6 @@ class DialogManageSubscriptions( ClientGUIDialogs.Dialog ):
site_types = []
site_types.append( HC.SITE_TYPE_BOORU )
site_types.append( HC.SITE_TYPE_DEVIANT_ART )
site_types.append( HC.SITE_TYPE_GIPHY )
site_types.append( HC.SITE_TYPE_HENTAI_FOUNDRY_ARTIST )
site_types.append( HC.SITE_TYPE_HENTAI_FOUNDRY_TAGS )
site_types.append( HC.SITE_TYPE_NEWGROUNDS )

View File

@ -382,7 +382,7 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledWindow ):
def _GetPrettyStatus( self ):
num_files = sum( [ media.GetNumFiles() for media in self._sorted_media ] )
num_files = len( self._hashes )
num_selected = self._GetNumSelected()
@ -655,6 +655,12 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledWindow ):
HydrusGlobals.client_controller.pub( 'new_page_status', self._page_key, self._GetPrettyStatus() )
def _PublishSelectionIncrement( self, medias ):
HydrusGlobals.client_controller.pub( 'increment_tags_selection', self._page_key, medias )
HydrusGlobals.client_controller.pub( 'new_page_status', self._page_key, self._GetPrettyStatus() )
def _RatingsFilter( self, service_key ):
if service_key is None:
@ -853,7 +859,10 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledWindow ):
def AddMediaResults( self, page_key, media_results, append = True ):
if page_key == self._page_key: return ClientMedia.ListeningMediaList.AddMediaResults( self, media_results, append = append )
if page_key == self._page_key:
return ClientMedia.ListeningMediaList.AddMediaResults( self, media_results, append = append )
def Archive( self, hashes ):
@ -1604,7 +1613,10 @@ class MediaPanelThumbnails( MediaPanel ):
self._FadeThumbnail( thumbnail )
self._PublishSelectionChange()
if len( self._selected_media ) == 0:
self._PublishSelectionIncrement( thumbnails )

View File

@ -207,11 +207,12 @@ class Media( object ):
def __init__( self ):
self._id = HydrusData.GenerateKey()
self._id_hash = self._id.__hash__()
def __eq__( self, other ): return self.__hash__() == other.__hash__()
def __hash__( self ): return self._id.__hash__()
def __hash__( self ): return self._id_hash
def __ne__( self, other ): return self.__hash__() != other.__hash__()
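`Media.__hash__` previously recomputed `self._id.__hash__()` on every call; the diff computes it once in `__init__` and returns the cached integer, which matters because these objects are hashed constantly during sorted-list index regeneration. A self-contained sketch of the pattern (`os.urandom` stands in for `HydrusData.GenerateKey()`):

```python
import os

class CachedHashMedia:
    """The hash is computed once at construction instead of on every __hash__ call."""

    def __init__(self):
        self._id = os.urandom(32)       # stand-in for HydrusData.GenerateKey()
        self._id_hash = hash(self._id)  # cache the integer up front

    def __hash__(self):
        return self._id_hash            # no per-call rehashing

    def __eq__(self, other):
        return self.__hash__() == other.__hash__()

    def __ne__(self, other):
        return self.__hash__() != other.__hash__()

media = CachedHashMedia()
seen = {media}
```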
@ -232,6 +233,8 @@ class MediaList( object ):
self._singleton_media = set( self._sorted_media )
self._collected_media = set()
self._RecalcHashes()
def _CalculateCollectionKeysToMedias( self, collect_by, medias ):
@ -276,15 +279,6 @@ class MediaList( object ):
def _GetFirst( self ): return self._sorted_media[ 0 ]
def _GetHashes( self ):
result = set()
for media in self._sorted_media: result.update( media.GetHashes() )
return result
def _GetLast( self ): return self._sorted_media[ -1 ]
def _GetMedia( self, hashes, discriminator = None ):
@ -316,6 +310,14 @@ class MediaList( object ):
else: return self._sorted_media[ previous_index ]
def _RecalcHashes( self ):
self._hashes = set()
for media in self._collected_media: self._hashes.update( media.GetHashes() )
for media in self._singleton_media: self._hashes.add( media.GetHash() )
def _RemoveMedia( self, singleton_media, collected_media ):
if type( singleton_media ) != set: singleton_media = set( singleton_media )
@ -337,11 +339,21 @@ class MediaList( object ):
if append:
for media in new_media:
self._hashes.add( media.GetHash() )
self._singleton_media.update( new_media )
self._sorted_media.append_items( new_media )
else:
for media in new_media:
self._hashes.update( media.GetHashes() )
if self._collect_by is not None:
keys_to_medias = self._CalculateCollectionKeysToMedias( self._collect_by, new_media )
@ -558,6 +570,8 @@ class MediaList( object ):
self._RemoveMedia( affected_singleton_media, affected_collected_media )
self._RecalcHashes()
@ -586,7 +600,12 @@ class MediaList( object ):
def ResetService( self, service_key ):
if service_key == self._file_service_key: self._RemoveMedia( self._singleton_media, self._collected_media )
if service_key == self._file_service_key:
self._RemoveMedia( self._singleton_media, self._collected_media )
self._RecalcHashes()
else:
for media in self._collected_media: media.ResetService( service_key )
@ -673,15 +692,16 @@ class ListeningMediaList( MediaList ):
self._file_query_result.AddMediaResults( media_results )
existing_hashes = self._GetHashes()
new_media = []
for media_result in media_results:
hash = media_result.GetHash()
if hash in existing_hashes: continue
if hash in self._hashes:
continue
new_media.append( self._GenerateMediaSingleton( media_result ) )
@ -719,14 +739,8 @@ class MediaCollection( MediaList, Media ):
self._RecalcInternals()
#def __hash__( self ): return frozenset( self._hashes ).__hash__()
def _RecalcInternals( self ):
self._hashes = set()
for media in self._sorted_media: self._hashes.update( media.GetHashes() )
self._archive = True in ( media.HasArchive() for media in self._sorted_media )
self._inbox = True in ( media.HasInbox() for media in self._sorted_media )
@ -781,7 +795,10 @@ class MediaCollection( MediaList, Media ):
def GetHashes( self, has_location = None, discriminant = None, not_uploaded_to = None, ordered = False ):
if has_location is None and discriminant is None and not_uploaded_to is None and not ordered: return self._hashes
if has_location is None and discriminant is None and not_uploaded_to is None and not ordered:
return self._hashes
else:
if ordered:
@ -805,7 +822,10 @@ class MediaCollection( MediaList, Media ):
def GetMime( self ): return HC.APPLICATION_HYDRUS_CLIENT_COLLECTION
def GetNumFiles( self ): return len( self._hashes )
def GetNumFiles( self ):
return len( self._hashes )
def GetNumInbox( self ): return sum( ( media.GetNumInbox() for media in self._sorted_media ) )
@ -894,32 +914,57 @@ class MediaSingleton( Media ):
def GetHashes( self, has_location = None, discriminant = None, not_uploaded_to = None, ordered = False ):
if ordered:
no_result = []
else:
no_result = set()
locations_manager = self._media_result.GetLocationsManager()
if discriminant is not None:
inbox = self._media_result.GetInbox()
if ( discriminant == CC.DISCRIMINANT_INBOX and not inbox ) or ( discriminant == CC.DISCRIMINANT_ARCHIVE and inbox ) or ( discriminant == CC.DISCRIMINANT_LOCAL and not locations_manager.HasLocal() ) or ( discriminant == CC.DISCRIMINANT_NOT_LOCAL and locations_manager.HasLocal() ): return no_result
locations_manager = self._media_result.GetLocationsManager()
if ( discriminant == CC.DISCRIMINANT_INBOX and not inbox ) or ( discriminant == CC.DISCRIMINANT_ARCHIVE and inbox ) or ( discriminant == CC.DISCRIMINANT_LOCAL and not locations_manager.HasLocal() ) or ( discriminant == CC.DISCRIMINANT_NOT_LOCAL and locations_manager.HasLocal() ):
if ordered:
return []
else:
return set()
if has_location is not None:
if has_location not in locations_manager.GetCurrent(): return no_result
locations_manager = self._media_result.GetLocationsManager()
if has_location not in locations_manager.GetCurrent():
if ordered:
return []
else:
return set()
if not_uploaded_to is not None:
if not_uploaded_to in locations_manager.GetCurrentRemote(): return no_result
locations_manager = self._media_result.GetLocationsManager()
if not_uploaded_to in locations_manager.GetCurrentRemote():
if ordered:
return []
else:
return set()
if ordered:
@ -1246,22 +1291,42 @@ class SortedList( object ):
for item in self._sorted_list: yield item
def __len__( self ): return self._sorted_list.__len__()
def __len__( self ):
return self._sorted_list.__len__()
def _DirtyIndices( self ): self._items_to_indices = None
def _DirtyIndices( self ):
self._items_to_indices = None
def _RecalcIndices( self ): self._items_to_indices = { item : index for ( index, item ) in enumerate( self._sorted_list ) }
def _RecalcIndices( self ):
self._items_to_indices = { item : index for ( index, item ) in enumerate( self._sorted_list ) }
def append_items( self, items ):
self._sorted_list.extend( items )
if self._items_to_indices is None:
self._RecalcIndices()
self._DirtyIndices()
for ( i, item ) in enumerate( items, start = len( self._sorted_list ) ):
self._items_to_indices[ item ] = i
self._sorted_list.extend( items )
def index( self, item ):
if self._items_to_indices is None: self._RecalcIndices()
if self._items_to_indices is None:
self._RecalcIndices()
try:
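`SortedList` keeps an item-to-index dict alongside the list so `index()` is a dict lookup rather than a linear scan, rebuilding the dict lazily and extending it in place on `append_items`. A simplified sketch of that shape (names are illustrative):

```python
class IndexedSortedList:
    """A list plus a lazily rebuilt item -> index dict, so index() is O(1)."""

    def __init__(self, items=()):
        self._sorted_list = list(items)
        self._items_to_indices = None  # built on first lookup

    def _recalc_indices(self):
        self._items_to_indices = {item: i for (i, item) in enumerate(self._sorted_list)}

    def append_items(self, items):
        items = list(items)
        if self._items_to_indices is not None:
            # extend the cache in place: new items start at the current length
            for (i, item) in enumerate(items, start=len(self._sorted_list)):
                self._items_to_indices[item] = i
        self._sorted_list.extend(items)

    def index(self, item):
        if self._items_to_indices is None:
            self._recalc_indices()
        return self._items_to_indices[item]

thumbs = IndexedSortedList(['a', 'b'])
thumbs.append_items(['c', 'd'])
```

The key detail is that appends never force a full rebuild; only operations that invalidate existing positions (sorts, removals) dirty the cache.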

View File

@ -2,8 +2,10 @@ import HydrusConstants as HC
import HydrusExceptions
import HydrusPaths
import HydrusSerialisable
import errno
import httplib
import os
import socket
import socks
import threading
import time
@ -293,6 +295,66 @@ class HTTPConnection( object ):
self._RefreshConnection()
def _GetResponse( self, method_string, path_and_query, request_headers, body, attempt_number = 1 ):
try:
self._connection.request( method_string, path_and_query, headers = request_headers, body = body )
return self._connection.getresponse()
except ( httplib.CannotSendRequest, httplib.BadStatusLine ):
# for some reason, we can't send a request on the current connection, so let's make a new one and try again!
time.sleep( 1 )
if attempt_number <= 3:
self._RefreshConnection()
return self._GetResponse( method_string, path_and_query, request_headers, body, attempt_number = attempt_number + 1 )
else:
raise
except socket.error as e:
if e.errno == errno.WSAEACCES:
text = 'The hydrus client did not have permission to make a connection to ' + HydrusData.ToUnicode( self._host )
if self._port is not None:
text += ' on port ' + HydrusData.ToUnicode( self._port )
text += '. This is usually due to a firewall stopping it.'
raise HydrusExceptions.FirewallException( text )
elif e.errno == errno.WSAECONNRESET:
time.sleep( 5 )
if attempt_number <= 3:
self._RefreshConnection()
return self._GetResponse( method_string, path_and_query, request_headers, body, attempt_number = attempt_number + 1 )
else:
text = 'The hydrus client\'s connection to ' + HydrusData.ToUnicode( self._host ) + ' kept on being reset by the remote host, so the attempt was abandoned.'
raise HydrusExceptions.NetworkException( text )
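`_GetResponse` above retries a failed request a few times, rebuilding the connection and sleeping between attempts, and converts low-level socket errors into typed hydrus exceptions. A generic sketch of that retry-with-reconnect shape (the exception type and callback names here are illustrative, not hydrus's):

```python
import time

class RetryExhausted(Exception):
    pass

def request_with_retry(do_request, refresh_connection, max_attempts=3, wait=0.0):
    """Run do_request(); on a retryable failure, rebuild the connection and try again."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_request()
        except ConnectionError:
            if attempt == max_attempts:
                raise RetryExhausted('connection kept failing; giving up')
            time.sleep(wait)
            refresh_connection()

# a fake request that succeeds on the third attempt
calls = []
def flaky():
    calls.append('x')
    if len(calls) < 3:
        raise ConnectionError()
    return 'ok'

result = request_with_retry(flaky, refresh_connection=lambda: None)
```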
def _ParseCookies( self, raw_cookies_string ):
cookies = {}
@ -502,22 +564,7 @@ class HTTPConnection( object ):
request_headers = { str( k ) : str( v ) for ( k, v ) in request_headers.items() }
try:
self._connection.request( method_string, path_and_query, headers = request_headers, body = body )
response = self._connection.getresponse()
except ( httplib.CannotSendRequest, httplib.BadStatusLine ):
# for some reason, we can't send a request on the current connection, so let's make a new one and try again!
self._RefreshConnection()
self._connection.request( method_string, path_and_query, headers = request_headers, body = body )
response = self._connection.getresponse()
response = self._GetResponse( method_string, path_and_query, request_headers, body )
if response.status == 200 and temp_path is not None:

View File

@ -53,7 +53,7 @@ options = {}
# Misc
NETWORK_VERSION = 17
SOFTWARE_VERSION = 188
SOFTWARE_VERSION = 189
UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -7,6 +7,7 @@ import HydrusGlobals
import os
import pstats
import Queue
import random
import sqlite3
import sys
import traceback
@ -44,8 +45,6 @@ class HydrusDB( object ):
( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone()
run_analyze_after = version < HC.SOFTWARE_VERSION
if version < HC.SOFTWARE_VERSION - 50: raise Exception( 'Your current version of hydrus ' + str( version ) + ' is too old for this version ' + str( HC.SOFTWARE_VERSION ) + ' to update. Please try updating with version ' + str( version + 45 ) + ' or earlier first.' )
while version < HC.SOFTWARE_VERSION:
@ -74,21 +73,9 @@ class HydrusDB( object ):
( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone()
if run_analyze_after:
self._AnalyzeAfterUpdate()
self._CloseDBCursor()
def _AnalyzeAfterUpdate( self ):
HydrusData.Print( 'Analyzing db after update...' )
self._c.execute( 'ANALYZE' )
def _CleanUpCaches( self ):
pass
@ -167,9 +154,10 @@ class HydrusDB( object ):
self._c.execute( 'DROP TABLE IF EXISTS ratings_aggregates;' )
self._c.execute( 'PRAGMA cache_size = 10000;' )
self._c.execute( 'PRAGMA cache_size = -50000;' )
self._c.execute( 'PRAGMA foreign_keys = ON;' )
self._c.execute( 'PRAGMA synchronous = 1;' )
self._c.execute( 'PRAGMA journal_mode = WAL;' )
def _ManageDBError( self, job, e ):

View File

@ -1307,6 +1307,26 @@ UNKNOWN_ACCOUNT_TYPE = AccountType( 'unknown account', [ HC.UNKNOWN_PERMISSION ]
def GetUnknownAccount( account_key = None ): return Account( account_key, UNKNOWN_ACCOUNT_TYPE, 0, None, 0, 0 )
class BigJobPauser( object ):
def __init__( self, period = 10, wait_time = 0.1 ):
self._period = period
self._wait_time = wait_time
self._next_pause = GetNow() + self._period
def Pause( self ):
if TimeHasPassed( self._next_pause ):
time.sleep( self._wait_time )
self._next_pause = GetNow() + self._period
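`BigJobPauser` sleeps briefly once every `period` seconds, so a long loop yields to the GUI and other threads without the caller doing any bookkeeping beyond sprinkling `Pause()` into the loop. A self-contained sketch, using `time.time()` in place of hydrus's `GetNow`/`TimeHasPassed` helpers:

```python
import time

class BigJobPauser:
    """Sleep wait_time seconds whenever period seconds have passed since the last pause."""

    def __init__(self, period=10, wait_time=0.1):
        self._period = period
        self._wait_time = wait_time
        self._next_pause = time.time() + self._period

    def pause(self):
        if time.time() > self._next_pause:
            time.sleep(self._wait_time)
            self._next_pause = time.time() + self._period

# a big job just calls pause() once per iteration; most calls are no-ops
pauser = BigJobPauser(period=1000, wait_time=0)
scheduled = pauser._next_pause
for _ in range(1000):
    pauser.pause()
```

Because the check is a single comparison, the per-iteration overhead is negligible, which is why the repo-processing call and the directory-copy helpers can use it freely.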
class ClientToServerContentUpdatePackage( HydrusYAMLBase ):
yaml_tag = u'!ClientToServerContentUpdatePackage'
@ -1626,6 +1646,11 @@ class ContentUpdate( object ):
return hashes
def GetWeight( self ):
return len( self.GetHashes() )
def IsInboxRelated( self ):
return self._action in ( HC.CONTENT_UPDATE_ARCHIVE, HC.CONTENT_UPDATE_INBOX )

View File

@ -2,16 +2,19 @@ class CantRenderWithCVException( Exception ): pass
class DBException( Exception ): pass
class DBAccessException( Exception ): pass
class FileException( Exception ): pass
class ForbiddenException( Exception ): pass
class MimeException( Exception ): pass
class NameException( Exception ): pass
class NetworkVersionException( Exception ): pass
class NoContentException( Exception ): pass
class NotFoundException( Exception ): pass
class NotModifiedException( Exception ): pass
class PermissionException( Exception ): pass
class ServerBusyException( Exception ): pass
class SessionException( Exception ): pass
class ShutdownException( Exception ): pass
class SizeException( Exception ): pass
class WrongServiceTypeException( Exception ): pass
class NetworkException( Exception ): pass
class FirewallException( NetworkException ): pass
class ForbiddenException( NetworkException ): pass
class NetworkVersionException( NetworkException ): pass
class NoContentException( NetworkException ): pass
class NotModifiedException( NetworkException ): pass
class ServerBusyException( NetworkException ): pass
class SessionException( NetworkException ): pass
class WrongServiceTypeException( NetworkException ): pass
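The hunk above reparents the protocol errors under a common `NetworkException`, so callers can catch the whole family (firewall, reset, forbidden, session trouble, and so on) in one clause instead of enumerating types. A tiny demonstration of the pattern:

```python
class NetworkException(Exception): pass
class FirewallException(NetworkException): pass
class ForbiddenException(NetworkException): pass
class SessionException(NetworkException): pass

def fetch_update():
    # any member of the network family can surface here
    raise FirewallException('could not connect: blocked by a firewall')

try:
    fetch_update()
except NetworkException as e:
    caught = type(e).__name__
```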

View File

@ -147,9 +147,7 @@ def GetExtraHashesFromPath( path ):
def GetFileInfo( path ):
info = os.lstat( path )
size = info[6]
size = HydrusPaths.GetPathSize( path )
if size == 0: raise HydrusExceptions.SizeException( 'File is of zero length!' )
@ -171,11 +169,7 @@ def GetFileInfo( path ):
( ( width, height ), duration, num_frames ) = HydrusFlashHandling.GetFlashProperties( path )
elif mime == HC.VIDEO_FLV:
( ( width, height ), duration, num_frames ) = HydrusVideoHandling.GetFLVProperties( path )
elif mime in ( HC.VIDEO_WMV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_WEBM ):
elif mime in ( HC.VIDEO_FLV, HC.VIDEO_WMV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_WEBM ):
( ( width, height ), duration, num_frames ) = HydrusVideoHandling.GetFFMPEGVideoProperties( path )

View File

@ -14,3 +14,4 @@ server_busy = False
shutdown_complete = False
restart = False
emergency_exit = False

View File

@ -64,9 +64,11 @@ def ConvertPortablePathToAbsPath( portable_path ):
def CopyAndMergeTree( source, dest ):
pauser = HydrusData.BigJobPauser()
if not os.path.exists( dest ):
os.mkdir( dest )
os.makedirs( dest )
for ( root, dirnames, filenames ) in os.walk( source ):
@ -75,12 +77,14 @@ def CopyAndMergeTree( source, dest ):
for dirname in dirnames:
pauser.Pause()
source_path = os.path.join( root, dirname )
dest_path = os.path.join( dest_root, dirname )
if not os.path.exists( dest_path ):
os.mkdir( dest_path )
os.makedirs( dest_path )
shutil.copystat( source_path, dest_path )
@ -88,6 +92,8 @@ def CopyAndMergeTree( source, dest ):
for filename in filenames:
pauser.Pause()
source_path = os.path.join( root, filename )
dest_path = os.path.join( dest_root, filename )
@ -113,6 +119,10 @@ def DeletePath( path ):
def GetPathSize( path ):
return os.lstat( path )[6]
def GetTempFile(): return tempfile.TemporaryFile()
def GetTempFileQuick(): return tempfile.SpooledTemporaryFile( max_size = 1024 * 1024 * 4 )
def GetTempPath(): return tempfile.mkstemp( prefix = 'hydrus' )
@ -175,6 +185,80 @@ def LaunchFile( path ):
thread.start()
def MirrorTree( source, dest ):
pauser = HydrusData.BigJobPauser()
if not os.path.exists( dest ):
os.makedirs( dest )
for ( root, dirnames, filenames ) in os.walk( source ):
dest_root = root.replace( source, dest )
surplus_dest_paths = { os.path.join( dest_root, dest_filename ) for dest_filename in os.listdir( dest_root ) }
for dirname in dirnames:
pauser.Pause()
source_path = os.path.join( root, dirname )
dest_path = os.path.join( dest_root, dirname )
surplus_dest_paths.discard( dest_path )
if not os.path.exists( dest_path ):
os.makedirs( dest_path )
shutil.copystat( source_path, dest_path )
for filename in filenames:
pauser.Pause()
source_path = os.path.join( root, filename )
dest_path = os.path.join( dest_root, filename )
surplus_dest_paths.discard( dest_path )
if not PathsHaveSameSizeAndDate( source_path, dest_path ):
shutil.copy2( source_path, dest_path )
for dest_path in surplus_dest_paths:
pauser.Pause()
DeletePath( dest_path )
def PathsHaveSameSizeAndDate( path1, path2 ):
if os.path.exists( path1 ) and os.path.exists( path2 ):
path1_stat = os.lstat( path1 )
path2_stat = os.lstat( path2 )
same_size = path1_stat[6] == path2_stat[6]
same_modified_time = path1_stat[8] == path2_stat[8]
if same_size and same_modified_time:
return True
return False
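The new backup path skips re-copying a file when its size and modified time already match the destination; since `shutil.copy2` preserves the modified time, a file copied on a previous run compares equal on the next one. A sketch of the same check using named `os.stat` fields (`st_size` and `st_mtime` are what `lstat(path)[6]` and `[8]` index into; `mirror_file` is an illustrative helper):

```python
import os
import shutil
import tempfile

def paths_have_same_size_and_date(path1, path2):
    if not (os.path.exists(path1) and os.path.exists(path2)):
        return False
    s1, s2 = os.lstat(path1), os.lstat(path2)
    return s1.st_size == s2.st_size and int(s1.st_mtime) == int(s2.st_mtime)

def mirror_file(source, dest):
    """Copy only when the dest is missing or stale; returns True if it copied."""
    if paths_have_same_size_and_date(source, dest):
        return False
    shutil.copy2(source, dest)  # copy2 preserves the modified time
    return True

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'a')
dst = os.path.join(tmp, 'b')
with open(src, 'wb') as f:
    f.write(b'hello')

first = mirror_file(src, dst)   # dest missing: copies
second = mirror_file(src, dst)  # size and mtime match: skipped
```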
def ReadFileLikeAsBlocks( f ):
next_block = f.read( HC.READ_BLOCK_SIZE )

View File

@ -1,5 +1,6 @@
import HydrusConstants as HC
import HydrusData
import HydrusExceptions
import Queue
import threading
import traceback
@ -121,7 +122,14 @@ class HydrusPubSub( object ):
else:
callable( *args, **kwargs )
try:
callable( *args, **kwargs )
except HydrusExceptions.ShutdownException:
return

View File

@ -292,9 +292,7 @@ class HydrusResourceCommand( Resource ):
path = response_context.GetPath()
info = os.lstat( path )
size = info[6]
size = HydrusPaths.GetPathSize( path )
if response_context.IsJSON():

View File

@ -1,4 +1,3 @@
from flvlib import tags as flv_tags
import HydrusConstants as HC
import HydrusData
import HydrusExceptions
@ -47,45 +46,6 @@ def GetFFMPEGVideoProperties( path ):
return ( ( w, h ), duration, num_frames )
def GetFLVProperties( path ):
with open( path, 'rb' ) as f:
flv = flv_tags.FLV( f )
script_tag = None
for tag in flv.iter_tags():
if isinstance( tag, flv_tags.ScriptTag ):
script_tag = tag
break
width = 853
height = 480
duration = 0
num_frames = 0
if script_tag is not None:
tag_dict = script_tag.variable
# tag_dict can sometimes be a float?
# it is on the broken one I tried!
if 'width' in tag_dict: width = tag_dict[ 'width' ]
if 'height' in tag_dict: height = tag_dict[ 'height' ]
if 'duration' in tag_dict: duration = int( tag_dict[ 'duration' ] * 1000 )
if 'framerate' in tag_dict: num_frames = int( ( duration / 1000.0 ) * tag_dict[ 'framerate' ] )
return ( ( width, height ), duration, num_frames )
def GetMatroskaOrWebm( path ):
tags = matroska.parse( path )

View File

@ -340,6 +340,14 @@ class Controller( HydrusController.HydrusController ):
def JustWokeFromSleep( self ): return False
def MaintainDB( self ):
stale_time_delta = 30 * 86400
stop_time = HydrusData.GetNow() + 10
self.WriteSynchronous( 'analyze', stale_time_delta, stop_time )
def NotifyPubSubs( self ):
self.CallToThread( self.ProcessPubSub )

View File

@ -314,6 +314,50 @@ class DB( HydrusDB.HydrusDB ):
def _AddToExpires( self, account_ids, timespan ): self._c.execute( 'UPDATE accounts SET expires = expires + ? WHERE account_id IN ' + HydrusData.SplayListForDB( account_ids ) + ';', ( timespan, ) )
def _Analyze( self, stale_time_delta, stop_time ):
all_names = [ name for ( name, ) in self._c.execute( 'SELECT name FROM sqlite_master;' ) ]
existing_names_to_timestamps = dict( self._c.execute( 'SELECT name, timestamp FROM analyze_timestamps;' ).fetchall() )
names_to_analyze = [ name for name in all_names if name not in existing_names_to_timestamps or HydrusData.TimeHasPassed( existing_names_to_timestamps[ name ] + stale_time_delta ) ]
random.shuffle( names_to_analyze )
if len( names_to_analyze ) > 0:
HydrusGlobals.server_busy = True
while len( names_to_analyze ) > 0:
name = names_to_analyze.pop()
started = HydrusData.GetNowPrecise()
self._c.execute( 'ANALYZE ' + name + ';' )
self._c.execute( 'REPLACE INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
time_took = HydrusData.GetNowPrecise() - started
HydrusData.Print( 'Analyzed ' + name + ' in ' + HydrusData.ConvertTimeDeltaToPrettyString( time_took ) )
if HydrusData.TimeHasPassed( stop_time ):
break
self._c.execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
still_more_to_do = len( names_to_analyze ) > 0
HydrusGlobals.server_busy = False
return still_more_to_do
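`_Analyze` above replaces the single whole-db `ANALYZE` with per-table calls, recording a timestamp per name so only stale tables are re-analyzed within a stop-time budget, and finishing with `ANALYZE sqlite_master;` to reload the fresh stats into the query planner. A runnable sketch of the same routine with the stdlib `sqlite3` (the `files` table is illustrative):

```python
import random
import sqlite3
import time

def analyze_stale(conn, stale_time_delta, stop_time):
    names = [n for (n,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table';")]
    stamps = dict(conn.execute('SELECT name, timestamp FROM analyze_timestamps;'))
    to_do = [n for n in names
             if n != 'analyze_timestamps'
             and time.time() > stamps.get(n, 0) + stale_time_delta]
    random.shuffle(to_do)
    while to_do:
        name = to_do.pop()
        conn.execute('ANALYZE ' + name + ';')
        conn.execute('REPLACE INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );',
                     (name, int(time.time())))
        if time.time() > stop_time:
            break  # ran out of budget; leave the rest for next maintenance pass
    conn.execute('ANALYZE sqlite_master;')  # reload the new stats into the query planner
    return len(to_do) > 0  # still more to do?

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );')
conn.execute('CREATE TABLE files ( hash_id INTEGER PRIMARY KEY );')
still_more = analyze_stale(conn, stale_time_delta=30 * 86400, stop_time=time.time() + 10)
```

Splitting the work this way is what lets analyze move out of the update path and into normal maintenance: each pass does a bounded amount of work and reports whether more remains.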
def _ApproveFilePetition( self, service_id, account_id, hash_ids, reason_id ):
self._ApproveFilePetitionOptimised( service_id, account_id, hash_ids )
@ -536,7 +580,10 @@ class DB( HydrusDB.HydrusDB ):
for dir in dirs:
if not os.path.exists( dir ): os.mkdir( dir )
if not os.path.exists( dir ):
os.makedirs( dir )
dirs = ( HC.SERVER_FILES_DIR, HC.SERVER_THUMBNAILS_DIR )
@ -549,7 +596,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( new_dir ):
os.mkdir( new_dir )
os.makedirs( new_dir )
@ -557,7 +604,7 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'BEGIN IMMEDIATE' )
self._c.execute( 'PRAGMA auto_vacuum = 0;' ) # none
self._c.execute( 'PRAGMA journal_mode=WAL;' )
self._c.execute( 'PRAGMA journal_mode = WAL;' )
now = HydrusData.GetNow()
@ -574,6 +621,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'CREATE TABLE account_types ( account_type_id INTEGER PRIMARY KEY, service_id INTEGER REFERENCES services ON DELETE CASCADE, title TEXT, account_type TEXT_YAML );' )
self._c.execute( 'CREATE UNIQUE INDEX account_types_service_id_account_type_id_index ON account_types ( service_id, account_type_id );' )
self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
self._c.execute( 'CREATE TABLE bans ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, admin_account_id INTEGER, reason_id INTEGER, created INTEGER, expires INTEGER, PRIMARY KEY( service_id, account_id ) );' )
self._c.execute( 'CREATE INDEX bans_expires ON bans ( expires );' )
@ -1744,37 +1793,53 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'COMMIT' )
HydrusData.Print( 'backing up: vacuum' )
self._c.execute( 'VACUUM' )
self._c.execute( 'PRAGMA journal_mode = TRUNCATE;' )
HydrusData.Print( 'backing up: analyze' )
self._c.execute( 'ANALYZE' )
if HC.PLATFORM_WINDOWS:
ideal_page_size = 4096
else:
ideal_page_size = 1024
( page_size, ) = self._c.execute( 'PRAGMA page_size;' ).fetchone()
if page_size != ideal_page_size:
self._c.execute( 'PRAGMA page_size = ' + str( ideal_page_size ) + ';' )
HydrusData.Print( 'backing up: vacuum' )
self._c.execute( 'VACUUM' )
self._c.close()
self._db.close()
self._InitDBCursor()
self._c.execute( 'BEGIN IMMEDIATE' )
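SQLite's `page_size` pragma only takes effect when the database file is rebuilt, which is why the backup path above sets the pragma and then runs a second `VACUUM` to rewrite the file at the new size (4096 on Windows). A minimal demonstration with the stdlib `sqlite3`:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), 'test.db')
conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit, so VACUUM is allowed
conn.execute('CREATE TABLE t ( x INTEGER );')

(old_size,) = conn.execute('PRAGMA page_size;').fetchone()
conn.execute('PRAGMA page_size = 8192;')  # pending until the next rebuild
conn.execute('VACUUM')                    # rewrites the whole db at the new page size
(new_size,) = conn.execute('PRAGMA page_size;').fetchone()
conn.close()
```

Note the page size cannot change while the database is in WAL journal mode, which fits the sequence above: the journal mode is dropped to TRUNCATE before the vacuum and WAL is restored when the cursor is reinitialised.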
backup_path = os.path.join( HC.DB_DIR, 'server_backup' )
HydrusData.Print( 'backing up: deleting old backup' )
HydrusPaths.RecyclePath( backup_path )
os.mkdir( backup_path )
if not os.path.exists( backup_path ):
os.makedirs( backup_path )
HydrusData.Print( 'backing up: copying db file' )
shutil.copy2( self._db_path, os.path.join( backup_path, self.DB_NAME + '.db' ) )
HydrusData.Print( 'backing up: copying files' )
shutil.copytree( HC.SERVER_FILES_DIR, os.path.join( backup_path, 'server_files' ) )
HydrusPaths.MirrorTree( HC.SERVER_FILES_DIR, os.path.join( backup_path, 'server_files' ) )
HydrusData.Print( 'backing up: copying thumbnails' )
shutil.copytree( HC.SERVER_THUMBNAILS_DIR, os.path.join( backup_path, 'server_thumbnails' ) )
HydrusPaths.MirrorTree( HC.SERVER_THUMBNAILS_DIR, os.path.join( backup_path, 'server_thumbnails' ) )
HydrusData.Print( 'backing up: copying updates' )
shutil.copytree( HC.SERVER_UPDATES_DIR, os.path.join( backup_path, 'server_updates' ) )
HydrusPaths.MirrorTree( HC.SERVER_UPDATES_DIR, os.path.join( backup_path, 'server_updates' ) )
self._InitDBCursor()
self._c.execute( 'BEGIN IMMEDIATE' )
HydrusData.Print( 'backing up: done!' )
@ -1914,7 +1979,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( update_dir ):
os.mkdir( update_dir )
os.makedirs( update_dir )
begin = 0
@ -2356,7 +2421,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( new_dir ):
os.mkdir( new_dir )
os.makedirs( new_dir )
@ -2473,7 +2538,7 @@ class DB( HydrusDB.HydrusDB ):
if not os.path.exists( dest_dir ):
os.mkdir( dest_dir )
os.makedirs( dest_dir )
source_path = os.path.join( HC.SERVER_UPDATES_DIR, filename )
@ -2528,6 +2593,11 @@ class DB( HydrusDB.HydrusDB ):
if version == 188:
self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
HydrusData.Print( 'The server has updated to version ' + str( version + 1 ) )
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@ -2555,6 +2625,7 @@ class DB( HydrusDB.HydrusDB ):
if action == 'account': result = self._ModifyAccount( *args, **kwargs )
elif action == 'account_types': result = self._ModifyAccountTypes( *args, **kwargs )
elif action == 'analyze': result = self._Analyze( *args, **kwargs )
elif action == 'backup': result = self._MakeBackup( *args, **kwargs )
elif action == 'check_data_usage': result = self._CheckDataUsage( *args, **kwargs )
elif action == 'check_monthly_data': result = self._CheckMonthlyData( *args, **kwargs )

View File

@ -20,7 +20,7 @@ class TestDaemons( unittest.TestCase ):
try:
if not os.path.exists( test_dir ): os.mkdir( test_dir )
if not os.path.exists( test_dir ): os.makedirs( test_dir )
with open( os.path.join( test_dir, '0' ), 'wb' ) as f: f.write( TestConstants.tinest_gif )
with open( os.path.join( test_dir, '1' ), 'wb' ) as f: f.write( TestConstants.tinest_gif ) # previously imported

View File

@ -537,9 +537,7 @@ class TestClientDB( unittest.TestCase ):
predicates.append( ClientSearch.Predicate( HC.PREDICATE_TYPE_SYSTEM_EVERYTHING, None, counts = { HC.CURRENT : 1 } ) )
predicates.extend( [ ClientSearch.Predicate( predicate_type, None ) for predicate_type in [ HC.PREDICATE_TYPE_SYSTEM_UNTAGGED, HC.PREDICATE_TYPE_SYSTEM_NUM_TAGS, HC.PREDICATE_TYPE_SYSTEM_LIMIT, HC.PREDICATE_TYPE_SYSTEM_SIZE, HC.PREDICATE_TYPE_SYSTEM_AGE, HC.PREDICATE_TYPE_SYSTEM_HASH, HC.PREDICATE_TYPE_SYSTEM_DIMENSIONS, HC.PREDICATE_TYPE_SYSTEM_DURATION, HC.PREDICATE_TYPE_SYSTEM_NUM_WORDS, HC.PREDICATE_TYPE_SYSTEM_MIME, HC.PREDICATE_TYPE_SYSTEM_SIMILAR_TO, HC.PREDICATE_TYPE_SYSTEM_FILE_SERVICE ] ] )
for r in result: print( repr( r ) )
print( 'yo' )
for r in predicates: print( repr( r ) )
self.assertEqual( result, predicates )
for i in range( len( predicates ) ): self.assertEqual( result[i].GetCount(), predicates[i].GetCount() )

View File

@ -42,8 +42,8 @@ class TestServer( unittest.TestCase ):
services_manager._keys_to_services[ self._tag_service.GetServiceKey() ] = self._tag_service
services_manager._keys_to_services[ self._admin_service.GetServiceKey() ] = self._admin_service
os.mkdir( ServerFiles.GetExpectedUpdateDir( self._file_service.GetServiceKey() ) )
os.mkdir( ServerFiles.GetExpectedUpdateDir( self._tag_service.GetServiceKey() ) )
os.makedirs( ServerFiles.GetExpectedUpdateDir( self._file_service.GetServiceKey() ) )
os.makedirs( ServerFiles.GetExpectedUpdateDir( self._tag_service.GetServiceKey() ) )
permissions = [ HC.GET_DATA, HC.POST_DATA, HC.POST_PETITIONS, HC.RESOLVE_PETITIONS, HC.MANAGE_USERS, HC.GENERAL_ADMIN, HC.EDIT_SERVICES ]