Version 276

parent 781005f1d4
commit 0fd301bedd
@ -25,23 +25,23 @@ If your hard drive is fine, please send me the details! If it could be my code b

now what?

If you have a recent backup of your client, it is probably a good idea just to restore from that. A lot of hard drive errors cannot be recovered from. Just copy your backup client.db on top of your corrupted client.db and you are good to go.

If you have a recent backup of your client, it is probably a good idea just to restore from that. A lot of hard drive errors cannot be recovered from, so if your backup is good, this may be the only way to recover the lost information. Make a copy of your corrupted database somewhere (just in case of other problems or it turns out the backup is also corrupt) and then restore your backup.

If you do not have a backup, you'll have to try recovering what data you can from the corrupted db.

First of all, make a _new_ backup of the corrupted db, just in case something goes wrong with the recovery and we need to try again. You can just copy the client.db, but having a copy of all your files is a great idea if you have the time and space.

FreeFileSync is great for maintaining regular backups. It only takes a few minutes a week to stay safe.

First of all, make a _new_ backup of the corrupted db, just in case something goes wrong with the recovery and we need to try again. You can just copy the client*.db files, but having a copy of all your media files is a great idea if you have the time and space. Please check the help for more information on making backups--they are important!

fix the problem

Open the SQLite shell, which should be in the db directory, called sqlite3 or sqlite3.exe. Type:

One way to fix common small problems is to instruct the database to copy itself completely. When it comes across damaged data, it will attempt to cleanly correct or nullify the broken links. The new copy often works better than the original.

So: open the SQLite shell, which should be in the db directory, called sqlite3 or sqlite3.exe. Type:

.open client.db
PRAGMA integrity_check;

The integrity check doesn't correct anything, but it lets you know the magnitude of the problem: if only a couple of issues are found, you may be in luck.
The integrity check doesn't correct anything, but it lets you know the magnitude of the problem: if only a couple of issues are found, you may be in luck. There are several .db files in the database, and client.db may not be the one broken. If you do not know which file is broken, try opening the other files in new shells to figure out the extent of the damage. client.mappings.db is usually the largest and busiest file in most people's databases, so it is a common victim.
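If you prefer, the same check can be driven from Python's built-in sqlite3 module rather than the shell; a minimal sketch (the helper name is mine, not hydrus code):

```python
import sqlite3

def check_integrity( path ):
    
    # runs SQLite's own consistency scan over the given db file;
    # returns [ 'ok' ] for a healthy file, or a list of
    # human-readable problem reports for a damaged one
    
    db = sqlite3.connect( path )
    
    try:
        
        return [ row[0] for row in db.execute( 'PRAGMA integrity_check;' ) ]
        
    finally:
        
        db.close()
```

On a healthy client.db this returns exactly `[ 'ok' ]`; anything else means damage.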
If it doesn't look too bad, then go:
@ -53,8 +53,8 @@ And wait a bit. It'll report its progress as it tries to copy your db's info to
Will close the shell.

If the clone doesn't work, contact me and I'll help you manually extract what you can to a new db.

If the clone operation is successful, rename client.db to client_old.db and client_new.db to client.db. Then, try running the client!

If the clone does work, rename client.db to client_old.db and client_new.db to client.db. Then, try running the client!
If the clone is unsuccessful or it still does not run correctly, contact me (Hydrus Dev) and I'll help you with the next steps. In the worst case, we will manually extract what we can to a new db.
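The shell's clone copies everything it can, row by row, into a fresh file; a rough Python equivalent using sqlite3's iterdump (names are illustrative, and a badly damaged file may still fail partway through):

```python
import sqlite3

def clone_database( source_path, dest_path ):
    
    # reads every recoverable object and row out of the source db
    # and replays them into a brand new, clean file
    
    source = sqlite3.connect( source_path )
    dest = sqlite3.connect( dest_path )
    
    try:
        
        dest.executescript( '\n'.join( source.iterdump() ) )
        
    finally:
        
        source.close()
        dest.close()
```

Because this re-reads every row rather than copying raw pages, the destination file is rebuilt cleanly rather than inheriting the source's corruption.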
If you still get problems, please contact me. Check help/contact.html for ways to do that.
@ -8,8 +8,40 @@

<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 276</h3></li>
<ul>
<li>the new thread watcher object will no longer produce check periods shorter than the time since the latest file. this effectively throttles checking on threads that were posting very fast but have since suddenly stopped completely</li>
<li>thread watchers now parse their thread subject and place this in the left management panel</li>
<li>thread watchers now name their pages based on the thread subject, if one exists</li>
<li>an option to permit or deny thread watchers renaming their pages is now under options->downloading</li>
<li>dead and 404 threads now disable their checker pause button--to attempt to revive, hit 'check now'</li>
<li>thread watchers now preface their page name with [DEAD] or [404] when appropriate</li>
<li>misc thread watcher code improvements</li>
<li>added basic import support for zip, rar, and 7z files. they get no useful metadata (yet) and have a default 'archive' thumbnail</li>
<li>the client will now by default detect and not import decompression bombs before they blat your computer. an option to allow them nonetheless is under options->media</li>
<li>the server will now not parse or accept decompression bomb uploads in POST requests</li>
<li>added a 'refresh all pages' entry to page of pages's right-click menu</li>
<li>added 'send this page down to a new page of pages' to page right-click menu</li>
<li>added 'send all pages to the right to a new page of pages' to page right-click menu</li>
<li>fixed a page of pages drag and drop issue when dropping the last page of a notebook onto the same notebook tab</li>
<li>fixed some index calculation problems when DnDing page tabs to the right on the same notebook</li>
<li>sending a refresh event to a 'show selection in a new page' page (which has no search predicates and so cannot 'refresh' its search) will now trigger a sort event (like importers got last week)</li>
<li>thumbnails that are at the bottom of the current view but at least 90% in view will no longer scroll into view when selected</li>
<li>click events will no longer scroll thumbnails that are semi-out of view into view</li>
<li>improved how all 'wait until the client ain't so busy' checks work. importers that have a whole slew of 'already in db' to catch up on should now not clog the gui so much</li>
<li>similarly, under ideal conditions where nothing is busy, importers will iterate over their files more quickly</li>
<li>the network engine now has a 'verification' loop that doesn't do anything yet, and a stub domain engine is generated to be consulted in this</li>
<li>wrote some verification code, extended popup messages to support yes/no questions</li>
<li>polished some domain engine code</li>
<li>fixed an issue where file repositories were not recording deleted files in certain cases</li>
<li>all file repositories will be reset on update</li>
<li>the date entries on the review bandwidth bar chart now have leading zeroes on 0-9 months to ensure they sort correctly (this month's 2017-10 entry was sorting before 2017-8, wew!)</li>
<li>the migrate database dialog now shows approximate total thumbnail size</li>
<li>gave the migrate database help a quick pass</li>
<li>gave the 'help my db is broke.txt' file a quick pass</li>
</ul>
<li><h3>version 275</h3></li>
<ul>
<li>if you hold shift down while dropping a page tab, the client will not 'chase' that page to show it (try it out!)</li>
<li>the gui will be more snappy about dealing with drag-and-drop drops (of types file import, page tab rearrange, and url drop), returning the mouse to normal state instantly on drop and dealing with the event in a subsequent action</li>
<li>dropping url text on the client will specifically inform the source program not to delete the dragged text (that the operation was copy, not move), if it is interested in hearing that</li>
@ -26,19 +26,20 @@

</li>
</ol>
<h3>these components can be put on different drives</h3>
<p>Although an initial install will keep these parts together, it is possible to run the database on a fast drive but keep your media in cheap slow storage. And if you have a very large collection, you can even spread your files across multiple drives. It is not very technically difficult, but I do not recommend it for new users.</p>
<p>Although an initial install will keep these parts together, it is possible to, say, run the database on a fast drive but keep your media in cheap slow storage. And if you have a very large collection, you can even spread your files across multiple drives. It is not very technically difficult, but I do not recommend it for new users.</p>
<p>Backing such an arrangement up is obviously more complicated, and the internal client backup is not sophisticated enough to capture everything, so I recommend you figure out a broader solution with a third-party backup program like FreeFileSync.</p>
<h3>pulling your media apart</h3>
<p><b class="warning">As always, I recommend creating a backup before you try any of this, just in case it goes wrong.</b></p>
<p>If you have multiple drives and would like to spread your media across them, please do not move the folders around yourself--the database has an internal 'knowledge' of where it thinks its file and thumbnail folders are, and if you move them while it is closed, it will throw 'missing path' errors as soon as it boots. The internal hydrus logic of relative and absolute paths is not always obvious, so it is easy to make mistakes, even if you think you know what you are doing. Instead, please do it through the gui:</p>
<p>If you would like to spread your files and thumbnails across multiple locations, please do not move their folders around yourself--the database has an internal 'knowledge' of where it thinks its file and thumbnail folders are, and if you move them while it is closed, it will throw 'missing path' errors as soon as it boots. The internal hydrus logic of relative and absolute paths is not always obvious, so it is easy to make mistakes, even if you think you know what you are doing. Instead, please do it through the gui:</p>
<p>Go <i>database->migration</i>, giving you this dialog:</p>
<p><img src="db_migration.png" /></p>
<p>This is from my main laptop client that I use day to day. I have moved the main database and its files out of the install directory but otherwise kept everything together. Your situation may be simpler or more complicated.</p>
<p>To move your files somewhere else, add the new location, empty/remove the old location, and then click 'move files now'.</p>
<p><b>Portable</b> means that the path is beneath the main db dir and so is stored as a relative path. Portable paths will still function if the database changes location between boots (for instance, if you run the client from a USB drive and it mounts under a different location).</p>
<p><b>Weight</b> means the relative amount of media you would like to store in that location. If location A has a weight of 1 and B has a weight of 2, A will get approximately one third of your files and B will get approximately two thirds.</p>
<p>The operations on this dialog are simple and atomic--at no point is your db ever invalid. Once you have the locations and ideal usage set how you like, hit the 'move files now' button to actually shuffle your files around. It will take some time to finish, but you can pause and resume it later if the job is large or you want to alter a path.</p>
<p>If you decide to move your database, the program will have to shut down first. Before you boot up again, you will have to create a new program shortcut:</p>
<h3>informing the software that the database has moved</h3>
<p><b>Weight</b> means the relative amount of media you would like to store in that location. It only matters if you are spreading your files across multiple locations. If location A has a weight of 1 and B has a weight of 2, A will get approximately one third of your files and B will get approximately two thirds.</p>
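The weight arithmetic is simple proportional allocation; a quick sketch (a hypothetical helper, not hydrus's actual allocator):

```python
def approx_files_per_location( weights, num_files ):
    
    # each location receives a share of files proportional to its weight,
    # e.g. weights { 'A' : 1, 'B' : 2 } split 300 files roughly 100 / 200
    
    total_weight = sum( weights.values() )
    
    return { location : num_files * weight // total_weight for ( location, weight ) in weights.items() }
```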
<p>The operations on this dialog are simple and atomic--at no point is your db ever invalid. Once you have the locations and ideal usage set how you like, hit the 'move files now' button to actually shuffle your files around. It will take some time to finish, but you can pause and resume it later if the job is large or you want to undo or alter something.</p>
<p>If you decide to move your actual database, the program will have to shut down first. Before you boot up again, you will have to create a new program shortcut:</p>
<h3>informing the software that the database is not in the default location</h3>
<p>A straight call to the client executable will look for a database in <i>install_dir/db</i>. If one is not found, it will create one. So, if you move your database and then try to run the client again, it will try to create a new empty database in the previous location!</p>
<p>So, pass it a -d or --db_dir command line argument, like so:</p>
<ul>
@ -46,10 +47,10 @@

<li><i>--or--</i></li>
<li>client --db_dir="G:\misc documents\New Folder (3)\DO NOT ENTER"</li>
</ul>
<p>And it will instead use the given path. You can use any path that is valid in your system, but I would not advise using network locations and so on, as the database works best with some clever device locking calls these interfaces may not provide.</p>
<p>And it will instead use the given path. If no database is found, it will similarly create a new empty one at that location. You can use any path that is valid in your system, but I would not advise using network locations and so on, as the database works best with some clever device locking calls these interfaces may not provide.</p>
<p>Rather than typing the path out in a terminal every time you want to launch your external database, create a new shortcut with the argument in. Something like this, which is from my main development computer and tests that a fresh default install will run an existing database ok:</p>
<p><img src="db_migration_shortcut.png" /></p>
<p>Note that an install with an 'external' database no longer needs access to write to its own path, so you can store it anywhere you like (e.g. in 'Program Files'). If you move it, just double-check your shortcuts are still good and you are done.</p>
<p>Note that an install with an 'external' database no longer needs access to write to its own path, so you can store it anywhere you like, including protected read-only locations (e.g. in 'Program Files'). If you do move it, just double-check your shortcuts are still good and you are done.</p>
<h3>finally</h3>
<p>If your database now lives in one or more new locations, make sure to update your backup routine to follow them!</p>
<h3>p.s. running multiple clients</h3>
@ -1723,7 +1723,10 @@ class RenderedImageCache( object ):

        self._data_cache = DataCache( self._controller, cache_size, timeout = 600 )

    def Clear( self ): self._data_cache.Clear()

    def Clear( self ):

        self._data_cache.Clear()

    def GetImageRenderer( self, media ):
@ -1909,7 +1912,7 @@ class ThumbnailCache( object ):

        self._special_thumbs = {}

        names = [ 'hydrus', 'flash', 'pdf', 'audio', 'video' ]

        names = [ 'hydrus', 'flash', 'pdf', 'audio', 'video', 'zip' ]

        ( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
@ -1940,6 +1943,14 @@ class ThumbnailCache( object ):

    def DoingWork( self ):

        with self._lock:

            return len( self._waterfall_queue_random ) > 0

    def GetThumbnail( self, media ):

        display_media = media.GetDisplayMedia()
@ -1971,6 +1982,7 @@ class ThumbnailCache( object ):

            elif mime in HC.VIDEO: return self._special_thumbs[ 'video' ]
            elif mime == HC.APPLICATION_FLASH: return self._special_thumbs[ 'flash' ]
            elif mime == HC.APPLICATION_PDF: return self._special_thumbs[ 'pdf' ]
            elif mime in HC.ARCHIVES: return self._special_thumbs[ 'zip' ]
            else: return self._special_thumbs[ 'hydrus' ]

        else:
@ -222,6 +222,9 @@ else:

    media_viewer_capabilities[ HC.APPLICATION_PDF ] = no_support
    media_viewer_capabilities[ HC.APPLICATION_ZIP ] = no_support
    media_viewer_capabilities[ HC.APPLICATION_7Z ] = no_support
    media_viewer_capabilities[ HC.APPLICATION_RAR ] = no_support
    media_viewer_capabilities[ HC.VIDEO_AVI ] = animated_full_support
    media_viewer_capabilities[ HC.VIDEO_FLV ] = animated_full_support
    media_viewer_capabilities[ HC.VIDEO_MOV ] = animated_full_support
@ -4,6 +4,7 @@ import ClientDaemons

import ClientDefaults
import ClientGUIMenus
import ClientNetworking
import ClientNetworkingDomain
import ClientThreading
import hashlib
import HydrusConstants as HC
@ -591,9 +592,11 @@ class Controller( HydrusController.HydrusController ):

            wx.MessageBox( 'Your session manager was missing on boot! I have recreated a new empty one. Please check that your hard drive and client are ok and let the hydrus dev know the details if there is a mystery.' )

        domain_manager = ClientNetworkingDomain.NetworkDomainManager()

        login_manager = ClientNetworking.NetworkLoginManager()

        self.network_engine = ClientNetworking.NetworkEngine( self, bandwidth_manager, session_manager, login_manager )

        self.network_engine = ClientNetworking.NetworkEngine( self, bandwidth_manager, session_manager, domain_manager, login_manager )

        self.CallToThreadLongRunning( self.network_engine.MainLoop )
@ -1189,7 +1192,7 @@ class Controller( HydrusController.HydrusController ):

        self.pub( 'set_num_query_results', page_key, len( media_results ), len( query_hash_ids ) )

        self.WaitUntilPubSubsEmpty()

        self.WaitUntilViewFree()

        search_context.SetComplete()
@ -1277,6 +1280,32 @@ class Controller( HydrusController.HydrusController ):

    def WaitUntilViewFree( self ):

        self.WaitUntilModelFree()

        self.WaitUntilThumbnailsFree()

    def WaitUntilThumbnailsFree( self ):

        while True:

            if self._view_shutdown:

                raise HydrusExceptions.ShutdownException( 'Application shutting down!' )

            elif not self._caches[ 'thumbnail' ].DoingWork():

                return

            else:

                time.sleep( 0.00001 )

    def Write( self, action, *args, **kwargs ):

        if action == 'content_updates': self._managers[ 'undo' ].AddCommand( 'content_updates', *args, **kwargs )
@ -2865,53 +2865,44 @@ class DB( HydrusDB.HydrusDB ):

    def _DeleteFiles( self, service_id, hash_ids ):

        service = self._GetService( service_id )

        service_type = service.GetServiceType()

        splayed_hash_ids = HydrusData.SplayListForDB( hash_ids )

        valid_hash_ids = { hash_id for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id IN ' + splayed_hash_ids + ';', ( service_id, ) ) }

        existing_hash_ids = { hash_id for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id IN ' + splayed_hash_ids + ';', ( service_id, ) ) }

        if len( valid_hash_ids ) > 0:

        service_info_updates = []

        if len( existing_hash_ids ) > 0:

            splayed_valid_hash_ids = HydrusData.SplayListForDB( valid_hash_ids )

            splayed_existing_hash_ids = HydrusData.SplayListForDB( existing_hash_ids )

            # remove them from the service

            self._c.execute( 'DELETE FROM current_files WHERE service_id = ? AND hash_id IN ' + splayed_valid_hash_ids + ';', ( service_id, ) )

            self._c.execute( 'DELETE FROM current_files WHERE service_id = ? AND hash_id IN ' + splayed_existing_hash_ids + ';', ( service_id, ) )

            self._c.execute( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id IN ' + splayed_valid_hash_ids + ';', ( service_id, ) )

            self._c.execute( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id IN ' + splayed_existing_hash_ids + ';', ( service_id, ) )

            info = self._c.execute( 'SELECT size, mime FROM files_info WHERE hash_id IN ' + splayed_valid_hash_ids + ';' ).fetchall()

            info = self._c.execute( 'SELECT size, mime FROM files_info WHERE hash_id IN ' + splayed_existing_hash_ids + ';' ).fetchall()

            num_files = len( valid_hash_ids )

            num_existing_files_removed = len( existing_hash_ids )

            delta_size = sum( ( size for ( size, mime ) in info ) )

            num_inbox = len( valid_hash_ids.intersection( self._inbox_hash_ids ) )

            service_info_updates = []

            num_inbox = len( existing_hash_ids.intersection( self._inbox_hash_ids ) )

            service_info_updates.append( ( -delta_size, service_id, HC.SERVICE_INFO_TOTAL_SIZE ) )

            service_info_updates.append( ( -num_files, service_id, HC.SERVICE_INFO_NUM_FILES ) )

            service_info_updates.append( ( -num_existing_files_removed, service_id, HC.SERVICE_INFO_NUM_FILES ) )

            service_info_updates.append( ( -num_inbox, service_id, HC.SERVICE_INFO_NUM_INBOX ) )

            select_statement = 'SELECT COUNT( * ) FROM files_info WHERE mime IN ' + HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) + ' AND hash_id IN %s;'

            num_viewable_files = sum( self._STL( self._SelectFromList( select_statement, valid_hash_ids ) ) )

            num_viewable_files = sum( self._STL( self._SelectFromList( select_statement, existing_hash_ids ) ) )

            service_info_updates.append( ( -num_viewable_files, service_id, HC.SERVICE_INFO_NUM_VIEWABLE_FILES ) )

            # now do special stuff

            service = self._GetService( service_id )

            service_type = service.GetServiceType()

            # record the deleted row if appropriate

            if service_id == self._combined_local_file_service_id or service_type == HC.FILE_REPOSITORY:

                service_info_updates.append( ( num_files, service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) )

                self._c.executemany( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id ) VALUES ( ?, ? );', [ ( service_id, hash_id ) for hash_id in valid_hash_ids ] )

            # if we maintain tag counts for this service, update

            if service_type in HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES:
@ -2920,7 +2911,7 @@ class DB( HydrusDB.HydrusDB ):

            for tag_service_id in tag_service_ids:

                self._CacheSpecificMappingsDeleteFiles( service_id, tag_service_id, valid_hash_ids )

                self._CacheSpecificMappingsDeleteFiles( service_id, tag_service_id, existing_hash_ids )
@ -2932,9 +2923,9 @@ class DB( HydrusDB.HydrusDB ):

            splayed_local_file_service_ids = HydrusData.SplayListForDB( local_file_service_ids )

            non_orphan_hash_ids = { hash_id for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM current_files WHERE hash_id IN ' + splayed_valid_hash_ids + ' AND service_id IN ' + splayed_local_file_service_ids + ';' ) }

            non_orphan_hash_ids = { hash_id for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM current_files WHERE hash_id IN ' + splayed_existing_hash_ids + ' AND service_id IN ' + splayed_local_file_service_ids + ';' ) }

            orphan_hash_ids = valid_hash_ids.difference( non_orphan_hash_ids )

            orphan_hash_ids = existing_hash_ids.difference( non_orphan_hash_ids )

            if len( orphan_hash_ids ) > 0:
@ -2950,17 +2941,28 @@ class DB( HydrusDB.HydrusDB ):

            if service_id == self._combined_local_file_service_id:

                self._DeletePhysicalFiles( valid_hash_ids )

                self._DeletePhysicalFiles( existing_hash_ids )

            # push the info updates, notify

            self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates )

            self.pub_after_job( 'notify_new_pending' )

        return valid_hash_ids

        # record the deleted row if appropriate
        # this happens outside of 'existing' and occurs on all files due to file repo stuff
        # file repos will sometimes report deleted files without having reported the initial file

        if service_id == self._combined_local_file_service_id or service_type == HC.FILE_REPOSITORY:

            self._c.executemany( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id ) VALUES ( ?, ? );', [ ( service_id, hash_id ) for hash_id in hash_ids ] )

            num_new_deleted_files = self._GetRowCount()

            service_info_updates.append( ( num_new_deleted_files, service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) )

        # push the info updates, notify

        self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates )

    def _DeleteHydrusSessionKey( self, service_key ):
@ -6814,11 +6816,11 @@ class DB( HydrusDB.HydrusDB ):

            elif action == HC.CONTENT_UPDATE_DELETE:

                deleted_hash_ids = self._DeleteFiles( service_id, hash_ids )

                self._DeleteFiles( service_id, hash_ids )

                if service_id == self._trash_service_id:

                    self._DeleteFiles( self._combined_local_file_service_id, deleted_hash_ids )

                    self._DeleteFiles( self._combined_local_file_service_id, hash_ids )

            elif action == HC.CONTENT_UPDATE_UNDELETE:
@ -7343,7 +7345,6 @@ class DB( HydrusDB.HydrusDB ):

        files_info_rows = []
        files_rows = []
        hash_ids = []

        for ( service_hash_id, size, mime, timestamp, width, height, duration, num_frames, num_words ) in chunk:
@ -7353,8 +7354,6 @@ class DB( HydrusDB.HydrusDB ):

            files_rows.append( ( hash_id, timestamp ) )

            hash_ids.append( hash_id )

        self._AddFilesInfo( files_info_rows )
@ -9756,6 +9755,50 @@ class DB( HydrusDB.HydrusDB ):

        if version == 275:

            try:

                self._service_cache = {}

                file_repo_ids = self._GetServiceIds( ( HC.FILE_REPOSITORY, ) )

                for service_id in file_repo_ids:

                    service = self._GetService( service_id )

                    service._no_requests_reason = ''
                    service._no_requests_until = 0

                    service._account = HydrusNetwork.Account.GenerateUnknownAccount()

                    self._next_account_sync = 0

                    service._metadata = HydrusNetwork.Metadata()

                    service._SetDirty()

                    self._ResetRepository( service )

                    self._SaveDirtyServices( ( service, ) )

                if len( file_repo_ids ) > 0:

                    message = 'All file repositories\' processing caches were reset on this update. They will resync (and have more accurate \'deleted file\' counts!) in the normal maintenance cycle.'

                    self.pub_initial_message( message )

            except Exception as e:

                HydrusData.PrintException( e )

                message = 'While attempting to update, the database failed to reset your file repositories. The full error has been written to the log. Please check your file repos in _review services_ to make sure everything looks good, and when it is convenient, try resetting them manually.'

                self.pub_initial_message( message )

        self._controller.pub( 'splash_set_title_text', 'updated db to v' + str( version + 1 ) )

        self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@ -112,7 +112,7 @@ def DAEMONDownloadFiles( controller ):

            file_repository.Request( HC.GET, 'file', { 'hash' : hash }, temp_path = temp_path )

            controller.WaitUntilPubSubsEmpty()

            controller.WaitUntilModelFree()

            automatic_archive = False
            exclude_deleted = False # this is the important part here
@ -209,7 +209,7 @@ def DAEMONMaintainTrash( controller ):

        service_keys_to_content_updates = { CC.TRASH_SERVICE_KEY : [ content_update ] }

        controller.WaitUntilPubSubsEmpty()

        controller.WaitUntilModelFree()

        controller.WriteSynchronous( 'content_updates', service_keys_to_content_updates )
@ -236,7 +236,7 @@ def DAEMONMaintainTrash( controller ):

        service_keys_to_content_updates = { CC.TRASH_SERVICE_KEY : [ content_update ] }

        controller.WaitUntilPubSubsEmpty()

        controller.WaitUntilModelFree()

        controller.WriteSynchronous( 'content_updates', service_keys_to_content_updates )
@ -820,6 +820,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

        self._dictionary[ 'booleans' ][ 'apply_all_siblings_to_all_services' ] = False
        self._dictionary[ 'booleans' ][ 'filter_inbox_and_archive_predicates' ] = False

        self._dictionary[ 'booleans' ][ 'do_not_import_decompression_bombs' ] = True

        self._dictionary[ 'booleans' ][ 'discord_dnd_fix' ] = False

        self._dictionary[ 'booleans' ][ 'show_thumbnail_title_banner' ] = True
@ -832,6 +834,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

        self._dictionary[ 'booleans' ][ 'get_tags_if_url_known_and_file_redundant' ] = False

        self._dictionary[ 'booleans' ][ 'permit_watchers_to_name_their_pages' ] = True

        self._dictionary[ 'booleans' ][ 'show_related_tags' ] = False
        self._dictionary[ 'booleans' ][ 'show_file_lookup_script_tags' ] = False
        self._dictionary[ 'booleans' ][ 'hide_message_manager_on_gui_iconise' ] = HC.PLATFORM_OSX
@ -1018,6 +1022,9 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

        self._dictionary[ 'media_view' ][ HC.APPLICATION_PDF ] = ( CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, null_zoom_info )
        self._dictionary[ 'media_view' ][ HC.APPLICATION_ZIP ] = ( CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, null_zoom_info )
        self._dictionary[ 'media_view' ][ HC.APPLICATION_7Z ] = ( CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, null_zoom_info )
        self._dictionary[ 'media_view' ][ HC.APPLICATION_RAR ] = ( CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, null_zoom_info )
        self._dictionary[ 'media_view' ][ HC.APPLICATION_HYDRUS_UPDATE_CONTENT ] = ( CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW, CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW, null_zoom_info )
        self._dictionary[ 'media_view' ][ HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS ] = ( CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW, CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW, null_zoom_info )
@ -2746,9 +2753,9 @@ class WatcherOptions( HydrusSerialisable.SerialisableBase ):

        # we want next check to be like 30mins from now, not 12 hours
        # so we'll say "5 files in 30 mins" rather than "5 files in 24 hours"

        earliest_file_time = seed_cache.GetEarliestTimestamp()

        earliest_source_time = seed_cache.GetEarliestSourceTime()

        early_time_delta = max( last_check_time - earliest_file_time, 30 )

        early_time_delta = max( last_check_time - earliest_source_time, 30 )

        current_time_delta = min( early_time_delta, death_time_delta )
@ -2785,6 +2792,8 @@ class WatcherOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
if current_files_found == 0:
|
||||
|
||||
# this shouldn't typically matter, since a dead checker won't care about next check time
|
||||
# so let's just have a nice safe value in case this is ever asked legit
|
||||
check_period = self._never_slower_than
|
||||
|
||||
else:
|
||||
|
@ -2793,7 +2802,16 @@ class WatcherOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
ideal_check_period = self._intended_files_per_check * approx_time_per_file
|
||||
|
||||
check_period = min( max( self._never_faster_than, ideal_check_period ), self._never_slower_than )
|
||||
# if a thread produced lots of files and then stopped completely for whatever reason, we don't want to keep checking fast
|
||||
# so, we set a lower limit of time since last file upload, neatly doubling our check period in these situations
|
||||
|
||||
latest_source_time = seed_cache.GetLatestSourceTime()
|
||||
|
||||
time_since_latest_file = max( last_check_time - latest_source_time, 30 )
|
||||
|
||||
never_faster_than = max( self._never_faster_than, time_since_latest_file )
|
||||
|
||||
check_period = min( max( never_faster_than, ideal_check_period ), self._never_slower_than )
|
||||
|
||||
|
||||
return last_check_time + check_period
|
||||
|
|
|
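The timing logic in the WatcherOptions hunk above can be sketched standalone. The variable names mirror the diff; the function wrapper and the sample values are illustrative only:

```python
def next_check_time( last_check_time, latest_source_time, ideal_check_period,
                     never_faster_than, never_slower_than ):
    # if the thread went quiet, time since the last file acts as a lower
    # bound on the check period, so checks slow down while nothing new arrives
    time_since_latest_file = max( last_check_time - latest_source_time, 30 )

    effective_floor = max( never_faster_than, time_since_latest_file )

    check_period = min( max( effective_floor, ideal_check_period ), never_slower_than )

    return last_check_time + check_period

# a quiet thread: last file arrived 600s before the last check,
# so the 300s ideal period is stretched to 600s
print( next_check_time( 1000, 400, 300, 60, 86400 ) )  # 1600
```

An active thread (latest file only 10s before the last check) keeps its ideal period instead, clamped only by `never_faster_than` and `never_slower_than`.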
@@ -339,6 +339,24 @@ def ParseImageboardFileURLsFromJSON( thread_url, raw_json ):

    return file_infos

+def ParseImageboardThreadSubject( raw_json ):

+   json_dict = json.loads( raw_json )

+   posts_list = json_dict[ 'posts' ]

+   if len( posts_list ) > 0:

+       top_post = posts_list[0]

+       if 'sub' in top_post:

+           return top_post[ 'sub' ]

+   return ''

def IsImageboardThread( url ):

    if '4chan.org' in url:
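The new `ParseImageboardThreadSubject` reads the `'sub'` field off the OP (first post) of a 4chan-style thread JSON. A self-contained version of the same logic, with a fabricated sample payload for demonstration:

```python
import json

def parse_thread_subject( raw_json ):
    # mirrors ParseImageboardThreadSubject: the first post of a
    # 4chan-style 'posts' list may carry the thread subject in 'sub'
    json_dict = json.loads( raw_json )

    posts_list = json_dict[ 'posts' ]

    if len( posts_list ) > 0:

        top_post = posts_list[ 0 ]

        if 'sub' in top_post:

            return top_post[ 'sub' ]

    return ''

# sample payload, invented for the example
raw = json.dumps( { 'posts' : [ { 'no' : 1, 'sub' : 'merry christmas' } ] } )

print( parse_thread_subject( raw ) )  # merry christmas
```

Threads without a subject (no `'sub'` key, or an empty `'posts'` list) fall through to the empty string, which the GUI later renders as 'no subject'.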
@@ -116,6 +116,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

        self._controller.sub( self, 'NotifyNewSessions', 'notify_new_sessions' )
        self._controller.sub( self, 'NotifyNewUndo', 'notify_new_undo' )
        self._controller.sub( self._statusbar_thread_updater, 'Update', 'refresh_status' )
+       self._controller.sub( self, 'RenamePage', 'rename_page' )
        self._controller.sub( self, 'SetDBLockedStatus', 'db_locked_status' )
        self._controller.sub( self, 'SetMediaFocus', 'set_media_focus' )
        self._controller.sub( self, 'SetTitle', 'main_gui_title' )

@@ -2497,7 +2498,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

            job_key.SetVariable( 'popup_text_1', 'matched ' + HydrusData.ConvertValueRangeToPrettyString( len( hydrus_hashes ), total_num_hta_hashes ) + ' files' )

-           HG.client_controller.WaitUntilPubSubsEmpty()
+           HG.client_controller.WaitUntilViewFree()

        del hta

@@ -2527,7 +2528,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        job_key.SetVariable( 'popup_text_1', 'synced ' + HydrusData.ConvertValueRangeToPrettyString( total_num_processed, len( hydrus_hashes ) ) + ' files' )
        job_key.SetVariable( 'popup_gauge_1', ( total_num_processed, len( hydrus_hashes ) ) )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

        job_key.DeleteVariable( 'popup_gauge_1' )

@@ -2665,7 +2666,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

            time.sleep( 0.1 )

-           self._controller.WaitUntilPubSubsEmpty()
+           self._controller.WaitUntilViewFree()

            result = self._controller.Read( 'pending', service_key )

@@ -3272,6 +3273,11 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        self._RefreshStatusBar()

+   def RenamePage( self, page_key, name ):

+       self._notebook.RenamePage( page_key, name )

    def SaveLastSession( self ):

        if HC.options[ 'default_gui_session' ] == 'last session':
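The `sub`/`pub` calls above follow a topic-based publish/subscribe pattern: `sub( object, method_name, topic )` registers a handler, and anything can later `pub` to that topic. A minimal illustrative stand-in (the `MiniController` and `DemoPage` classes are invented for the example; the real controller is more involved):

```python
from collections import defaultdict

class MiniController:

    # illustrative stand-in for the controller's pub/sub:
    # sub() registers a bound method under a topic,
    # pub() fans a call out to every subscriber of that topic
    def __init__( self ):
        self._topics = defaultdict( list )

    def sub( self, obj, method_name, topic ):
        self._topics[ topic ].append( getattr( obj, method_name ) )

    def pub( self, topic, *args ):
        for callback in self._topics[ topic ]:
            callback( *args )

class DemoPage:

    # hypothetical subscriber, mirroring the shape of FrameGUI.RenamePage
    def __init__( self ):
        self.name = 'old name'

    def RenamePage( self, name ):
        self.name = name

controller = MiniController()
page = DemoPage()

controller.sub( page, 'RenamePage', 'rename_page' )
controller.pub( 'rename_page', 'new name' )

print( page.name )  # new name
```

This is why the one-line addition in the first hunk is enough to wire the whole rename feature up: the thread watcher only has to publish to `'rename_page'`.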
@@ -2244,6 +2244,8 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):

        self._thread_watcher_panel = ClientGUICommon.StaticBox( self, 'thread watcher' )

+       self._thread_subject = ClientGUICommon.BetterStaticText( self._thread_watcher_panel )

        self._thread_input = wx.TextCtrl( self._thread_watcher_panel, style = wx.TE_PROCESS_ENTER )
        self._thread_input.Bind( wx.EVT_KEY_DOWN, self.EventKeyDown )

@@ -2321,6 +2323,7 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):

        self._options_panel.SetSizer( vbox )

+       self._thread_watcher_panel.AddF( self._thread_subject, CC.FLAGS_EXPAND_PERPENDICULAR )
        self._thread_watcher_panel.AddF( self._thread_input, CC.FLAGS_EXPAND_PERPENDICULAR )
        self._thread_watcher_panel.AddF( self._options_panel, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
        self._thread_watcher_panel.AddF( self._file_import_options, CC.FLAGS_EXPAND_PERPENDICULAR )

@@ -2387,6 +2390,8 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):

        if not self._options_panel.IsShown():

+           self._thread_subject.Show()

            self._options_panel.Show()

            self.Layout()

@@ -2396,13 +2401,15 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):

        if self._options_panel.IsShown():

+           self._thread_subject.Hide()

            self._options_panel.Hide()

            self.Layout()

-       ( current_action, files_paused, file_velocity_status, next_check_time, watcher_status, check_now, thread_paused ) = self._thread_watcher_import.GetStatus()
+       ( current_action, files_paused, file_velocity_status, next_check_time, watcher_status, thread_subject, thread_status, check_now, thread_paused ) = self._thread_watcher_import.GetStatus()

        if files_paused:

@@ -2459,6 +2466,26 @@ class ManagementPanelImporterThreadWatcher( ManagementPanelImporter ):

        self._watcher_status.SetLabelText( watcher_status )

+       if thread_status == ClientImporting.THREAD_STATUS_404:

+           self._thread_pause_button.Disable()

+       elif thread_status == ClientImporting.THREAD_STATUS_DEAD:

+           self._thread_pause_button.Disable()

+       else:

+           self._thread_pause_button.Enable()

+       if thread_subject in ( '', 'unknown subject' ):

+           thread_subject = 'no subject'

+       self._thread_subject.SetLabelText( thread_subject )

        if check_now:

            self._thread_check_now_button.Disable()

@@ -3364,9 +3391,9 @@ class ManagementPanelQuery( ManagementPanel ):

        self._query_job_key = ClientThreading.JobKey()

-       if self._management_controller.GetVariable( 'search_enabled' ) and self._management_controller.GetVariable( 'synchronised' ):
+       if self._management_controller.GetVariable( 'search_enabled' ):

-           try:
+           if self._management_controller.GetVariable( 'synchronised' ):

                file_search_context = self._searchbox.GetFileSearchContext()

@@ -3391,7 +3418,10 @@ class ManagementPanelQuery( ManagementPanel ):

                self._controller.pub( 'swap_media_panel', self._page_key, panel )

-           except: wx.MessageBox( traceback.format_exc() )
+           else:

+               self._sort_by.BroadcastSort()
@@ -2300,7 +2300,7 @@ class MediaPanelThumbnails( MediaPanel ):

            wx.PostEvent( self, wx.ScrollWinEvent( wx.wxEVT_SCROLLWIN_THUMBRELEASE, pos = y_to_scroll_to ) )

-       elif y > ( start_y * y_unit ) + height - thumbnail_span_height:
+       elif y > ( start_y * y_unit ) + height - ( thumbnail_span_height * 0.90 ):

            y_to_scroll_to = ( y - height ) / y_unit

@@ -2365,7 +2365,6 @@ class MediaPanelThumbnails( MediaPanel ):

        self._RecalculateVirtualSize()

        HG.client_controller.GetCache( 'thumbnail' ).Waterfall( self._page_key, thumbnails )
-       #self._FadeThumbnails( thumbnails )

        if len( self._selected_media ) == 0:

@@ -2669,10 +2668,7 @@ class MediaPanelThumbnails( MediaPanel ):

        self._HitMedia( self._GetThumbnailUnderMouse( event ), event.CmdDown(), event.ShiftDown() )

-       if not ( event.CmdDown() or event.ShiftDown() ):

-           self._ScrollToMedia( self._focussed_media )

+       # this specifically does not scroll to media, as for clicking (esp. double-clicking attempts), the scroll can be jarring

        event.Skip()
@@ -1008,7 +1008,40 @@ class PagesNotebook( wx.Notebook ):

        return None

-   def _MovePage( self, page_index, delta = None, new_index = None ):
+   def _MovePage( self, page, dest_notebook, insertion_tab_index, follow_dropped_page = False ):

+       source_notebook = page.GetParent()

+       for ( index, p ) in enumerate( source_notebook._GetPages() ):

+           if p == page:

+               source_notebook.RemovePage( index )

+               break

+       if source_notebook != dest_notebook:

+           page.Reparent( dest_notebook )

+           self._controller.pub( 'refresh_page_name', source_notebook.GetPageKey() )

+       insertion_tab_index = min( insertion_tab_index, dest_notebook.GetPageCount() )

+       dest_notebook.InsertPage( insertion_tab_index, page, page.GetName(), select = follow_dropped_page )

+       if follow_dropped_page:

+           self.ShowPage( page )

+       self._controller.pub( 'refresh_page_name', page.GetPageKey() )

+   def _ShiftPage( self, page_index, delta = None, new_index = None ):

        new_page_index = page_index
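The drag-and-drop path (in a later hunk) removes the page from its source notebook before inserting it, which shifts the intended insertion index when both notebooks are the same. The index bookkeeping can be shown on a plain list; `move_page` here is an invented helper that mirrors only that arithmetic, not the wx plumbing:

```python
def move_page( pages, page, insertion_tab_index ):
    # sketch of the same-notebook branch: removing the page from earlier
    # in the list shuffles the intended insertion index down by one
    index = pages.index( page )

    if index + 1 < insertion_tab_index:

        insertion_tab_index -= 1

    del pages[ index ]

    # clamp, as _MovePage does against dest_notebook.GetPageCount()
    insertion_tab_index = min( insertion_tab_index, len( pages ) )

    pages.insert( insertion_tab_index, page )

    return pages

print( move_page( [ 'a', 'b', 'c', 'd' ], 'a', 3 ) )  # ['b', 'c', 'a', 'd']
```

Dropping 'a' at tab index 3 lands it between 'c' and 'd', exactly where the drop cursor pointed before the removal shifted everything left.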
@@ -1101,6 +1134,41 @@ class PagesNotebook( wx.Notebook ):

+   def _SendPageToNewNotebook( self, index ):

+       page = self.GetPage( index )

+       dest_notebook = self.NewPagesNotebook( forced_insertion_index = index, give_it_a_blank_page = False )

+       self._MovePage( page, dest_notebook, 0 )

+   def _SendRightPagesToNewNotebook( self, from_index ):

+       message = 'Send all pages to the right to a new page of pages?'

+       with ClientGUIDialogs.DialogYesNo( self, message ) as dlg:

+           if dlg.ShowModal() == wx.ID_YES:

+               pages_index = self.GetPageCount()

+               dest_notebook = self.NewPagesNotebook( forced_insertion_index = pages_index, give_it_a_blank_page = False )

+               movees = list( range( from_index + 1, pages_index ) )

+               movees.reverse()

+               for index in movees:

+                   page = self.GetPage( index )

+                   self._MovePage( page, dest_notebook, 0 )

    def _ShowMenu( self, screen_position ):

        ( tab_index, flags ) = ClientGUICommon.NotebookScreenToHitTest( self, screen_position )

@@ -1113,28 +1181,37 @@ class PagesNotebook( wx.Notebook ):

        menu = wx.Menu()

-       if tab_index != -1:
+       if click_over_tab:

            ClientGUIMenus.AppendMenuItem( self, menu, 'close page', 'Close this page.', self._ClosePage, tab_index )

+           can_go_left = tab_index > 0
+           can_go_right = tab_index < end_index

            if num_pages > 1:

-               can_close_left = tab_index > 0
-               can_close_right = tab_index < end_index

                ClientGUIMenus.AppendMenuItem( self, menu, 'close other pages', 'Close all pages but this one.', self._CloseOtherPages, tab_index )

-               if can_close_left:
+               if can_go_left:

                    ClientGUIMenus.AppendMenuItem( self, menu, 'close pages to the left', 'Close all pages to the left of this one.', self._CloseLeftPages, tab_index )

-               if can_close_right:
+               if can_go_right:

                    ClientGUIMenus.AppendMenuItem( self, menu, 'close pages to the right', 'Close all pages to the right of this one.', self._CloseRightPages, tab_index )

+           ClientGUIMenus.AppendSeparator( menu )

+           ClientGUIMenus.AppendMenuItem( self, menu, 'send this page down to a new page of pages', 'Make a new page of pages and put this page in it.', self._SendPageToNewNotebook, tab_index )

+           if can_go_right:

+               ClientGUIMenus.AppendMenuItem( self, menu, 'send pages to the right to a new page of pages', 'Make a new page of pages and put all the pages to the right into it.', self._SendRightPagesToNewNotebook, tab_index )

            ClientGUIMenus.AppendSeparator( menu )

            ClientGUIMenus.AppendMenuItem( self, menu, 'rename page', 'Rename this page.', self._RenamePage, tab_index )

@@ -1161,25 +1238,34 @@ class PagesNotebook( wx.Notebook ):

        if can_home:

-           ClientGUIMenus.AppendMenuItem( self, menu, 'move to left end', 'Move this page all the way to the left.', self._MovePage, tab_index, new_index = 0 )
+           ClientGUIMenus.AppendMenuItem( self, menu, 'move to left end', 'Move this page all the way to the left.', self._ShiftPage, tab_index, new_index = 0 )

        if can_move_left:

-           ClientGUIMenus.AppendMenuItem( self, menu, 'move left', 'Move this page one to the left.', self._MovePage, tab_index, delta = -1 )
+           ClientGUIMenus.AppendMenuItem( self, menu, 'move left', 'Move this page one to the left.', self._ShiftPage, tab_index, delta = -1 )

        if can_move_right:

-           ClientGUIMenus.AppendMenuItem( self, menu, 'move right', 'Move this page one to the right.', self._MovePage, tab_index, 1 )
+           ClientGUIMenus.AppendMenuItem( self, menu, 'move right', 'Move this page one to the right.', self._ShiftPage, tab_index, 1 )

        if can_end:

-           ClientGUIMenus.AppendMenuItem( self, menu, 'move to right end', 'Move this page all the way to the right.', self._MovePage, tab_index, new_index = end_index )
+           ClientGUIMenus.AppendMenuItem( self, menu, 'move to right end', 'Move this page all the way to the right.', self._ShiftPage, tab_index, new_index = end_index )

+       page = self.GetPage( tab_index )

+       if isinstance( page, PagesNotebook ) and page.GetPageCount() > 0:

+           ClientGUIMenus.AppendSeparator( menu )

+           ClientGUIMenus.AppendMenuItem( self, menu, 'refresh all this page\'s pages', 'Command every page below this one to refresh.', page.RefreshAllPages )

        self._controller.PopupMenu( self, menu )

@@ -1889,8 +1975,6 @@ class PagesNotebook( wx.Notebook ):

        return

-       source_notebook = page.GetParent()

        screen_position = wx.GetMousePosition()

        dest_notebook = self._GetNotebookFromScreenPosition( screen_position )

@@ -1975,46 +2059,13 @@ class PagesNotebook( wx.Notebook ):

        return

        #

        insertion_tab_index = tab_index

-       for ( index, p ) in enumerate( source_notebook._GetPages() ):

-           if p == page:

-               if source_notebook == dest_notebook and index + 1 < insertion_tab_index:

-                   # we are just about to remove it from earlier in the same list, which shuffles the inserting index up one

-                   insertion_tab_index -= 1

-               source_notebook.RemovePage( index )

-               break

-       if source_notebook != dest_notebook:

-           page.Reparent( dest_notebook )

-           self._controller.pub( 'refresh_page_name', source_notebook.GetPageKey() )

        shift_down = wx.GetKeyState( wx.WXK_SHIFT )

        follow_dropped_page = not shift_down

-       dest_notebook.InsertPage( insertion_tab_index, page, page.GetName(), select = follow_dropped_page )

-       if follow_dropped_page:

-           self.ShowPage( page )

-       self._controller.pub( 'refresh_page_name', page.GetPageKey() )
+       self._MovePage( page, dest_notebook, insertion_tab_index, follow_dropped_page )

    def PrepareToHide( self ):

@@ -2025,6 +2076,21 @@ class PagesNotebook( wx.Notebook ):

+   def RefreshAllPages( self ):

+       for page in self._GetPages():

+           if isinstance( page, PagesNotebook ):

+               page.RefreshAllPages()

+           else:

+               page.RefreshQuery()

    def RefreshPageName( self, page_key = None ):

        if page_key is None:

@@ -2059,6 +2125,30 @@ class PagesNotebook( wx.Notebook ):

+   def RenamePage( self, page_key, name ):

+       for page in self._GetPages():

+           if page.GetPageKey() == page_key:

+               if page.GetName() != page_key:

+                   page.SetName( name )

+                   self.RefreshPageName( page_key )

+               return

+           elif isinstance( page, PagesNotebook ) and page.HasPageKey( page_key ):

+               page.RenamePage( page_key, name )

+               return

    def SaveGUISession( self, name = None ):

        if name is None:
@@ -106,6 +106,17 @@ class PopupMessage( PopupWindow ):

        self._gauge_2.Bind( wx.EVT_RIGHT_DOWN, self.EventDismiss )
        self._gauge_2.Hide()

+       self._text_yes_no = ClientGUICommon.FitResistantStaticText( self )
+       self._text_yes_no.Wrap( self.WRAP_WIDTH )
+       self._text_yes_no.Bind( wx.EVT_RIGHT_DOWN, self.EventDismiss )
+       self._text_yes_no.Hide()

+       self._yes = ClientGUICommon.BetterButton( self, 'yes', self._YesButton )
+       self._yes.Hide()

+       self._no = ClientGUICommon.BetterButton( self, 'no', self._NoButton )
+       self._no.Hide()

        self._network_job_ctrl = ClientGUIControls.NetworkJobControl( self )
        self._network_job_ctrl.Hide()

@@ -143,11 +154,18 @@ class PopupMessage( PopupWindow ):

        hbox.AddF( self._pause_button, CC.FLAGS_VCENTER )
        hbox.AddF( self._cancel_button, CC.FLAGS_VCENTER )

+       yes_no_hbox = wx.BoxSizer( wx.HORIZONTAL )

+       yes_no_hbox.AddF( self._yes, CC.FLAGS_VCENTER )
+       yes_no_hbox.AddF( self._no, CC.FLAGS_VCENTER )

        vbox.AddF( self._title, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._text_1, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._gauge_1, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._text_2, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._gauge_2, CC.FLAGS_EXPAND_PERPENDICULAR )
+       vbox.AddF( self._text_yes_no, CC.FLAGS_EXPAND_PERPENDICULAR )
+       vbox.AddF( yes_no_hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
        vbox.AddF( self._network_job_ctrl, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._copy_to_clipboard_button, CC.FLAGS_EXPAND_PERPENDICULAR )
        vbox.AddF( self._show_files_button, CC.FLAGS_EXPAND_PERPENDICULAR )

@@ -159,6 +177,11 @@ class PopupMessage( PopupWindow ):

        self.SetSizer( vbox )

+   def _NoButton( self ):

+       self._job_key.SetVariable( 'popup_yes_no_answer', False )

    def _ProcessText( self, text ):

        if len( text ) > self.TEXT_CUTOFF:

@@ -175,6 +198,11 @@ class PopupMessage( PopupWindow ):

        return text

+   def _YesButton( self ):

+       self._job_key.SetVariable( 'popup_yes_no_answer', True )

    def Cancel( self ):

        self._job_key.Cancel()

@@ -360,6 +388,30 @@ class PopupMessage( PopupWindow ):

        self._gauge_2.Hide()

+       popup_yes_no_question = self._job_key.GetIfHasVariable( 'popup_yes_no_question' )

+       if popup_yes_no_question is not None and not paused:

+           text = popup_yes_no_question

+           # set and show text, yes, no buttons

+           if self._text_yes_no.GetLabelText() != text:

+               self._text_yes_no.SetLabelText( self._ProcessText( HydrusData.ToUnicode( text ) ) )

+           self._text_yes_no.Show()
+           self._yes.Show()
+           self._no.Show()

+       else:

+           self._text_yes_no.Hide()
+           self._yes.Hide()
+           self._no.Hide()

        popup_network_job = self._job_key.GetIfHasVariable( 'popup_network_job' )

        if popup_network_job is not None:
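The yes/no popup communicates through job key variables rather than direct calls: a worker sets `'popup_yes_no_question'`, the popup's buttons set `'popup_yes_no_answer'`, and the worker polls for it. A rough sketch of that handshake, with `JobKey` as a minimal stand-in for `ClientThreading.JobKey` and `ask_yes_no` an invented helper for the worker side:

```python
import threading
import time

class JobKey:

    # minimal stand-in for ClientThreading.JobKey's thread-safe variable store
    def __init__( self ):
        self._vars = {}
        self._lock = threading.Lock()

    def SetVariable( self, name, value ):
        with self._lock:
            self._vars[ name ] = value

    def GetIfHasVariable( self, name ):
        with self._lock:
            return self._vars.get( name, None )

def ask_yes_no( job_key, question, timeout = 5.0 ):

    # the worker posts the question; the popup's yes/no buttons answer by
    # setting 'popup_yes_no_answer', which the worker polls for here
    job_key.SetVariable( 'popup_yes_no_question', question )

    deadline = time.time() + timeout

    while time.time() < deadline:

        answer = job_key.GetIfHasVariable( 'popup_yes_no_answer' )

        if answer is not None:
            return answer

        time.sleep( 0.05 )

    return False  # no answer in time counts as 'no'
```

Because both sides only touch the shared variable store, neither the worker thread nor the GUI thread needs a reference to the other.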
@@ -1865,6 +1865,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        thread_checker = ClientGUICommon.StaticBox( self, 'thread checker' )

+       self._permit_watchers_to_name_their_pages = wx.CheckBox( thread_checker )

        watcher_options = self._new_options.GetDefaultThreadWatcherOptions()

        self._thread_watcher_options = ClientGUIScrolledPanelsEdit.EditWatcherOptions( thread_checker, watcher_options )

@@ -1875,6 +1877,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._gallery_file_limit.SetValue( HC.options[ 'gallery_file_limit' ] )

+       self._permit_watchers_to_name_their_pages.SetValue( self._new_options.GetBoolean( 'permit_watchers_to_name_their_pages' ) )

        #

        rows = []

@@ -1891,6 +1895,14 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        #

+       rows = []

+       rows.append( ( 'Permit thread checkers to name their own pages:', self._permit_watchers_to_name_their_pages ) )

+       gridbox = ClientGUICommon.WrapInGrid( thread_checker, rows )

+       thread_checker.AddF( gridbox, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
        thread_checker.AddF( self._thread_watcher_options, CC.FLAGS_EXPAND_PERPENDICULAR )

        #

@@ -1909,6 +1921,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._new_options.SetBoolean( 'verify_regular_https', self._verify_regular_https.GetValue() )
        HC.options[ 'gallery_file_limit' ] = self._gallery_file_limit.GetValue()

+       self._new_options.SetBoolean( 'permit_watchers_to_name_their_pages', self._permit_watchers_to_name_their_pages.GetValue() )

        self._new_options.SetDefaultThreadWatcherOptions( self._thread_watcher_options.GetValue() )

@@ -2701,6 +2715,9 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._load_images_with_pil = wx.CheckBox( self )
        self._load_images_with_pil.SetToolTipString( 'OpenCV is much faster than PIL, but it is sometimes less reliable. Switch this on if you experience crashes or other unusual problems while importing or viewing certain images.' )

+       self._do_not_import_decompression_bombs = wx.CheckBox( self )
+       self._do_not_import_decompression_bombs.SetToolTipString( 'Some images, called Decompression Bombs, consume huge amounts of memory and CPU time (typically multiple GB and 30s+) to render. These can be malicious attacks or accidentally inelegant compressions of very large (typically 100MegaPixel+) images. Check this to disallow them before they blat your computer.' )

        self._use_system_ffmpeg = wx.CheckBox( self )
        self._use_system_ffmpeg.SetToolTipString( 'Check this to always default to the system ffmpeg in your path, rather than using the static ffmpeg in hydrus\'s bin directory. (requires restart)' )

@@ -2719,13 +2736,14 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._animation_start_position.SetValue( int( HC.options[ 'animation_start_position' ] * 100.0 ) )
        self._disable_cv_for_gifs.SetValue( self._new_options.GetBoolean( 'disable_cv_for_gifs' ) )
        self._load_images_with_pil.SetValue( self._new_options.GetBoolean( 'load_images_with_pil' ) )
+       self._do_not_import_decompression_bombs.SetValue( self._new_options.GetBoolean( 'do_not_import_decompression_bombs' ) )
        self._use_system_ffmpeg.SetValue( self._new_options.GetBoolean( 'use_system_ffmpeg' ) )

        media_zooms = self._new_options.GetMediaZooms()

        self._media_zooms.SetValue( ','.join( ( str( media_zoom ) for media_zoom in media_zooms ) ) )

-       mimes_in_correct_order = ( HC.IMAGE_JPEG, HC.IMAGE_PNG, HC.IMAGE_APNG, HC.IMAGE_GIF, HC.APPLICATION_FLASH, HC.APPLICATION_PDF, HC.APPLICATION_HYDRUS_UPDATE_CONTENT, HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS, HC.VIDEO_AVI, HC.VIDEO_FLV, HC.VIDEO_MOV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_MPEG, HC.VIDEO_WEBM, HC.VIDEO_WMV, HC.AUDIO_MP3, HC.AUDIO_OGG, HC.AUDIO_FLAC, HC.AUDIO_WMA )
+       mimes_in_correct_order = ( HC.IMAGE_JPEG, HC.IMAGE_PNG, HC.IMAGE_APNG, HC.IMAGE_GIF, HC.APPLICATION_FLASH, HC.APPLICATION_PDF, HC.APPLICATION_ZIP, HC.APPLICATION_RAR, HC.APPLICATION_7Z, HC.APPLICATION_HYDRUS_UPDATE_CONTENT, HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS, HC.VIDEO_AVI, HC.VIDEO_FLV, HC.VIDEO_MOV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_MPEG, HC.VIDEO_WEBM, HC.VIDEO_WMV, HC.AUDIO_MP3, HC.AUDIO_OGG, HC.AUDIO_FLAC, HC.AUDIO_WMA )

        for mime in mimes_in_correct_order:

@@ -2749,6 +2767,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        rows.append( ( 'Start animations this % in: ', self._animation_start_position ) )
        rows.append( ( 'Disable OpenCV for gifs: ', self._disable_cv_for_gifs ) )
        rows.append( ( 'Load images with PIL: ', self._load_images_with_pil ) )
+       rows.append( ( 'Do not import Decompression Bombs: ', self._do_not_import_decompression_bombs ) )
        rows.append( ( 'Prefer system FFMPEG: ', self._use_system_ffmpeg ) )
        rows.append( ( 'Media zooms: ', self._media_zooms ) )

@@ -2845,6 +2864,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._new_options.SetBoolean( 'disable_cv_for_gifs', self._disable_cv_for_gifs.GetValue() )
        self._new_options.SetBoolean( 'load_images_with_pil', self._load_images_with_pil.GetValue() )
+       self._new_options.SetBoolean( 'do_not_import_decompression_bombs', self._do_not_import_decompression_bombs.GetValue() )
        self._new_options.SetBoolean( 'use_system_ffmpeg', self._use_system_ffmpeg.GetValue() )

        try:
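The media zooms option shown above is stored as a list of floats but edited as a comma-separated string via `','.join( str( z ) for z in media_zooms )`. Parsing the string back is the interesting half; `parse_media_zooms` is an invented helper sketching one tolerant way to do it:

```python
def parse_media_zooms( text ):

    # round-trips the comma-separated zoom string from the options panel;
    # non-numeric or non-positive entries are simply dropped
    zooms = []

    for part in text.split( ',' ):

        try:
            zoom = float( part )
        except ValueError:
            continue

        if zoom > 0.0:
            zooms.append( zoom )

    return zooms

print( parse_media_zooms( '0.5,1.0,2.0' ) )  # [0.5, 1.0, 2.0]
```

Dropping bad entries silently, rather than raising, means a half-typed value in the text box can never leave the client with an unusable zoom list.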
@@ -1620,16 +1620,26 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):

        approx_total_db_size = self._controller.db.GetApproxTotalFileSize()

-       self._current_db_path_st.SetLabelText( 'database (totalling about ' + HydrusData.ConvertIntToBytes( approx_total_db_size ) + '): ' + self._controller.GetDBDir() )
+       self._current_db_path_st.SetLabelText( 'database (about ' + HydrusData.ConvertIntToBytes( approx_total_db_size ) + '): ' + self._controller.GetDBDir() )
        self._current_install_path_st.SetLabelText( 'install: ' + HC.BASE_DIR )

        service_info = HG.client_controller.Read( 'service_info', CC.COMBINED_LOCAL_FILE_SERVICE_KEY )

        all_local_files_total_size = service_info[ HC.SERVICE_INFO_TOTAL_SIZE ]

-       approx_total_client_files = all_local_files_total_size * ( 1.0 + self.RESIZED_RATIO + self.FULLSIZE_RATIO )
+       approx_total_client_files = all_local_files_total_size
+       approx_total_resized_thumbs = all_local_files_total_size * self.RESIZED_RATIO
+       approx_total_fullsize_thumbs = all_local_files_total_size * self.FULLSIZE_RATIO

-       self._current_media_paths_st.SetLabelText( 'media (totalling about ' + HydrusData.ConvertIntToBytes( approx_total_client_files ) + '):' )
+       label_components = []

+       label_components.append( 'media (about ' + HydrusData.ConvertIntToBytes( approx_total_client_files ) + ')' )
+       label_components.append( 'resized thumbnails (about ' + HydrusData.ConvertIntToBytes( approx_total_resized_thumbs ) + ')' )
+       label_components.append( 'full-size thumbnails (about ' + HydrusData.ConvertIntToBytes( approx_total_fullsize_thumbs ) + ')' )

+       label = ', '.join( label_components ) + ':'

+       self._current_media_paths_st.SetLabelText( label )

        selected_locations = { l[0] for l in self._current_media_locations_listctrl.GetSelectedClientData() }
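The change above splits the old single estimate (`total * ( 1.0 + RESIZED_RATIO + FULLSIZE_RATIO )`) into three separate figures scaled off the total media size. The arithmetic is trivial but worth seeing in isolation; the ratio values below are invented for the example, since the panel's actual `RESIZED_RATIO` and `FULLSIZE_RATIO` constants are not shown in this diff:

```python
# illustrative ratios only; the real panel defines its own constants
RESIZED_RATIO = 0.016
FULLSIZE_RATIO = 0.045

def estimate_storage( all_local_files_total_size ):

    # after this change, media and the two thumbnail stores are reported
    # as separate estimates instead of one combined total
    return {
        'media' : all_local_files_total_size,
        'resized thumbnails' : all_local_files_total_size * RESIZED_RATIO,
        'full-size thumbnails' : all_local_files_total_size * FULLSIZE_RATIO,
    }
```

Reporting the three figures separately lets a user weighing a migration see that the thumbnail stores are a small fraction of the media store, which the old combined number hid.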
@@ -29,7 +29,7 @@ import urlparse

import wx
import HydrusThreading

-DID_FILE_WORK_MINIMUM_SLEEP_TIME = 0.25
+DID_FILE_WORK_MINIMUM_SLEEP_TIME = 0.1

def THREADDownloadURL( job_key, url, url_string ):

@@ -366,7 +366,19 @@ class FileImportJob( object ):

    def GenerateInfo( self ):

-       self._file_info = HydrusFileHandling.GetFileInfo( self._temp_path )
+       mime = HydrusFileHandling.GetMime( self._temp_path )

+       new_options = HG.client_controller.GetNewOptions()

+       if mime in HC.IMAGES and new_options.GetBoolean( 'do_not_import_decompression_bombs' ):

+           if HydrusImageHandling.IsDecompressionBomb( self._temp_path ):

+               raise HydrusExceptions.SizeException( 'Image seems to be a Decompression Bomb!' )

+       self._file_info = HydrusFileHandling.GetFileInfo( self._temp_path, mime )

        ( size, mime, width, height, duration, num_frames, num_words ) = self._file_info
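The decompression-bomb gate above rejects an image before it is fully decoded. This diff does not show how `IsDecompressionBomb` decides, so the sketch below is a crude stand-in based on the idea described in the tooltip: a small compressed file can decode to a bitmap far larger than is reasonable to hold in memory:

```python
def looks_like_decompression_bomb( width, height, num_frames = 1, limit = 1024 ** 3 ):

    # crude stand-in for IsDecompressionBomb: flag anything whose decoded
    # bitmap (4 bytes per RGBA pixel, per frame) would exceed ~1GB of memory
    bytes_needed = width * height * 4 * max( num_frames, 1 )

    return bytes_needed > limit

# a 900 megapixel image decodes to ~3.6GB
print( looks_like_decompression_bomb( 30000, 30000 ) )  # True
```

The key point is that width and height come from the file header, so the check costs almost nothing, while actually decoding the offending image could cost gigabytes and tens of seconds.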
@@ -824,7 +836,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

        self._new_files_event.wait( 5 )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

    except Exception as e:

@@ -861,7 +873,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

        self._new_query_event.wait( 5 )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

    except Exception as e:

@@ -1364,7 +1376,7 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):

        self._new_files_event.wait( 5 )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

    except Exception as e:

@@ -2248,7 +2260,7 @@ class PageOfImagesImport( HydrusSerialisable.SerialisableBase ):

        self._new_files_event.wait( 5 )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

    except Exception as e:

@@ -2285,7 +2297,7 @@ class PageOfImagesImport( HydrusSerialisable.SerialisableBase ):

        self._new_page_event.wait( 5 )

-       HG.client_controller.WaitUntilPubSubsEmpty()
+       HG.client_controller.WaitUntilViewFree()

    except Exception as e:
@@ -2805,7 +2817,7 @@ class SeedCache( HydrusSerialisable.SerialisableBase ):

        HG.client_controller.pub( 'seed_cache_seeds_updated', self._seed_cache_key, ( seed, ) )

-   def GetEarliestTimestamp( self ):
+   def GetEarliestSourceTime( self ):

        with self._lock:

@@ -2815,6 +2827,16 @@ class SeedCache( HydrusSerialisable.SerialisableBase ):

        return earliest_timestamp

+   def GetLatestSourceTime( self ):

+       with self._lock:

+           latest_timestamp = max( ( self._GetSourceTimestamp( seed ) for seed in self._seeds_ordered ) )

+       return latest_timestamp

    def GetNextSeed( self, status ):

        with self._lock:
@ -3414,7 +3436,7 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
|
|||
|
||||
time.sleep( 0.1 )
|
||||
|
||||
HG.client_controller.WaitUntilPubSubsEmpty()
|
||||
HG.client_controller.WaitUntilViewFree()
|
||||
|
||||
|
||||
job_key.DeleteVariable( 'popup_text_1' )
|
||||
|
@ -3972,10 +3994,14 @@ class TagImportOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_TAG_IMPORT_OPTIONS ] = TagImportOptions
|
||||
|
||||
THREAD_STATUS_OK = 0
|
||||
THREAD_STATUS_DEAD = 1
|
||||
THREAD_STATUS_404 = 2
|
||||
|
||||
class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
||||
|
||||
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_THREAD_WATCHER_IMPORT
|
||||
SERIALISABLE_VERSION = 2
|
||||
SERIALISABLE_VERSION = 3
|
||||
|
||||
MIN_CHECK_PERIOD = 30
|
||||
|
||||
|
@ -3997,6 +4023,8 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
self._file_import_options = file_import_options
|
||||
self._tag_import_options = tag_import_options
|
||||
self._last_check_time = 0
|
||||
self._thread_status = THREAD_STATUS_OK
|
||||
self._thread_subject = 'unknown subject'
|
||||
|
||||
self._next_check_time = None
|
||||
|
||||
|
@ -4017,6 +4045,8 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
self._lock = threading.Lock()
|
||||
|
||||
self._last_pubbed_page_name = ''
|
||||
|
||||
self._new_files_event = threading.Event()
|
||||
self._new_thread_event = threading.Event()
|
||||
|
||||
|
@ -4063,6 +4093,11 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
raw_json = network_job.GetContent()
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._thread_subject = ClientDownloading.ParseImageboardThreadSubject( raw_json )
|
||||
|
||||
|
||||
file_infos = ClientDownloading.ParseImageboardFileURLsFromJSON( self._thread_url, raw_json )
|
||||
|
||||
new_urls = []
|
||||
|
@ -4109,7 +4144,12 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
error_occurred = True
|
||||
|
||||
watcher_status = 'thread 404'
|
||||
with self._lock:
|
||||
|
||||
self._thread_status = THREAD_STATUS_404
|
||||
|
||||
|
||||
watcher_status = ''
|
||||
|
||||
except Exception as e:
|
||||
|
||||
|
@ -4164,7 +4204,7 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
serialisable_file_options = self._file_import_options.GetSerialisableTuple()
|
||||
serialisable_tag_options = self._tag_import_options.GetSerialisableTuple()
|
||||
|
||||
return ( self._thread_url, serialisable_url_cache, self._urls_to_filenames, self._urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, self._last_check_time, self._files_paused, self._thread_paused )
|
||||
return ( self._thread_url, serialisable_url_cache, self._urls_to_filenames, self._urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, self._last_check_time, self._files_paused, self._thread_paused, self._thread_status, self._thread_subject )
|
||||
|
||||
|
||||
def _HasThread( self ):
|
||||
|
@ -4172,9 +4212,46 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
return self._thread_url != ''
|
||||
|
||||
|
||||
def _PublishPageName( self, page_key ):
|
||||
|
||||
new_options = HG.client_controller.GetNewOptions()
|
||||
|
||||
cannot_rename = not new_options.GetBoolean( 'permit_watchers_to_name_their_pages' )
|
||||
|
||||
if cannot_rename:
|
||||
|
||||
page_name = 'thread watcher'
|
||||
|
||||
elif self._thread_subject in ( '', 'unknown subject' ):
|
||||
|
||||
page_name = 'thread watcher'
|
||||
|
||||
else:
|
||||
|
||||
page_name = self._thread_subject
|
||||
|
||||
|
||||
if self._thread_status == THREAD_STATUS_404:
|
||||
|
||||
page_name = '[404] ' + page_name
|
||||
|
||||
elif self._thread_status == THREAD_STATUS_DEAD:
|
||||
|
||||
page_name = '[DEAD] ' + page_name
|
||||
|
||||
|
||||
if page_name != self._last_pubbed_page_name:
|
||||
|
||||
HG.client_controller.pub( 'rename_page', page_key, page_name )
|
||||
|
||||
self._last_pubbed_page_name = page_name
|
||||
|
||||
|
||||
|
||||
|
||||
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
|
||||
|
||||
( self._thread_url, serialisable_url_cache, self._urls_to_filenames, self._urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, self._last_check_time, self._files_paused, self._thread_paused ) = serialisable_info
|
||||
( self._thread_url, serialisable_url_cache, self._urls_to_filenames, self._urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, self._last_check_time, self._files_paused, self._thread_paused, self._thread_status, self._thread_subject ) = serialisable_info
|
||||
|
||||
self._urls_cache = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_url_cache )
|
||||
self._watcher_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_watcher_options )
|
||||
|
@ -4193,14 +4270,25 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
self._next_check_time = self._last_check_time + self.MIN_CHECK_PERIOD
|
||||
|
||||
self._thread_status = THREAD_STATUS_OK
|
||||
|
||||
else:
|
||||
|
||||
if self._watcher_options.IsDead( self._urls_cache, self._last_check_time ):
|
||||
|
||||
self._watcher_status = 'thread is dead'
|
||||
if self._thread_status != THREAD_STATUS_404:
|
||||
|
||||
self._thread_status = THREAD_STATUS_DEAD
|
||||
|
||||
|
||||
self._watcher_status = ''
|
||||
|
||||
self._thread_paused = True
|
||||
|
||||
else:
|
||||
|
||||
self._thread_status = THREAD_STATUS_OK
|
||||
|
||||
|
||||
self._next_check_time = self._watcher_options.GetNextCheckTime( self._urls_cache, self._last_check_time )
|
||||
|
||||
|
@ -4224,6 +4312,18 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
return ( 2, new_serialisable_info )
|
||||
|
||||
|
||||
if version == 2:
|
||||
|
||||
( thread_url, serialisable_url_cache, urls_to_filenames, urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, last_check_time, files_paused, thread_paused ) = old_serialisable_info
|
||||
|
||||
thread_status = THREAD_STATUS_OK
|
||||
thread_subject = 'unknown subject'
|
||||
|
||||
new_serialisable_info = ( thread_url, serialisable_url_cache, urls_to_filenames, urls_to_md5_base64, serialisable_watcher_options, serialisable_file_options, serialisable_tag_options, last_check_time, files_paused, thread_paused, thread_status, thread_subject )
|
||||
|
||||
return ( 3, new_serialisable_info )
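The `if version == 2:` block above is the usual serialisable-migration idiom: each block rewrites the previous tuple shape, fills the new fields with safe defaults (`THREAD_STATUS_OK`, `'unknown subject'`), and returns the bumped version number so migrations can chain. A minimal standalone sketch of the idiom, with hypothetical field names rather than hydrus's real ones:

```python
def update_serialisable_info(version, info):
    # hypothetical chain: v2 added a 'paused' flag, v3 added status fields
    if version == 1:
        (url, cache) = info
        info = (url, cache, False)  # new field defaults to not-paused
        version = 2
    if version == 2:
        (url, cache, paused) = info
        # mirroring the diff above: new fields get safe defaults
        info = (url, cache, paused, 0, 'unknown subject')
        version = 3
    return (version, info)

# an old v1 object is upgraded through both steps in one call
print(update_serialisable_info(1, ('https://example.com', [])))
# → (3, ('https://example.com', [], False, 0, 'unknown subject'))
```

Because the blocks are `if` rather than `elif`, an object saved at any old version walks forward through every later migration in a single pass.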
    def _WorkOnFiles( self, page_key ):

@@ -4438,7 +4538,7 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
             self._new_files_event.wait( 5 )
             
-            HG.client_controller.WaitUntilPubSubsEmpty()
+            HG.client_controller.WaitUntilViewFree()
             
         except Exception as e:

@@ -4475,9 +4575,14 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
             self._CheckThread( page_key )
             
+            with self._lock:
+                
+                self._PublishPageName( page_key )
+                
+            
             time.sleep( 5 )
             
-            HG.client_controller.WaitUntilPubSubsEmpty()
+            HG.client_controller.WaitUntilViewFree()
             
         except Exception as e:

@@ -4487,6 +4592,11 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
+            with self._lock:
+                
+                self._PublishPageName( page_key )
+                
+            
             self._new_thread_event.clear()

@@ -4532,7 +4642,20 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
         with self._lock:
             
-            return ( self._current_action, self._files_paused, self._file_velocity_status, self._next_check_time, self._watcher_status, self._check_now, self._thread_paused )
+            if self._thread_status == THREAD_STATUS_404:
+                
+                watcher_status = 'Thread 404'
+                
+            elif self._thread_status == THREAD_STATUS_DEAD:
+                
+                watcher_status = 'Thread dead'
+                
+            else:
+                
+                watcher_status = self._watcher_status
+                
+            
+            return ( self._current_action, self._files_paused, self._file_velocity_status, self._next_check_time, watcher_status, self._thread_subject, self._thread_status, self._check_now, self._thread_paused )

@@ -4568,7 +4691,7 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
             if self._thread_paused and self._watcher_options.IsDead( self._urls_cache, self._last_check_time ):
                 
-                self._watcher_status = 'thread is dead--hit check now to try to revive'
+                return # thread is dead, so don't unpause until a checknow event
                 
             else:

@@ -4643,6 +4766,8 @@ class ThreadWatcherImport( HydrusSerialisable.SerialisableBase ):
         self._UpdateNextCheckTime()
         
+        self._PublishPageName( page_key )
+        
         self._UpdateFileVelocityStatus()
         
         HG.client_controller.CallToThreadLongRunning( self._THREADWorkOnThread, page_key )

@@ -4871,7 +4996,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
             self._new_urls_event.wait( 5 )
             
-            HG.client_controller.WaitUntilPubSubsEmpty()
+            HG.client_controller.WaitUntilViewFree()
             
         except HydrusExceptions.ShutdownException:
@@ -1,4 +1,5 @@
 import ClientConstants as CC
+import ClientNetworkingDomain
 import collections
 import cPickle
 import cStringIO

@@ -98,25 +99,6 @@ def CombineGETURLWithParameters( url, params_dict ):
     return url + '?' + request_string
     
-def ConvertDomainIntoAllApplicableDomains( domain ):
-    
-    domains = []
-    
-    while domain.count( '.' ) > 0:
-        
-        # let's discard www.blah.com so we don't end up tracking it separately to blah.com--there's not much point!
-        startswith_www = domain.count( '.' ) > 1 and domain.startswith( 'www' )
-        
-        if not startswith_www:
-            
-            domains.append( domain )
-            
-        
-        domain = '.'.join( domain.split( '.' )[1:] ) # i.e. strip off the leftmost subdomain maps.google.com -> google.com
-        
-    
-    return domains
-    
 def ConvertStatusCodeAndDataIntoExceptionInfo( status_code, data ):
     
     error_text = data

@@ -171,14 +153,6 @@ def ConvertStatusCodeAndDataIntoExceptionInfo( status_code, data ):
     return ( e, error_text )
     
-def ConvertURLIntoDomain( url ):
-    
-    parser_result = urlparse.urlparse( url )
-    
-    domain = HydrusData.ToByteString( parser_result.netloc )
-    
-    return domain
-    
 def RequestsGet( url, params = None, stream = False, headers = None ):
     
     if headers is None:

@@ -1619,22 +1593,26 @@ class NetworkEngine( object ):
     MAX_JOBS = 10 # turn this into an option
     
-    def __init__( self, controller, bandwidth_manager, session_manager, login_manager ):
+    def __init__( self, controller, bandwidth_manager, session_manager, domain_manager, login_manager ):
         
         self.controller = controller
         
         self.bandwidth_manager = bandwidth_manager
         self.session_manager = session_manager
+        self.domain_manager = domain_manager
         self.login_manager = login_manager
         
         self.bandwidth_manager.engine = self
         self.session_manager.engine = self
+        self.domain_manager.engine = self
         self.login_manager.engine = self
         
         self._lock = threading.Lock()
         
         self._new_work_to_do = threading.Event()
         
+        self._jobs_awaiting_validity = []
+        self._current_validation_process = None
         self._jobs_bandwidth_throttled = []
         self._jobs_login_throttled = []
         self._current_login_process = None

@@ -1652,7 +1630,7 @@ class NetworkEngine( object ):
         job.engine = self
         
-        self._jobs_bandwidth_throttled.append( job )
+        self._jobs_awaiting_validity.append( job )
         
         self._new_work_to_do.set()

@@ -1676,6 +1654,65 @@ class NetworkEngine( object ):
     def MainLoop( self ):
         
+        def ProcessValidationJob( job ):
+            
+            if job.IsDone():
+                
+                return False
+                
+            elif job.IsAsleep():
+                
+                return True
+                
+            elif not job.IsValid():
+                
+                if job.CanValidateInPopup():
+                    
+                    if self._current_validation_process is None:
+                        
+                        validation_process = job.GenerateValidationProcess
+                        
+                        self.controller.CallToThread( validation_process )
+                        
+                        self._current_validation_process = validation_process
+                        
+                        job.SetStatus( u'validation presented to user\u2026' )
+                        
+                    else:
+                        
+                        job.SetStatus( u'waiting on user validation\u2026' )
+                        
+                        job.Sleep( 5 )
+                        
+                    
+                else:
+                    
+                    job.SetStatus( u'network context not currently valid!' )
+                    
+                    job.Sleep( 15 )
+                    
+                
+                return True
+                
+            else:
+                
+                self._jobs_bandwidth_throttled.append( job )
+                
+                return False
+                
+            
+        
+        def ProcessCurrentValidationJob():
+            
+            if self._current_validation_process is not None:
+                
+                if self._current_validation_process.IsDone():
+                    
+                    self._current_validation_process = None
+                    
+                
+            
+        
         def ProcessBandwidthJob( job ):
             
             if job.IsDone():

@@ -1797,6 +1834,10 @@ class NetworkEngine( object ):
         with self._lock:
             
+            self._jobs_awaiting_validity = filter( ProcessValidationJob, self._jobs_awaiting_validity )
+            
+            ProcessCurrentValidationJob()
+            
             self._jobs_bandwidth_throttled = filter( ProcessBandwidthJob, self._jobs_bandwidth_throttled )
             
             self._jobs_login_throttled = filter( ProcessLoginJob, self._jobs_login_throttled )

@@ -1904,8 +1945,8 @@ class NetworkJob( object ):
         network_contexts.append( GLOBAL_NETWORK_CONTEXT )
         
-        domain = ConvertURLIntoDomain( self._url )
-        domains = ConvertDomainIntoAllApplicableDomains( domain )
+        domain = ClientNetworkingDomain.ConvertURLIntoDomain( self._url )
+        domains = ClientNetworkingDomain.ConvertDomainIntoAllApplicableDomains( domain )
         
         network_contexts.extend( ( NetworkContext( CC.NETWORK_CONTEXT_DOMAIN, domain ) for domain in domains ) )

@@ -2184,6 +2225,14 @@ class NetworkJob( object ):
+    def CanValidateInPopup( self ):
+        
+        with self._lock:
+            
+            return self.engine.domain_manager.CanValidateInPopup( self._network_contexts )
+            
+        
+    
     def GenerateLoginProcess( self ):
         
         with self._lock:

@@ -2199,6 +2248,14 @@ class NetworkJob( object ):
+    def GenerateValidationPopupProcess( self ):
+        
+        with self._lock:
+            
+            return self.engine.domain_manager.GenerateValidationPopupProcess( self._network_contexts )
+            
+        
+    
     def GetContent( self ):
         
         with self._lock:

@@ -2281,6 +2338,14 @@ class NetworkJob( object ):
+    def IsValid( self ):
+        
+        with self._lock:
+            
+            return self.engine.domain_manager.IsValid( self._network_contexts )
+            
+        
+    
     def NeedsLogin( self ):
         
         with self._lock:

@@ -2291,14 +2356,7 @@ class NetworkJob( object ):
         else:
             
-            result = self.engine.login_manager.NeedsLogin( self._network_contexts )
-            
-            if result:
-                
-                self._status_text = u'waiting on login\u2026'
-                
-            
-            return result
+            return self.engine.login_manager.NeedsLogin( self._network_contexts )
@@ -1,13 +1,46 @@
 import ClientConstants as CC
 import ClientParsing
 import ClientThreading
 import collections
 import HydrusConstants as HC
 import HydrusGlobals as HG
 import HydrusData
 import HydrusExceptions
 import HydrusSerialisable
 import threading
 import time
 import urlparse
 
+def ConvertDomainIntoAllApplicableDomains( domain ):
+    
+    domains = []
+    
+    while domain.count( '.' ) > 0:
+        
+        # let's discard www.blah.com so we don't end up tracking it separately to blah.com--there's not much point!
+        startswith_www = domain.count( '.' ) > 1 and domain.startswith( 'www' )
+        
+        if not startswith_www:
+            
+            domains.append( domain )
+            
+        
+        domain = '.'.join( domain.split( '.' )[1:] ) # i.e. strip off the leftmost subdomain maps.google.com -> google.com
+        
+    
+    return domains
+    
+def ConvertURLIntoDomain( url ):
+    
+    parser_result = urlparse.urlparse( url )
+    
+    domain = HydrusData.ToByteString( parser_result.netloc )
+    
+    return domain
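The two helpers above are small enough to exercise directly. A quick Python 3 rework of the same logic for illustration (the original is Python 2 and byte-encodes the netloc via `HydrusData.ToByteString`, which this sketch skips):

```python
from urllib.parse import urlparse

def convert_domain_into_all_applicable_domains(domain):
    domains = []
    while domain.count('.') > 0:
        # skip www.blah.com so it is not tracked separately from blah.com
        if not (domain.count('.') > 1 and domain.startswith('www')):
            domains.append(domain)
        # strip the leftmost subdomain: maps.google.com -> google.com
        domain = '.'.join(domain.split('.')[1:])
    return domains

def convert_url_into_domain(url):
    return urlparse(url).netloc

print(convert_domain_into_all_applicable_domains('maps.google.com'))
# ['maps.google.com', 'google.com']
print(convert_domain_into_all_applicable_domains('www.google.com'))
# ['google.com']
```

This is what feeds the per-domain network contexts in `ClientNetworking`: a job on `maps.google.com` is throttled under both `maps.google.com` and `google.com`, while a leading `www` collapses into the bare domain.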
|
||||
|
||||
VALID_DENIED = 0
|
||||
VALID_APPROVED = 1
|
||||
VALID_UNKNOWN = 2
|
||||
# this should do network_contexts->user-agent as well, with some kind of approval system in place
|
||||
# approval needs a new queue in the network engine. this will eventually test downloader validity and so on. failable at that stage
|
||||
# user-agent info should be exportable/importable on the ui as well
|
||||
|
@ -16,24 +49,32 @@ import urlparse
|
|||
# hide urls on media viewer based on domain
|
||||
# decide whether we want to add this to the dirtyobjects loop, and it which case, if anything is appropriate to store in the db separately
|
||||
# hence making this a serialisableobject itself.
|
||||
class DomainManager( object ):
|
||||
class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):
|
||||
|
||||
def __init__( self, controller ):
|
||||
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_BANDWIDTH_MANAGER
|
||||
SERIALISABLE_VERSION = 1
|
||||
|
||||
def __init__( self ):
|
||||
|
||||
self._controller = controller
|
||||
self._domains_to_url_matches = {}
|
||||
self._network_contexts_to_custom_headers = {} # user-agent here
|
||||
# ( header_key, header_value, approved, approval_reason )
|
||||
# approved is True for user created, None for imported and defaults
|
||||
HydrusSerialisable.SerialisableBase.__init__( self )
|
||||
|
||||
self.engine = None
|
||||
|
||||
self._url_matches = HydrusSerialisable.SerialisableList()
|
||||
self._network_contexts_to_custom_headers = {}
|
||||
|
||||
self._domains_to_url_matches = collections.defaultdict( list )
|
||||
|
||||
self._dirty = False
|
||||
|
||||
self._lock = threading.Lock()
|
||||
|
||||
self._Initialise()
|
||||
self._RecalcCache()
|
||||
|
||||
|
||||
def _GetURLMatch( self, url ):
|
||||
|
||||
domain = 'blah' # get top urldomain
|
||||
domain = ConvertURLIntoDomain( url )
|
||||
|
||||
if domain in self._domains_to_url_matches:
|
||||
|
||||
|
@ -58,40 +99,54 @@ class DomainManager( object ):
|
|||
return None
|
||||
|
||||
|
||||
def _Initialise( self ):
|
||||
def _RecalcCache( self ):
|
||||
|
||||
self._domains_to_url_matches = {}
|
||||
self._domains_to_url_matches = collections.defaultdict( list )
|
||||
|
||||
# fetch them all from controller's db
|
||||
# figure out domain -> urlmatch for each entry based on example url
|
||||
|
||||
pass
|
||||
for url_match in self._url_matches:
|
||||
|
||||
domain = url_match.GetDomain()
|
||||
|
||||
self._domains_to_url_matches[ domain ].append( url_match )
|
||||
|
||||
|
||||
|
||||
def CanApprove( self, network_contexts ):
|
||||
def _SetDirty( self ):
|
||||
|
||||
# if user selected false for any approval, return false
|
||||
# network job presumably throws a ValidationError at this point, which will cause any larger queue to pause.
|
||||
|
||||
pass
|
||||
self._dirty = True
|
||||
|
||||
|
||||
def DoApproval( self, network_contexts ):
|
||||
def CanValidateInPopup( self, network_contexts ):
|
||||
|
||||
# if false on validity check, it presents the user with a yes/no popup with the approval_reason and waits
|
||||
# we can always do this for headers
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def GenerateValidationProcess( self, network_contexts ):
|
||||
|
||||
# generate a process that will, when threadcalled maybe with .Start() , ask the user, one after another, all the key-value pairs
|
||||
# Should (network context) apply "(key)" header "(value)"?
|
||||
# Reason given is: "You need this to make it work lol."
|
||||
# once all the yes/nos are set, update db, reinitialise domain manager, set IsDone to true.
|
||||
|
||||
pass
|
||||
|
||||
|
||||
def GetCustomHeaders( self, network_contexts ):
|
||||
|
||||
keys_to_values = {}
|
||||
|
||||
with self._lock:
|
||||
|
||||
pass
|
||||
|
||||
# good order is global = least powerful, which I _think_ is how these come.
|
||||
# e.g. a site User-Agent should overwrite a global default
|
||||
|
||||
|
||||
return keys_to_values
|
||||
|
||||
|
||||
def GetDownloader( self, url ):
|
||||
|
||||
|
@ -106,13 +161,27 @@ class DomainManager( object ):
|
|||
|
||||
|
||||
|
||||
def NeedsApproval( self, network_contexts ):
|
||||
def IsValid( self, network_contexts ):
|
||||
|
||||
# this is called by the network engine in the new approval queue
|
||||
# if a job needs approval, it goes to a single step like the login one and waits, possibly failing.
|
||||
# checks for 'approved is None' on all ncs
|
||||
# for now, let's say that denied headers are simply not added, not that they invalidate a query
|
||||
|
||||
pass
|
||||
for network_context in network_contexts:
|
||||
|
||||
if network_context in self._network_contexts_to_custom_headers:
|
||||
|
||||
custom_headers = self._network_contexts_to_custom_headers[ network_context ]
|
||||
|
||||
for ( key, value, approved, reason ) in custom_headers:
|
||||
|
||||
if approved == VALID_UNKNOWN:
|
||||
|
||||
return False
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def NormaliseURL( self, url ):
|
||||
|
@ -135,11 +204,86 @@ class DomainManager( object ):
|
|||
|
||||
|
||||
|
||||
def Reinitialise( self ):
|
||||
def SetClean( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._Initialise()
|
||||
self._dirty = False
|
||||
|
||||
|
||||
|
||||
def SetHeaderValidation( self, network_context, key, approved ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
custom_headers = self._network_contexts_to_custom_headers[ network_context ]
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER ] = NetworkDomainManager
|
||||
|
||||
class DomainValidationProcess( object ):
|
||||
|
||||
def __init__( self, domain_manager, header_tuples ):
|
||||
|
||||
self._domain_manager = domain_manager
|
||||
|
||||
self._header_tuples = header_tuples
|
||||
|
||||
self._is_done = False
|
||||
|
||||
|
||||
def IsDone( self ):
|
||||
|
||||
return self._is_done
|
||||
|
||||
|
||||
def Start( self ):
|
||||
|
||||
try:
|
||||
|
||||
results = []
|
||||
|
||||
for ( network_context, key, value, approval_reason ) in self._header_tuples:
|
||||
|
||||
job_key = ClientThreading.JobKey()
|
||||
|
||||
# generate question
|
||||
question = 'intro text ' + approval_reason
|
||||
|
||||
job_key.SetVariable( 'popup_yes_no_question', question )
|
||||
|
||||
# pub it
|
||||
|
||||
result = job_key.GetIfHasVariable( 'popup_yes_no_answer' )
|
||||
|
||||
while result is None:
|
||||
|
||||
if HG.view_shutdown:
|
||||
|
||||
return
|
||||
|
||||
|
||||
time.sleep( 0.25 )
|
||||
|
||||
|
||||
if result:
|
||||
|
||||
approved = VALID_APPROVED
|
||||
|
||||
else:
|
||||
|
||||
approved = VALID_DENIED
|
||||
|
||||
|
||||
self._domain_manager.SetHeaderValidation( network_context, key, approved )
|
||||
|
||||
|
||||
finally:
|
||||
|
||||
self._is_done = True
|
||||
|
||||
|
||||
|
||||
|
@ -152,7 +296,7 @@ class URLMatch( HydrusSerialisable.SerialisableBaseNamed ):
|
|||
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_URL_MATCH
|
||||
SERIALISABLE_VERSION = 1
|
||||
|
||||
def __init__( self, name, preferred_scheme = 'https', netloc = 'hostname.com', subdomain_is_important = False, path_components = None, parameters = None, example_url = 'https://hostname.com' ):
|
||||
def __init__( self, name, preferred_scheme = 'https', netloc = 'hostname.com', subdomain_is_important = False, path_components = None, parameters = None, example_url = 'https://hostname.com/post/page.php?id=123456&s=view' ):
|
||||
|
||||
if path_components is None:
|
||||
|
||||
|
@ -175,13 +319,13 @@ class URLMatch( HydrusSerialisable.SerialisableBaseNamed ):
|
|||
|
||||
HydrusSerialisable.SerialisableBaseNamed.__init__( self, name )
|
||||
|
||||
self._preferred_scheme = 'https'
|
||||
self._netloc = 'hostname.com'
|
||||
self._subdomain_is_important = False
|
||||
self._path_components = HydrusSerialisable.SerialisableList()
|
||||
self._parameters = HydrusSerialisable.SerialisableDictionary()
|
||||
self._preferred_scheme = preferred_scheme
|
||||
self._netloc = netloc
|
||||
self._subdomain_is_important = subdomain_is_important
|
||||
self._path_components = path_components
|
||||
self._parameters = parameters
|
||||
|
||||
self._example_url = 'https://hostname.com/post/page.php?id=123456&s=view'
|
||||
self._example_url = example_url
|
||||
|
||||
|
||||
def _ClipNetLoc( self, netloc ):
|
||||
|
@ -265,6 +409,11 @@ class URLMatch( HydrusSerialisable.SerialisableBaseNamed ):
|
|||
return query
|
||||
|
||||
|
||||
def GetDomain( self ):
|
||||
|
||||
return ConvertURLIntoDomain( self._example_url )
|
||||
|
||||
|
||||
def Normalise( self, url ):
|
||||
|
||||
p = urlparse.urlparse( url )
|
||||
|
|
|
@@ -1062,7 +1062,7 @@ class ServiceRepository( ServiceRestricted ):
         with self._lock:
             
             self._no_requests_reason = ''
-            no_requests_until = 0
+            self._no_requests_until = 0
             
             self._account = HydrusNetwork.Account.GenerateUnknownAccount()
             self._next_account_sync = 0
@@ -49,7 +49,7 @@ options = {}
 # Misc
 
 NETWORK_VERSION = 18
-SOFTWARE_VERSION = 275
+SOFTWARE_VERSION = 276
 
 UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -426,11 +426,13 @@ VIDEO_AVI = 27
 APPLICATION_HYDRUS_UPDATE_DEFINITIONS = 28
 APPLICATION_HYDRUS_UPDATE_CONTENT = 29
 TEXT_PLAIN = 30
+APPLICATION_RAR = 31
+APPLICATION_7Z = 32
 APPLICATION_OCTET_STREAM = 100
 APPLICATION_UNKNOWN = 101
 
-ALLOWED_MIMES = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_BMP, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG, APPLICATION_PDF, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WMA, VIDEO_WMV, APPLICATION_HYDRUS_UPDATE_CONTENT, APPLICATION_HYDRUS_UPDATE_DEFINITIONS )
-SEARCHABLE_MIMES = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG, APPLICATION_PDF, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WMA, VIDEO_WMV )
+ALLOWED_MIMES = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_BMP, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG, APPLICATION_PDF, APPLICATION_ZIP, APPLICATION_RAR, APPLICATION_7Z, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WMA, VIDEO_WMV, APPLICATION_HYDRUS_UPDATE_CONTENT, APPLICATION_HYDRUS_UPDATE_DEFINITIONS )
+SEARCHABLE_MIMES = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG, APPLICATION_PDF, APPLICATION_ZIP, APPLICATION_RAR, APPLICATION_7Z, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WMA, VIDEO_WMV )
 
 IMAGES = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_BMP )

@@ -440,11 +442,11 @@ VIDEO = ( VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_WMV, VIDEO_MKV, VIDE
 NATIVE_VIDEO = ( IMAGE_APNG, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_WMV, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG )
 
-APPLICATIONS = ( APPLICATION_FLASH, APPLICATION_PDF, APPLICATION_ZIP )
+APPLICATIONS = ( APPLICATION_FLASH, APPLICATION_PDF, APPLICATION_ZIP, APPLICATION_RAR, APPLICATION_7Z )
 
 NOISY_MIMES = tuple( [ APPLICATION_FLASH ] + list( AUDIO ) + list( VIDEO ) )
 
-ARCHIVES = ( APPLICATION_ZIP, APPLICATION_HYDRUS_ENCRYPTED_ZIP )
+ARCHIVES = ( APPLICATION_ZIP, APPLICATION_HYDRUS_ENCRYPTED_ZIP, APPLICATION_RAR, APPLICATION_7Z )
 
 MIMES_WITH_THUMBNAILS = ( APPLICATION_FLASH, IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_BMP, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_WMV, VIDEO_MKV, VIDEO_WEBM, VIDEO_MPEG )

@@ -471,6 +473,8 @@ mime_enum_lookup[ 'application/x-yaml' ] = APPLICATION_YAML
 mime_enum_lookup[ 'PDF document' ] = APPLICATION_PDF
 mime_enum_lookup[ 'application/pdf' ] = APPLICATION_PDF
 mime_enum_lookup[ 'application/zip' ] = APPLICATION_ZIP
+mime_enum_lookup[ 'application/vnd.rar' ] = APPLICATION_RAR
+mime_enum_lookup[ 'application/x-7z-compressed' ] = APPLICATION_7Z
 mime_enum_lookup[ 'application/json' ] = APPLICATION_JSON
 mime_enum_lookup[ 'application/hydrus-encrypted-zip' ] = APPLICATION_HYDRUS_ENCRYPTED_ZIP
 mime_enum_lookup[ 'application/hydrus-update-content' ] = APPLICATION_HYDRUS_UPDATE_CONTENT

@@ -509,6 +513,8 @@ mime_string_lookup[ APPLICATION_YAML ] = 'application/x-yaml'
 mime_string_lookup[ APPLICATION_JSON ] = 'application/json'
 mime_string_lookup[ APPLICATION_PDF ] = 'application/pdf'
 mime_string_lookup[ APPLICATION_ZIP ] = 'application/zip'
+mime_string_lookup[ APPLICATION_RAR ] = 'application/vnd.rar'
+mime_string_lookup[ APPLICATION_7Z ] = 'application/x-7z-compressed'
 mime_string_lookup[ APPLICATION_HYDRUS_ENCRYPTED_ZIP ] = 'application/hydrus-encrypted-zip'
 mime_string_lookup[ APPLICATION_HYDRUS_UPDATE_CONTENT ] = 'application/hydrus-update-content'
 mime_string_lookup[ APPLICATION_HYDRUS_UPDATE_DEFINITIONS ] = 'application/hydrus-update-definitions'

@@ -547,6 +553,8 @@ mime_ext_lookup[ APPLICATION_YAML ] = '.yaml'
 mime_ext_lookup[ APPLICATION_JSON ] = '.json'
 mime_ext_lookup[ APPLICATION_PDF ] = '.pdf'
 mime_ext_lookup[ APPLICATION_ZIP ] = '.zip'
+mime_ext_lookup[ APPLICATION_RAR ] = '.rar'
+mime_ext_lookup[ APPLICATION_7Z ] = '.7z'
 mime_ext_lookup[ APPLICATION_HYDRUS_ENCRYPTED_ZIP ] = '.zip.encrypted'
 mime_ext_lookup[ APPLICATION_HYDRUS_UPDATE_CONTENT ] = ''
 mime_ext_lookup[ APPLICATION_HYDRUS_UPDATE_DEFINITIONS ] = ''
@@ -440,11 +440,37 @@ class HydrusController( object ):
         return self._view_shutdown
         
+    
+    def WaitUntilDBEmpty( self ):
+        
+        while True:
+            
+            if self._model_shutdown:
+                
+                raise HydrusExceptions.ShutdownException( 'Application shutting down!' )
+                
+            elif self.db.JobsQueueEmpty() and not self.db.CurrentlyDoingJob():
+                
+                return
+                
+            else:
+                
+                time.sleep( 0.00001 )
+                
+            
+        
+    
+    def WaitUntilModelFree( self ):
+        
+        self.WaitUntilPubSubsEmpty()
+        
+        self.WaitUntilDBEmpty()
+        
+    
     def WaitUntilPubSubsEmpty( self ):
         
         while True:
             
-            if self._view_shutdown:
+            if self._model_shutdown:
                 
                 raise HydrusExceptions.ShutdownException( 'Application shutting down!' )
|
||||
|
||||
|
|
|
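The new WaitUntilDBEmpty above is a simple poll-and-sleep loop: spin until the work queue drains, bailing out if shutdown begins. A minimal standalone sketch of the same shape — the predicate names here are hypothetical stand-ins for the controller's queue and shutdown state, not hydrus's real API:

```python
import time

def wait_until_empty( queue_is_empty, is_shutdown, poll_interval = 0.00001 ):
    # spin until the work queue drains; bail out if the app starts
    # shutting down, mirroring the shape of WaitUntilDBEmpty above
    while True:
        if is_shutdown():
            raise RuntimeError( 'Application shutting down!' )
        elif queue_is_empty():
            return
        else:
            time.sleep( poll_interval )
```

A `threading.Event` set by the queue consumer would avoid the busy-wait entirely, but a tiny sleep keeps the change small and the loop easy to reason about.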
@@ -33,6 +33,11 @@ header_and_mime = [
    ( 0, 'FLV', HC.VIDEO_FLV ),
    ( 0, '%PDF', HC.APPLICATION_PDF ),
    ( 0, 'PK\x03\x04', HC.APPLICATION_ZIP ),
    ( 0, 'PK\x05\x06', HC.APPLICATION_ZIP ),
    ( 0, 'PK\x07\x08', HC.APPLICATION_ZIP ),
    ( 0, '7z\xBC\xAF\x27\x1C', HC.APPLICATION_7Z ),
    ( 0, '\x52\x61\x72\x21\x1A\x07\x00', HC.APPLICATION_RAR ),
    ( 0, '\x52\x61\x72\x21\x1A\x07\x01\x00', HC.APPLICATION_RAR ),
    ( 0, 'hydrus encrypted zip', HC.APPLICATION_HYDRUS_ENCRYPTED_ZIP ),
    ( 4, 'ftypmp4', HC.VIDEO_MP4 ),
    ( 4, 'ftypisom', HC.VIDEO_MP4 ),
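The ( offset, header, mime ) triples above drive plain magic-number sniffing: compare a slice of the file's first bytes against each known signature. A sketch of how such a table is typically consumed — the excerpted table and the `sniff_mime` helper below are illustrative, not hydrus's actual GetMime:

```python
# small excerpt of a magic-number table in the style of header_and_mime
HEADER_AND_MIME = [
    ( 0, b'%PDF', 'application/pdf' ),
    ( 0, b'PK\x03\x04', 'application/zip' ),            # zip local file header
    ( 0, b'7z\xbc\xaf\x27\x1c', 'application/x-7z-compressed' ),
    ( 0, b'Rar!\x1a\x07\x00', 'application/vnd.rar' ),  # rar 4.x signature
    ( 4, b'ftypmp4', 'video/mp4' ),
]

def sniff_mime( path ):
    # read a little more than the longest ( offset + header ) needs
    with open( path, 'rb' ) as f:
        first_bytes = f.read( 64 )
    for ( offset, header, mime ) in HEADER_AND_MIME:
        if first_bytes[ offset : offset + len( header ) ] == header:
            return mime
    return None
```

Note the `\x52\x61\x72\x21` signature in the diff is just the bytes of `Rar!`; the second, longer variant covers the RAR 5.x archive format.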
@@ -175,7 +180,7 @@ def GetExtraHashesFromPath( path ):
    return ( md5, sha1, sha512 )
    
-def GetFileInfo( path ):
+def GetFileInfo( path, mime = None ):
    
    size = os.path.getsize( path )
@@ -184,7 +189,10 @@ def GetFileInfo( path ):
        raise HydrusExceptions.SizeException( 'File is of zero length!' )
    
-    mime = GetMime( path )
+    if mime is None:
+        
+        mime = GetMime( path )
+        
    
    if mime not in HC.ALLOWED_MIMES:
@@ -248,6 +256,13 @@ def GetHashFromPath( path ):
    
def GetMime( path ):
    
+    size = os.path.getsize( path )
+    
+    if size == 0:
+        
+        raise HydrusExceptions.SizeException( 'File is of zero length!' )
+        
+    
    with open( path, 'rb' ) as f:
        
        f.seek( 0 )
@@ -13,6 +13,9 @@ import traceback
import HydrusData
import HydrusGlobals as HG
import HydrusPaths
+import warnings
+
+warnings.simplefilter( 'ignore', PILImage.DecompressionBombWarning )

def ConvertToPngIfBmp( path ):
@@ -242,3 +245,22 @@ def GetThumbnailResolution( ( im_x, im_y ), ( target_x, target_y ) ):
    
    return ( target_x, target_y )
    
+def IsDecompressionBomb( path ):
+    
+    warnings.simplefilter( 'error', PILImage.DecompressionBombWarning )
+    
+    try:
+        
+        GeneratePILImage( path )
+        
+    except PILImage.DecompressionBombWarning:
+        
+        return True
+        
+    finally:
+        
+        warnings.simplefilter( 'ignore', PILImage.DecompressionBombWarning )
+        
+    
+    return False
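IsDecompressionBomb works by briefly escalating PIL's DecompressionBombWarning from an ignored warning to a raised exception, then restoring the filter in `finally`. The same pattern can be shown without Pillow installed — the warning class and loader below are stand-ins, and `warnings.catch_warnings()` is used so the filter change cannot leak out, which is a tidier alternative to the paired module-level `simplefilter` calls in the diff:

```python
import warnings

class DecompressionBombWarning( Warning ):
    # stand-in for PIL.Image.DecompressionBombWarning so this runs without Pillow
    pass

def load_image( num_pixels ):
    # stand-in for GeneratePILImage: warns when the decoded image would be huge
    if num_pixels > 89478485:
        warnings.warn( 'this file is enormous', DecompressionBombWarning )

def is_decompression_bomb( num_pixels ):
    # escalate just this warning to an exception for the duration of the
    # check; catch_warnings restores the previous filter state on exit
    with warnings.catch_warnings():
        warnings.simplefilter( 'error', DecompressionBombWarning )
        try:
            load_image( num_pixels )
        except DecompressionBombWarning:
            return True
    return False
```

The 89478485-pixel threshold mirrors Pillow's default `MAX_IMAGE_PIXELS`; the point is the filter-escalation pattern, not the exact limit.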
@@ -585,9 +585,8 @@ class BandwidthTracker( HydrusSerialisable.SerialisableBase ):
            month_dt = datetime.datetime.utcfromtimestamp( month_time )
            
            ( year, month ) = ( month_dt.year, month_dt.month )
            
-            date_str = str( year ) + '-' + str( month )
+            # this generates zero-padded month, to keep this lexicographically sortable at the gui level
+            date_str = month_dt.strftime( '%Y-%m' )
            
            result.append( ( date_str, usage ) )
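The fix above matters because the gui sorts these date strings as plain text: without zero padding, '2017-9' sorts after '2017-10', while '%Y-%m' output stays in chronological order. A quick illustration:

```python
import datetime

# naive formatting: str( month ) has no zero padding
naive = [ '2017-9', '2017-10', '2017-11' ]

# strftime( '%Y-%m' ) zero-pads the month, so lexicographic == chronological
padded = [ '2017-09', '2017-10', '2017-11' ]

print( sorted( naive ) )   # months 10 and 11 jump ahead of month 9
print( sorted( padded ) )  # stays chronological

# the diff's approach, applied to the unix epoch
print( datetime.datetime.utcfromtimestamp( 0 ).strftime( '%Y-%m' ) )
```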
@@ -55,6 +55,7 @@ SERIALISABLE_TYPE_MEDIA_SORT = 49
SERIALISABLE_TYPE_URL_MATCH = 50
SERIALISABLE_TYPE_STRING_MATCH = 51
SERIALISABLE_TYPE_WATCHER_OPTIONS = 52
+SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER = 53

SERIALISABLE_TYPES_TO_OBJECT_TYPES = {}
@@ -206,7 +206,14 @@ def ParseFileArguments( path ):
    try:
        
-        ( size, mime, width, height, duration, num_frames, num_words ) = HydrusFileHandling.GetFileInfo( path )
+        mime = HydrusFileHandling.GetMime( path )
+        
+        if mime in HC.IMAGES and HydrusImageHandling.IsDecompressionBomb( path ):
+            
+            raise HydrusExceptions.ForbiddenException( 'File seemed to be a Decompression Bomb!' )
+            
+        
+        ( size, mime, width, height, duration, num_frames, num_words ) = HydrusFileHandling.GetFileInfo( path, mime )
        
    except HydrusExceptions.SizeException:
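ParseFileArguments now sniffs the mime once, screens images for decompression bombs, and hands the known mime to GetFileInfo so the header is not sniffed twice — that is what the new `mime = None` default parameter enables. A minimal sketch of the pattern, where every name is an illustrative stub rather than hydrus's real function:

```python
calls = { 'sniffs': 0 }

def get_mime( path ):
    # stub sniffer; counts how often the header is actually read
    calls[ 'sniffs' ] += 1
    return 'image/png'

def get_file_info( path, mime = None ):
    # mirrors the GetFileInfo( path, mime = None ) change above:
    # only sniff when the caller has not already supplied a mime
    if mime is None:
        mime = get_mime( path )
    return ( 'some-file-info', mime )

# caller that needs the mime early (e.g. for a bomb check) passes it along
mime = get_mime( 'example.png' )
info = get_file_info( 'example.png', mime )
```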
@@ -35,7 +35,7 @@ class TestData( unittest.TestCase ):
        seed_cache.AddSeeds( ( seed, ) )
        
-        seed_cache.UpdateSeedSourceTime( seed, one_day_before + 10 )
+        seed_cache.UpdateSeedSourceTime( seed, last_check_time - 600 )
        
        bare_seed_cache = ClientImporting.SeedCache()
@@ -144,8 +144,9 @@ class TestData( unittest.TestCase ):
        self.assertEqual( slow_watcher_options.GetPrettyCurrentVelocity( new_thread_seed_cache, last_check_time ), u'at last check, found 10 files in previous 10 minutes' )
        self.assertEqual( callous_watcher_options.GetPrettyCurrentVelocity( new_thread_seed_cache, last_check_time ), u'at last check, found 0 files in previous 1 minute' )
        
-        self.assertEqual( regular_watcher_options.GetNextCheckTime( new_thread_seed_cache, last_check_time ), last_check_time + 300 )
-        self.assertEqual( fast_watcher_options.GetNextCheckTime( new_thread_seed_cache, last_check_time ), last_check_time + 120 )
+        # these would be 360, 120, 600, but the 'don't check faster than the time since last file post' rule bumps this up
+        self.assertEqual( regular_watcher_options.GetNextCheckTime( new_thread_seed_cache, last_check_time ), last_check_time + 600 )
+        self.assertEqual( fast_watcher_options.GetNextCheckTime( new_thread_seed_cache, last_check_time ), last_check_time + 600 )
        self.assertEqual( slow_watcher_options.GetNextCheckTime( new_thread_seed_cache, last_check_time ), last_check_time + 600 )
@@ -1,5 +1,6 @@
import ClientConstants as CC
import ClientNetworking
+import ClientNetworkingDomain
import collections
import HydrusConstants as HC
import HydrusData
@@ -220,9 +221,10 @@ class TestNetworkingEngine( unittest.TestCase ):
        mock_controller = TestConstants.MockController()
        bandwidth_manager = ClientNetworking.NetworkBandwidthManager()
        session_manager = ClientNetworking.NetworkSessionManager()
+        domain_manager = ClientNetworkingDomain.NetworkDomainManager()
        login_manager = ClientNetworking.NetworkLoginManager()
        
-        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, login_manager )
+        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, domain_manager, login_manager )
        
        self.assertFalse( engine.IsRunning() )
        self.assertFalse( engine.IsShutdown() )

@@ -249,9 +251,10 @@ class TestNetworkingEngine( unittest.TestCase ):
        mock_controller = TestConstants.MockController()
        bandwidth_manager = ClientNetworking.NetworkBandwidthManager()
        session_manager = ClientNetworking.NetworkSessionManager()
+        domain_manager = ClientNetworkingDomain.NetworkDomainManager()
        login_manager = ClientNetworking.NetworkLoginManager()
        
-        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, login_manager )
+        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, domain_manager, login_manager )
        
        self.assertFalse( engine.IsRunning() )
        self.assertFalse( engine.IsShutdown() )

@@ -276,9 +279,10 @@ class TestNetworkingEngine( unittest.TestCase ):
        mock_controller = TestConstants.MockController()
        bandwidth_manager = ClientNetworking.NetworkBandwidthManager()
        session_manager = ClientNetworking.NetworkSessionManager()
+        domain_manager = ClientNetworkingDomain.NetworkDomainManager()
        login_manager = ClientNetworking.NetworkLoginManager()
        
-        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, login_manager )
+        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, domain_manager, login_manager )
        
        self.assertFalse( engine.IsRunning() )
        self.assertFalse( engine.IsShutdown() )

@@ -327,9 +331,10 @@ class TestNetworkingJob( unittest.TestCase ):
        mock_controller = TestConstants.MockController()
        bandwidth_manager = ClientNetworking.NetworkBandwidthManager()
        session_manager = ClientNetworking.NetworkSessionManager()
+        domain_manager = ClientNetworkingDomain.NetworkDomainManager()
        login_manager = ClientNetworking.NetworkLoginManager()
        
-        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, login_manager )
+        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, domain_manager, login_manager )
        
        job.engine = engine

@@ -535,9 +540,10 @@ class TestNetworkingJobHydrus( unittest.TestCase ):
        mock_controller = TestConstants.MockController()
        bandwidth_manager = ClientNetworking.NetworkBandwidthManager()
        session_manager = ClientNetworking.NetworkSessionManager()
+        domain_manager = ClientNetworkingDomain.NetworkDomainManager()
        login_manager = ClientNetworking.NetworkLoginManager()
        
-        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, login_manager )
+        engine = ClientNetworking.NetworkEngine( mock_controller, bandwidth_manager, session_manager, domain_manager, login_manager )
        
        job.engine = engine
(three binary image files added, not shown: 21 KiB, 2.1 KiB, and 35 KiB)