Version 455

commit 6151866d29
Hydrus Network Developer, 2021-09-22 16:13:57 -05:00 (committed via GitHub)
49 changed files with 2233 additions and 1156 deletions

View File

@ -8,6 +8,40 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_455"><a href="#version_455">version 455</a></h3></li>
<ul>
<li>misc:</li>
<li>many of the simple system predicates (width, size, duration, etc..) now support the '≠' (not equal) operator! searches only support one of these at once for now (e.g. you can't say height != 640 AND height != 1080--it'll pick one of these pseudorandomly)</li>
<li>the watcher page list right-click menu now has 'copy subjects', which copies the selected watchers' 'subject' texts to clipboard</li>
<li>the advanced file deletion panel now remembers which reason you last used. it remembers if you selected the 'default' value up top versus specific reason text, and if you enter custom text, it remembers that for next time too</li>
<li>the network job widget now shows the current URL as tooltip over the progress gauge, and under the cog menu, where clicking it copies it to clipboard</li>
<li>the various menu labels across the program should now show ampersand (&) correctly (e.g. in URLs)</li>
<li>the way byte sizes (like 21.7KB or 1.24GB) above 1KB are rendered to strings has been overhauled. they now generally show three significant figures. a new EXPERIMENTAL option in 'gui' options panel lets you change this, but only 2 or 3 are really helpful</li>
<li>if a repository clears the message on your account, you no longer get a popup telling you 'hey, new message from server x: ...'</li>
<li>the new ≠ system preds should be parseable (but be careful, likely buggy) using the client api's new system predicate parser, with '≠', '!=', 'is not', or 'isn't'--a small parsing sketch follows this list</li>
<li>cleaned up some old data presentation methods and improved how client-specific options are patched into the base hydrus string conversion code</li>
<li>.</li>
<li>ui freezes:</li>
<li>session pages can now detect if they have had no saveable changes since a certain time. they use this ability to skip redundant session save CPU time for pages with no changes since the last session save</li>
<li>for now, since the smallest atom of the session system is a whole page, gallery and watcher pages can only save time if _every_ downloader in the page has had no changes, so in their case this optimisation mostly only applies to completely finished/paused pages. it is still better to have several medium size downloader pages than one gigantic one</li>
<li>a new database maintenance task ensures that optimisation cannot accidentally lose a page (from something like an unfortunate timing of a session save after many manual session deletes)</li>
<li>the existing optimisation that skips 'last session' save on no changes now initialises its data as the 'last session' is first loaded (rather than on first save), meaning that if there are no changes while the client is open, no new 'last session's will be saved at all</li>
<li>misc session save code cleanup</li>
<li>.</li>
<li>database repair, mostly boring:</li>
<li>a client can now boot with client.caches.db missing and will rebuild that file. almost all of its tables are now able to automatically repopulate (issue #975)</li>
<li>all the new modules I have been working on are now responsible for their own repair. this includes missing indices, missing tables, and table repopulation where possible. modules now know which of their tables are critical to a boot, what version each table and index was added, and now manage both initial and later-created service tables and indices</li>
<li>essentially, all newer database repair code is now modularised rather than hardcoded. the repair and creation code actually now share the same table and index definitions. the code is more reliable, checkpoints its good work in case of later failure, and will be much easier to maintain and expand in future</li>
<li>lots of module repair and initialisation code is refactored and generally given a full pass</li>
<li>the core mappings cache regeneration routine now takes transaction checkpoints throughout its job to save progress and reduce journal size</li>
<li>master definition critical error detection code is no longer hardcoded!</li>
<li>mapping storage repair code is no longer hardcoded!</li>
<li>similar files repair code is no longer hardcoded!</li>
<li>parent or sibling cache repair repopulation is no longer hardcoded!</li>
<li>the local hashes cache module can now repopulate itself during repair</li>
<li>the notes fast search table can now repopulate itself during repair</li>
<li>the similar files search tree table can now rebuild itself during repair</li>
</ul>
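A minimal sketch of the operator normalisation the new parser performs. This is not hydrus's actual client api parser code--`parse_operator` and `NOT_EQUAL_ALIASES` are illustrative names--it only shows how the advertised aliases ('≠', '!=', 'is not', "isn't") could collapse to the canonical not-equal operator:

# illustrative sketch, not the hydrus client api parser
NOT_EQUAL_ALIASES = ( '\u2260', '!=', 'is not', "isn't" )

def parse_operator( phrase: str ) -> str:
    # normalise a phrase like "height isn't 1080" to a canonical operator character
    phrase = phrase.lower()
    for alias in NOT_EQUAL_ALIASES:
        if alias in phrase:
            return '\u2260'
    for operator in ( '\u2248', '<', '>', '=' ):
        if operator in phrase:
            return operator
    return '=' # assumption: a bare 'height 1080' defaults to equality

parse_operator( 'height != 640' ) # '\u2260'
parse_operator( "duration isn't 0" ) # '\u2260'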
<li><h3 id="version_454"><a href="#version_454">version 454</a></h3></li>
<ul>
<li>misc:</li>

View File

@ -141,6 +141,9 @@ import_folder_string_lookup[ IMPORT_FOLDER_DELETE ] = 'delete the file'
import_folder_string_lookup[ IMPORT_FOLDER_IGNORE ] = 'leave the file alone, do not reattempt it'
import_folder_string_lookup[ IMPORT_FOLDER_MOVE ] = 'move the file'
EXIT_SESSION_SESSION_NAME = 'exit session'
LAST_SESSION_SESSION_NAME = 'last session'
MEDIA_VIEWER_ACTION_SHOW_WITH_NATIVE = 0
MEDIA_VIEWER_ACTION_SHOW_WITH_NATIVE_PAUSED = 1
MEDIA_VIEWER_ACTION_SHOW_BEHIND_EMBED = 2
@ -397,6 +400,9 @@ SUCCESSFUL_IMPORT_STATES = { STATUS_SUCCESSFUL_AND_NEW, STATUS_SUCCESSFUL_BUT_RE
UNSUCCESSFUL_IMPORT_STATES = { STATUS_DELETED, STATUS_ERROR, STATUS_VETOED }
FAILED_IMPORT_STATES = { STATUS_ERROR, STATUS_VETOED }
UNICODE_ALMOST_EQUAL_TO = '\u2248'
UNICODE_NOT_EQUAL_TO = '\u2260'
ZOOM_NEAREST = 0 # pixelly garbage
ZOOM_LINEAR = 1 # simple and quick
ZOOM_AREA = 2 # for shrinking without moire

View File

@ -396,7 +396,7 @@ class Controller( HydrusController.HydrusController ):
else:
raise HydrusExceptions.QtDeadWindowException('Parent Window was destroyed before Qt command was called!')
raise HydrusExceptions.QtDeadWindowException( 'Parent Window was destroyed before Qt command was called!' )
@ -1386,6 +1386,14 @@ class Controller( HydrusController.HydrusController ):
def ReportLastSessionLoaded( self, gui_session ):
if self._last_last_session_hash is None:
self._last_last_session_hash = gui_session.GetSerialisedHash()
def ReportFirstSessionLoaded( self ):
job = self.CallRepeating( 5.0, 180.0, ClientDaemons.DAEMONCheckImportFolders )
@ -1609,7 +1617,7 @@ class Controller( HydrusController.HydrusController ):
name = session.GetName()
if name == 'last session':
if name == CC.LAST_SESSION_SESSION_NAME:
session_hash = session.GetSerialisedHash()
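The two controller changes above implement the changelog's 'ui freezes' work: the serialised hash of the loaded 'last session' is recorded up front, so later saves can be skipped when nothing has changed. A minimal sketch of the idea, with assumed names--hydrus's real version hashes its own serialisable session objects, not pickled data:

import hashlib
import pickle

class LastSessionSaveSkipper:
    
    def __init__( self ):
        self._last_session_hash = None
    
    def _get_serialised_hash( self, session_state ) -> bytes:
        # stand-in for gui_session.GetSerialisedHash()
        return hashlib.sha256( pickle.dumps( session_state ) ).digest()
    
    def report_session_loaded( self, session_state ):
        # initialising on load means a client with no changes never saves a new 'last session' at all
        self._last_session_hash = self._get_serialised_hash( session_state )
    
    def save_if_changed( self, session_state, save_callable ) -> bool:
        session_hash = self._get_serialised_hash( session_state )
        if session_hash == self._last_session_hash:
            return False # no saveable changes--skip the redundant CPU work
        save_callable( session_state )
        self._last_session_hash = session_hash
        return True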

View File

@ -383,9 +383,19 @@ def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_no
else:
return HydrusData.TimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, history_suffix = history_suffix, show_seconds = show_seconds, no_prefix = no_prefix )
return HydrusData.BaseTimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, history_suffix = history_suffix, show_seconds = show_seconds, no_prefix = no_prefix )
HydrusData.TimestampToPrettyTimeDelta = TimestampToPrettyTimeDelta
def ToHumanBytes( size ):
sig_figs = HG.client_controller.new_options.GetInteger( 'human_bytes_sig_figs' )
return HydrusData.BaseToHumanBytes( size, sig_figs = sig_figs )
HydrusData.ToHumanBytes = ToHumanBytes
class Booru( HydrusData.HydrusYAMLBase ):
yaml_tag = '!Booru'
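The ToHumanBytes hook above routes the new 'human_bytes_sig_figs' option into the base byte-rendering code. As a rough illustration of the significant-figures behaviour the changelog describes--this mirrors the idea, not hydrus's actual BaseToHumanBytes:

def to_human_bytes( size: int, sig_figs: int = 3 ) -> str:
    # 21700 -> '21.2KB', 1331439862 -> '1.24GB'; sizes under 1KB are printed plain
    if size < 1024:
        return '{}B'.format( size )
    value = float( size )
    suffixes = ( 'B', 'KB', 'MB', 'GB', 'TB', 'PB' )
    suffix_index = 0
    while value >= 1024 and suffix_index < len( suffixes ) - 1:
        value /= 1024
        suffix_index += 1
    # keep sig_figs significant figures by padding with decimal places as needed
    integer_digits = len( str( int( value ) ) )
    decimal_places = max( 0, sig_figs - integer_digits )
    return '{:.{}f}{}'.format( value, decimal_places, suffixes[ suffix_index ] )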

View File

@ -22,7 +22,7 @@ def GetClientDefaultOptions():
options[ 'fullscreen_cache_size' ] = 150 * 1048576
options[ 'thumbnail_dimensions' ] = [ 150, 125 ]
options[ 'password' ] = None
options[ 'default_gui_session' ] = 'last session'
options[ 'default_gui_session' ] = CC.LAST_SESSION_SESSION_NAME
options[ 'idle_period' ] = 60 * 30
options[ 'idle_mouse_period' ] = 60 * 10
options[ 'idle_normal' ] = True

View File

@ -390,6 +390,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
self._dictionary[ 'integers' ][ 'system_busy_cpu_percent' ] = 50
self._dictionary[ 'integers' ][ 'human_bytes_sig_figs' ] = 3
#
self._dictionary[ 'keys' ] = {}
@ -445,6 +447,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
self._dictionary[ 'noneable_strings' ][ 'no_proxy' ] = '127.0.0.1'
self._dictionary[ 'noneable_strings' ][ 'qt_style_name' ] = None
self._dictionary[ 'noneable_strings' ][ 'qt_stylesheet_name' ] = None
self._dictionary[ 'noneable_strings' ][ 'last_advanced_file_deletion_reason' ] = None
self._dictionary[ 'strings' ] = {}

View File

@ -172,12 +172,14 @@ NUMBER_TEST_OPERATOR_LESS_THAN = 0
NUMBER_TEST_OPERATOR_GREATER_THAN = 1
NUMBER_TEST_OPERATOR_EQUAL = 2
NUMBER_TEST_OPERATOR_APPROXIMATE = 3
NUMBER_TEST_OPERATOR_NOT_EQUAL = 4
number_test_operator_to_str_lookup = {
NUMBER_TEST_OPERATOR_LESS_THAN : '<',
NUMBER_TEST_OPERATOR_GREATER_THAN : '>',
NUMBER_TEST_OPERATOR_EQUAL : '=',
NUMBER_TEST_OPERATOR_APPROXIMATE : '\u2248'
NUMBER_TEST_OPERATOR_APPROXIMATE : CC.UNICODE_ALMOST_EQUAL_TO,
NUMBER_TEST_OPERATOR_NOT_EQUAL : CC.UNICODE_NOT_EQUAL_TO
}
number_test_str_to_operator_lookup = { value : key for ( key, value ) in number_test_operator_to_str_lookup.items() }
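With the ≠ entry added, the forward lookup renders operators for display and the inverted dict parses them back. A small sketch of how a numeric test driven by these operator constants behaves--illustrative only, not hydrus's actual predicate testing code:

UNICODE_ALMOST_EQUAL_TO = '\u2248'
UNICODE_NOT_EQUAL_TO = '\u2260'

def passes_number_test( operator: str, target: int, actual: int ) -> bool:
    if operator == '<': return actual < target
    if operator == '>': return actual > target
    if operator == '=': return actual == target
    if operator == UNICODE_NOT_EQUAL_TO: return actual != target
    if operator == UNICODE_ALMOST_EQUAL_TO:
        # 'about' is +/-15% in most of the predicates in this file
        return target * 0.85 <= actual <= target * 1.15
    raise ValueError( 'unknown operator: {}'.format( operator ) )

passes_number_test( UNICODE_NOT_EQUAL_TO, 640, 1080 ) # True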
@ -380,7 +382,7 @@ class FileSystemPredicates( object ):
self._common_info[ max_label ] = now - age
elif operator == '\u2248':
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ min_label ] = now - int( age * 1.15 )
self._common_info[ max_label ] = now - int( age * 0.85 )
@ -408,7 +410,7 @@ class FileSystemPredicates( object ):
self._common_info[ min_label ] = timestamp
self._common_info[ max_label ] = timestamp + 86400
elif operator == '\u2248':
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ min_label ] = timestamp - 86400 * 30
self._common_info[ max_label ] = timestamp + 86400 * 30
@ -432,7 +434,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_duration' ] = duration
elif operator == '>': self._common_info[ 'min_duration' ] = duration
elif operator == '=': self._common_info[ 'duration' ] = duration
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_duration' ] = duration
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
if duration == 0:
@ -453,6 +456,7 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_framerate' ] = framerate
elif operator == '>': self._common_info[ 'min_framerate' ] = framerate
elif operator == '=': self._common_info[ 'framerate' ] = framerate
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_framerate' ] = framerate
if predicate_type == PREDICATE_TYPE_SYSTEM_NUM_FRAMES:
@ -462,7 +466,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_num_frames' ] = num_frames
elif operator == '>': self._common_info[ 'min_num_frames' ] = num_frames
elif operator == '=': self._common_info[ 'num_frames' ] = num_frames
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_num_frames' ] = num_frames
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
if num_frames == 0:
@ -496,7 +501,11 @@ class FileSystemPredicates( object ):
self._common_info[ 'max_ratio' ] = ( ratio_width, ratio_height )
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO:
self._common_info[ 'not_ratio' ] = ( ratio_width, ratio_height )
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ 'min_ratio' ] = ( ratio_width * 0.85, ratio_height )
self._common_info[ 'max_ratio' ] = ( ratio_width * 1.15, ratio_height )
@ -512,7 +521,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_size' ] = size
elif operator == '>': self._common_info[ 'min_size' ] = size
elif operator == '=': self._common_info[ 'size' ] = size
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_size' ] = size
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ 'min_size' ] = int( size * 0.85 )
self._common_info[ 'max_size' ] = int( size * 1.15 )
@ -530,7 +540,7 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_tag_as_number' ] = ( namespace, num )
elif operator == '>': self._common_info[ 'min_tag_as_number' ] = ( namespace, num )
elif operator == '\u2248':
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ 'min_tag_as_number' ] = ( namespace, int( num * 0.85 ) )
self._common_info[ 'max_tag_as_number' ] = ( namespace, int( num * 1.15 ) )
@ -544,7 +554,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_width' ] = width
elif operator == '>': self._common_info[ 'min_width' ] = width
elif operator == '=': self._common_info[ 'width' ] = width
elif operator == '\u2248':
elif operator == '\u2260': self._common_info[ 'not_width' ] = width
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
if width == 0: self._common_info[ 'width' ] = 0
else:
@ -564,7 +575,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_num_pixels' ] = num_pixels
elif operator == '>': self._common_info[ 'min_num_pixels' ] = num_pixels
elif operator == '=': self._common_info[ 'num_pixels' ] = num_pixels
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_num_pixels' ] = num_pixels
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
self._common_info[ 'min_num_pixels' ] = int( num_pixels * 0.85 )
self._common_info[ 'max_num_pixels' ] = int( num_pixels * 1.15 )
@ -578,9 +590,13 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_height' ] = height
elif operator == '>': self._common_info[ 'min_height' ] = height
elif operator == '=': self._common_info[ 'height' ] = height
elif operator == '\u2248':
elif operator == '\u2260': self._common_info[ 'not_height' ] = height
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
if height == 0: self._common_info[ 'height' ] = 0
if height == 0:
self._common_info[ 'height' ] = 0
else:
self._common_info[ 'min_height' ] = int( height * 0.85 )
@ -626,7 +642,8 @@ class FileSystemPredicates( object ):
if operator == '<': self._common_info[ 'max_num_words' ] = num_words
elif operator == '>': self._common_info[ 'min_num_words' ] = num_words
elif operator == '=': self._common_info[ 'num_words' ] = num_words
elif operator == '\u2248':
elif operator == CC.UNICODE_NOT_EQUAL_TO: self._common_info[ 'not_num_words' ] = num_words
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
if num_words == 0: self._common_info[ 'num_words' ] = 0
else:
@ -2227,7 +2244,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
pretty_operator = 'before '
elif operator == '\u2248':
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
pretty_operator = 'around '
@ -2255,7 +2272,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
pretty_operator = 'on the day of '
elif operator == '\u2248':
elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
pretty_operator = 'a month either side of '
@ -2452,7 +2469,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
n_text = namespace
if operator == '\u2248':
if operator == CC.UNICODE_ALMOST_EQUAL_TO:
o_text = ' about '
@ -2476,7 +2493,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
( operator, num_relationships, dupe_type ) = self._value
if operator == '\u2248':
if operator == CC.UNICODE_ALMOST_EQUAL_TO:
o_text = ' about '

View File

@ -1334,7 +1334,7 @@ class ServiceRestricted( ServiceRemote ):
( message, message_created ) = self._account.GetMessageAndTimestamp()
if message_created != original_message_created and not HydrusData.TimeHasPassed( message_created + ( 86400 * 5 ) ):
if message != '' and message_created != original_message_created and not HydrusData.TimeHasPassed( message_created + ( 86400 * 5 ) ):
m = 'New message for your account on {}:'.format( self._name )
m += os.linesep * 2

View File

@ -1137,32 +1137,6 @@ class DB( HydrusDB.HydrusDB ):
self._CacheCombinedFilesDisplayMappingsRegeneratePending( tag_service_id, status_hook = status_hook )
def _CacheLocalHashIdsGenerate( self ):
self.modules_hashes_local_cache.ClearCache()
self._controller.frame_splash_status.SetSubtext( 'reading local file data' )
local_hash_ids = self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.combined_local_file_service_id )
BLOCK_SIZE = 10000
num_to_do = len( local_hash_ids )
for ( i, block_of_hash_ids ) in enumerate( HydrusData.SplitListIntoChunks( local_hash_ids, BLOCK_SIZE ) ):
self._controller.frame_splash_status.SetSubtext( 'caching local file data {}'.format( HydrusData.ConvertValueRangeToPrettyString( i * BLOCK_SIZE, num_to_do ) ) )
self.modules_hashes_local_cache.AddHashIdsToCache( block_of_hash_ids )
table_names = self.modules_hashes_local_cache.GetExpectedTableNames()
for table_name in table_names:
self.modules_db_maintenance.AnalyzeTable( table_name )
def _CacheLocalTagIdsGenerate( self ):
# update this to be a thing for the self.modules_tags_local_cache, maybe give it the ac cach as a param, or just boot that lad with it
@ -1189,7 +1163,7 @@ class DB( HydrusDB.HydrusDB ):
self.modules_tags_local_cache.AddTagIdsToCache( block_of_tag_ids )
table_names = self.modules_tags_local_cache.GetExpectedTableNames()
table_names = self.modules_tags_local_cache.GetExpectedInitialTableNames()
for table_name in table_names:
@ -5078,7 +5052,7 @@ class DB( HydrusDB.HydrusDB ):
# doesn't work for '= 0' or '< 1'
if operator == '\u2248':
if operator == CC.UNICODE_ALMOST_EQUAL_TO:
lower_bound = 0.8 * num_relationships
upper_bound = 1.2 * num_relationships
@ -7504,7 +7478,7 @@ class DB( HydrusDB.HydrusDB ):
content_phrase = viewtime_phrase
if operator == '\u2248':
if operator == CC.UNICODE_ALMOST_EQUAL_TO:
lower_bound = int( 0.8 * viewing_value )
upper_bound = int( 1.2 * viewing_value )
@ -7702,6 +7676,7 @@ class DB( HydrusDB.HydrusDB ):
if 'min_size' in simple_preds: files_info_predicates.append( 'size > ' + str( simple_preds[ 'min_size' ] ) )
if 'size' in simple_preds: files_info_predicates.append( 'size = ' + str( simple_preds[ 'size' ] ) )
if 'not_size' in simple_preds: files_info_predicates.append( 'size != ' + str( simple_preds[ 'not_size' ] ) )
if 'max_size' in simple_preds: files_info_predicates.append( 'size < ' + str( simple_preds[ 'max_size' ] ) )
if 'mimes' in simple_preds:
@ -7729,14 +7704,17 @@ class DB( HydrusDB.HydrusDB ):
if 'min_width' in simple_preds: files_info_predicates.append( 'width > ' + str( simple_preds[ 'min_width' ] ) )
if 'width' in simple_preds: files_info_predicates.append( 'width = ' + str( simple_preds[ 'width' ] ) )
if 'not_width' in simple_preds: files_info_predicates.append( 'width != ' + str( simple_preds[ 'not_width' ] ) )
if 'max_width' in simple_preds: files_info_predicates.append( 'width < ' + str( simple_preds[ 'max_width' ] ) )
if 'min_height' in simple_preds: files_info_predicates.append( 'height > ' + str( simple_preds[ 'min_height' ] ) )
if 'height' in simple_preds: files_info_predicates.append( 'height = ' + str( simple_preds[ 'height' ] ) )
if 'not_height' in simple_preds: files_info_predicates.append( 'height != ' + str( simple_preds[ 'not_height' ] ) )
if 'max_height' in simple_preds: files_info_predicates.append( 'height < ' + str( simple_preds[ 'max_height' ] ) )
if 'min_num_pixels' in simple_preds: files_info_predicates.append( 'width * height > ' + str( simple_preds[ 'min_num_pixels' ] ) )
if 'num_pixels' in simple_preds: files_info_predicates.append( 'width * height = ' + str( simple_preds[ 'num_pixels' ] ) )
if 'not_num_pixels' in simple_preds: files_info_predicates.append( 'width * height != ' + str( simple_preds[ 'not_num_pixels' ] ) )
if 'max_num_pixels' in simple_preds: files_info_predicates.append( 'width * height < ' + str( simple_preds[ 'max_num_pixels' ] ) )
if 'min_ratio' in simple_preds:
@ -7751,6 +7729,12 @@ class DB( HydrusDB.HydrusDB ):
files_info_predicates.append( '( width * 1.0 ) / height = ' + str( float( ratio_width ) ) + ' / ' + str( ratio_height ) )
if 'not_ratio' in simple_preds:
( ratio_width, ratio_height ) = simple_preds[ 'not_ratio' ]
files_info_predicates.append( '( width * 1.0 ) / height != ' + str( float( ratio_width ) ) + ' / ' + str( ratio_height ) )
if 'max_ratio' in simple_preds:
( ratio_width, ratio_height ) = simple_preds[ 'max_ratio' ]
@ -7766,6 +7750,12 @@ class DB( HydrusDB.HydrusDB ):
if num_words == 0: files_info_predicates.append( '( num_words IS NULL OR num_words = 0 )' )
else: files_info_predicates.append( 'num_words = ' + str( num_words ) )
if 'not_num_words' in simple_preds:
num_words = simple_preds[ 'not_num_words' ]
files_info_predicates.append( '( num_words IS NULL OR num_words != {} )'.format( num_words ) )
if 'max_num_words' in simple_preds:
max_num_words = simple_preds[ 'max_num_words' ]
@ -7788,6 +7778,12 @@ class DB( HydrusDB.HydrusDB ):
files_info_predicates.append( 'duration = ' + str( duration ) )
if 'not_duration' in simple_preds:
duration = simple_preds[ 'not_duration' ]
files_info_predicates.append( '( duration IS NULL OR duration != {} )'.format( duration ) )
if 'max_duration' in simple_preds:
max_duration = simple_preds[ 'max_duration' ]
@ -7796,38 +7792,50 @@ class DB( HydrusDB.HydrusDB ):
else: files_info_predicates.append( '( duration < ' + str( max_duration ) + ' OR duration IS NULL )' )
if 'min_framerate' in simple_preds or 'framerate' in simple_preds or 'max_framerate' in simple_preds:
if 'min_framerate' in simple_preds or 'framerate' in simple_preds or 'max_framerate' in simple_preds or 'not_framerate' in simple_preds:
min_framerate_sql = None
max_framerate_sql = None
if 'min_framerate' in simple_preds:
min_framerate_sql = simple_preds[ 'min_framerate' ] * 1.05
if 'framerate' in simple_preds:
min_framerate_sql = simple_preds[ 'framerate' ] * 0.95
max_framerate_sql = simple_preds[ 'framerate' ] * 1.05
if 'max_framerate' in simple_preds:
max_framerate_sql = simple_preds[ 'max_framerate' ] * 0.95
pred = '( duration IS NOT NULL AND duration != 0 AND num_frames != 0 AND num_frames IS NOT NULL AND {})'
if min_framerate_sql is None:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) < {}'.format( max_framerate_sql ) )
elif max_framerate_sql is None:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) > {}'.format( min_framerate_sql ) )
else:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) BETWEEN {} AND {}'.format( min_framerate_sql, max_framerate_sql ) )
if 'not_framerate' in simple_preds:
pred = '( duration IS NULL OR num_frames = 0 OR ( duration IS NOT NULL AND duration != 0 AND num_frames != 0 AND num_frames IS NOT NULL AND {} ) )'
min_framerate_sql = simple_preds[ 'not_framerate' ] * 0.95
max_framerate_sql = simple_preds[ 'not_framerate' ] * 1.05
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) NOT BETWEEN {} AND {}'.format( min_framerate_sql, max_framerate_sql ) )
else:
min_framerate_sql = None
max_framerate_sql = None
pred = '( duration IS NOT NULL AND duration != 0 AND num_frames != 0 AND num_frames IS NOT NULL AND {} )'
if 'min_framerate' in simple_preds:
min_framerate_sql = simple_preds[ 'min_framerate' ] * 1.05
if 'framerate' in simple_preds:
min_framerate_sql = simple_preds[ 'framerate' ] * 0.95
max_framerate_sql = simple_preds[ 'framerate' ] * 1.05
if 'max_framerate' in simple_preds:
max_framerate_sql = simple_preds[ 'max_framerate' ] * 0.95
if min_framerate_sql is None:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) < {}'.format( max_framerate_sql ) )
elif max_framerate_sql is None:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) > {}'.format( min_framerate_sql ) )
else:
pred = pred.format( '( num_frames * 1.0 ) / ( duration / 1000.0 ) BETWEEN {} AND {}'.format( min_framerate_sql, max_framerate_sql ) )
files_info_predicates.append( pred )
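Note the shape of the new != clauses above: they pair the inequality with an IS NULL test (e.g. '( duration IS NULL OR duration != {} )'). In SQLite, NULL != x evaluates to NULL rather than true, so files with no duration would silently fall out of a 'duration ≠ x' search without that guard. A runnable demonstration--table name borrowed from the diff, data invented:

import sqlite3

con = sqlite3.connect( ':memory:' )
con.execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, duration INTEGER );' )
con.executemany( 'INSERT INTO files_info VALUES ( ?, ? );', [ ( 1, 500 ), ( 2, None ), ( 3, 1000 ) ] )

# the naive predicate drops the NULL-duration row
print( con.execute( 'SELECT hash_id FROM files_info WHERE duration != 500;' ).fetchall() ) # [(3,)]

# the guarded predicate keeps it, matching the diff above
print( con.execute( 'SELECT hash_id FROM files_info WHERE duration IS NULL OR duration != 500;' ).fetchall() ) # [(2,), (3,)]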
@ -7841,6 +7849,12 @@ class DB( HydrusDB.HydrusDB ):
if num_frames == 0: files_info_predicates.append( '( num_frames IS NULL OR num_frames = 0 )' )
else: files_info_predicates.append( 'num_frames = ' + str( num_frames ) )
if 'not_num_frames' in simple_preds:
num_frames = simple_preds[ 'not_num_frames' ]
files_info_predicates.append( '( num_frames IS NULL OR num_frames != {} )'.format( num_frames ) )
if 'max_num_frames' in simple_preds:
max_num_frames = simple_preds[ 'max_num_frames' ]
@ -8055,7 +8069,7 @@ class DB( HydrusDB.HydrusDB ):
# floats are a pain! as is storing rating as 0.0-1.0 and then allowing number of stars to change!
if operator == '\u2248':
if operator == CC.UNICODE_ALMOST_EQUAL_TO:
predicate = str( ( value - half_a_star_value ) * 0.8 ) + ' < rating AND rating < ' + str( ( value + half_a_star_value ) * 1.2 )
@ -8087,7 +8101,7 @@ class DB( HydrusDB.HydrusDB ):
for ( operator, num_relationships, dupe_type ) in system_predicates.GetDuplicateRelationshipCountPredicates():
only_do_zero = ( operator in ( '=', '\u2248' ) and num_relationships == 0 ) or ( operator == '<' and num_relationships == 1 )
only_do_zero = ( operator in ( '=', CC.UNICODE_ALMOST_EQUAL_TO ) and num_relationships == 0 ) or ( operator == '<' and num_relationships == 1 )
include_zero = operator == '<'
if only_do_zero:
@ -8110,7 +8124,7 @@ class DB( HydrusDB.HydrusDB ):
for ( view_type, viewing_locations, operator, viewing_value ) in system_predicates.GetFileViewingStatsPredicates():
only_do_zero = ( operator in ( '=', '\u2248' ) and viewing_value == 0 ) or ( operator == '<' and viewing_value == 1 )
only_do_zero = ( operator in ( '=', CC.UNICODE_ALMOST_EQUAL_TO ) and viewing_value == 0 ) or ( operator == '<' and viewing_value == 1 )
include_zero = operator == '<'
if only_do_zero:
@ -8460,7 +8474,7 @@ class DB( HydrusDB.HydrusDB ):
for ( operator, num_relationships, dupe_type ) in system_predicates.GetDuplicateRelationshipCountPredicates():
only_do_zero = ( operator in ( '=', '\u2248' ) and num_relationships == 0 ) or ( operator == '<' and num_relationships == 1 )
only_do_zero = ( operator in ( '=', CC.UNICODE_ALMOST_EQUAL_TO ) and num_relationships == 0 ) or ( operator == '<' and num_relationships == 1 )
include_zero = operator == '<'
if only_do_zero:
@ -8551,7 +8565,7 @@ class DB( HydrusDB.HydrusDB ):
for ( view_type, viewing_locations, operator, viewing_value ) in system_predicates.GetFileViewingStatsPredicates():
only_do_zero = ( operator in ( '=', '\u2248' ) and viewing_value == 0 ) or ( operator == '<' and viewing_value == 1 )
only_do_zero = ( operator in ( '=', CC.UNICODE_ALMOST_EQUAL_TO ) and viewing_value == 0 ) or ( operator == '<' and viewing_value == 1 )
include_zero = operator == '<'
if only_do_zero:
@ -11668,20 +11682,22 @@ class DB( HydrusDB.HydrusDB ):
#
self.modules_files_storage = ClientDBFilesStorage.ClientDBFilesStorage( self._c, self.modules_services, self.modules_texts )
self._modules.append( self.modules_files_storage )
#
self.modules_tags_local_cache = ClientDBDefinitionsCache.ClientDBCacheLocalTags( self._c, self.modules_tags )
self._modules.append( self.modules_tags_local_cache )
self.modules_hashes_local_cache = ClientDBDefinitionsCache.ClientDBCacheLocalHashes( self._c, self.modules_hashes )
self.modules_hashes_local_cache = ClientDBDefinitionsCache.ClientDBCacheLocalHashes( self._c, self.modules_hashes, self.modules_services, self.modules_files_storage )
self._modules.append( self.modules_hashes_local_cache )
#
self.modules_files_storage = ClientDBFilesStorage.ClientDBFilesStorage( self._c, self.modules_services, self.modules_texts )
self._modules.append( self.modules_files_storage )
self.modules_mappings_storage = ClientDBMappingsStorage.ClientDBMappingsStorage( self._c, self.modules_services )
self._modules.append( self.modules_mappings_storage )
@ -13085,6 +13101,7 @@ class DB( HydrusDB.HydrusDB ):
elif action == 'gui_session': result = self.modules_serialisable.GetGUISession( *args, **kwargs )
elif action == 'hash_ids_to_hashes': result = self.modules_hashes_local_cache.GetHashIdsToHashes( *args, **kwargs )
elif action == 'hash_status': result = self._GetHashStatus( *args, **kwargs )
elif action == 'have_hashed_serialised_objects': result = self.modules_serialisable.HaveHashedJSONDumps( *args, **kwargs )
elif action == 'ideal_client_files_locations': result = self._GetIdealClientFilesLocations( *args, **kwargs )
elif action == 'imageboards': result = self.modules_serialisable.GetYAMLDump( ClientDBSerialisable.YAML_DUMP_ID_IMAGEBOARD, *args, **kwargs )
elif action == 'inbox_hashes': result = self._FilterInboxHashes( *args, **kwargs )
@ -13244,7 +13261,7 @@ class DB( HydrusDB.HydrusDB ):
job_key.SetVariable( 'popup_text_1', message )
self._controller.frame_splash_status.SetSubtext( message )
self._CacheLocalHashIdsGenerate()
self.modules_hashes_local_cache.Repopulate()
finally:
@ -13682,6 +13699,8 @@ class DB( HydrusDB.HydrusDB ):
self._CacheSpecificMappingsGenerate( file_service_id, tag_service_id )
self._cursor_transaction_wrapper.CommitAndBegin()
for tag_service_id in tag_service_ids:
@ -13704,6 +13723,8 @@ class DB( HydrusDB.HydrusDB ):
self._CacheCombinedFilesMappingsGenerate( tag_service_id )
self._cursor_transaction_wrapper.CommitAndBegin()
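The CommitAndBegin calls added above are the 'transaction checkpoints' from the changelog: committing between chunks of a long regeneration job caps the journal size and preserves completed work if a later step fails. A generic sketch of the pattern, with assumed names:

import sqlite3

def regenerate_in_checkpoints( con: sqlite3.Connection, jobs ):
    # commit after each finished unit so good work survives a mid-job crash
    for job in jobs:
        job( con )
        con.commit() # checkpoint: the journal is flushed and this chunk's progress is durable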
if tag_service_key is None:
@ -13889,221 +13910,23 @@ class DB( HydrusDB.HydrusDB ):
def _RepairDB( self ):
def _RepairDB( self, version ):
# migrate most of this gubbins to the new modules system, and HydrusDB tbh!
self._controller.frame_splash_status.SetText( 'checking database' )
( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone()
HydrusDB.HydrusDB._RepairDB( self )
HydrusDB.HydrusDB._RepairDB( self, version )
self._weakref_media_result_cache = ClientMediaResultCache.MediaResultCache()
tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
file_service_ids = self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES )
# master
existing_master_tables = self._STS( self._Execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) )
main_master_tables = set()
main_master_tables.add( 'hashes' )
main_master_tables.add( 'namespaces' )
main_master_tables.add( 'subtags' )
main_master_tables.add( 'tags' )
main_master_tables.add( 'texts' )
if version >= 396:
main_master_tables.add( 'labels' )
main_master_tables.add( 'notes' )
missing_main_tables = main_master_tables.difference( existing_master_tables )
if len( missing_main_tables ) > 0:
message = 'On boot, some required master tables were missing. This could be due to the entire \'master\' database file being missing or due to some other problem. Critical data is missing, so the client cannot boot! The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_main_tables )
message += os.linesep * 2
message += 'The boot will fail once you click ok. If you do not know what happened and how to fix this, please take a screenshot and contact hydrus dev.'
self._controller.SafeShowCriticalMessage( 'Error', message )
raise Exception( 'Master database was invalid!' )
if 'local_hashes' not in existing_master_tables:
message = 'On boot, the \'local_hashes\' table was missing.'
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate it--empty, without data--which should at least let the client boot. The client can repopulate the table in through the file maintenance jobs, the \'regenerate non-standard hashes\' job. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
BlockingSafeShowMessage( message )
self._Execute( 'CREATE TABLE external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' )
self._CreateIndex( 'external_master.local_hashes', [ 'md5' ] )
self._CreateIndex( 'external_master.local_hashes', [ 'sha1' ] )
self._CreateIndex( 'external_master.local_hashes', [ 'sha512' ] )
# mappings
existing_mapping_tables = self._STS( self._Execute( 'SELECT name FROM external_mappings.sqlite_master WHERE type = ?;', ( 'table', ) ) )
main_mappings_tables = set()
for service_id in tag_service_ids:
main_mappings_tables.update( ( name.split( '.' )[1] for name in ClientDBMappingsStorage.GenerateMappingsTableNames( service_id ) ) )
missing_main_tables = sorted( main_mappings_tables.difference( existing_mapping_tables ) )
if len( missing_main_tables ) > 0:
message = 'On boot, some important mappings tables were missing! This could be due to the entire \'mappings\' database file being missing or some other problem. The tags in these tables are lost. The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_main_tables )
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate these tables--empty, without data--which should at least let the client boot. If the affected tag service(s) are tag repositories, you will want to reset the processing cache so the client can repopulate the tables from your cached update files. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
BlockingSafeShowMessage( message )
for service_id in tag_service_ids:
self.modules_mappings_storage.GenerateMappingsTables( service_id )
# caches
existing_cache_tables = self._STS( self._Execute( 'SELECT name FROM external_caches.sqlite_master WHERE type = ?;', ( 'table', ) ) )
main_cache_tables = set()
main_cache_tables.add( 'shape_vptree' )
main_cache_tables.add( 'shape_maintenance_branch_regen' )
missing_main_tables = sorted( main_cache_tables.difference( existing_cache_tables ) )
if len( missing_main_tables ) > 0:
message = 'On boot, some important caches tables were missing! This could be due to the entire \'caches\' database file being missing or some other problem. Data related to duplicate file search may have been lost. The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_main_tables )
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate these tables--empty, without data--which should at least let the client boot. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
BlockingSafeShowMessage( message )
if version >= 414:
# tag display caches
tag_display_cache_service_ids = list( self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) )
missing_tag_sibling_cache_tables = []
for tag_service_id in tag_display_cache_service_ids:
( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
actual_missing = cache_actual_tag_siblings_lookup_table_name.split( '.' )[1] not in existing_cache_tables
ideal_missing = cache_ideal_tag_siblings_lookup_table_name.split( '.' )[1] not in existing_cache_tables
if actual_missing:
missing_tag_sibling_cache_tables.append( cache_actual_tag_siblings_lookup_table_name )
if ideal_missing:
missing_tag_sibling_cache_tables.append( cache_ideal_tag_siblings_lookup_table_name )
if actual_missing or ideal_missing:
self.modules_tag_siblings.Generate( tag_service_id )
self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
self._CreateIndex( cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
if len( missing_tag_sibling_cache_tables ) > 0:
missing_tag_sibling_cache_tables.sort()
message = 'On boot, some important tag sibling cache tables were missing! This could be due to the entire \'caches\' database file being missing or some other problem. All of this data can be regenerated. The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_tag_sibling_cache_tables )
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate and repopulate these tables with the correct data. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
BlockingSafeShowMessage( message )
missing_tag_parent_cache_tables = []
for tag_service_id in tag_display_cache_service_ids:
( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = ClientDBTagParents.GenerateTagParentsLookupCacheTableNames( tag_service_id )
actual_missing = cache_actual_tag_parents_lookup_table_name.split( '.' )[1] not in existing_cache_tables
ideal_missing = cache_ideal_tag_parents_lookup_table_name.split( '.' )[1] not in existing_cache_tables
if actual_missing:
missing_tag_parent_cache_tables.append( cache_actual_tag_parents_lookup_table_name )
if ideal_missing:
missing_tag_parent_cache_tables.append( cache_ideal_tag_parents_lookup_table_name )
if actual_missing or ideal_missing:
self.modules_tag_parents.Generate( tag_service_id )
self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
if len( missing_tag_parent_cache_tables ) > 0:
missing_tag_parent_cache_tables.sort()
message = 'On boot, some important tag parent cache tables were missing! This could be due to the entire \'caches\' database file being missing or some other problem. All of this data can be regenerated. The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_tag_parent_cache_tables )
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate and repopulate these tables with the correct data. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
BlockingSafeShowMessage( message )
mappings_cache_tables = set()
for ( file_service_id, tag_service_id ) in itertools.product( file_service_ids, tag_service_ids ):
@ -14140,12 +13963,10 @@ class DB( HydrusDB.HydrusDB ):
BlockingSafeShowMessage( message )
# quick hack
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
self._RegenerateTagMappingsCache()
# delete this when mappings caches are moved to modules that will auto-heal this!
for ( file_service_id, tag_service_id ) in itertools.product( file_service_ids, tag_service_ids ):
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
@ -14162,6 +13983,8 @@ class DB( HydrusDB.HydrusDB ):
self._CreateIndex( cache_display_pending_mappings_table_name, [ 'tag_id', 'hash_id' ], unique = True )
self._cursor_transaction_wrapper.CommitAndBegin()
if version >= 424:
@ -14226,6 +14049,8 @@ class DB( HydrusDB.HydrusDB ):
self._CacheTagsGenerate( file_service_id, tag_service_id )
self._CacheTagsPopulate( file_service_id, tag_service_id )
self._cursor_transaction_wrapper.CommitAndBegin()
@ -16391,7 +16216,7 @@ class DB( HydrusDB.HydrusDB ):
#
self._CacheLocalHashIdsGenerate()
self.modules_hashes_local_cache.Repopulate()
if version == 447:

View File

@ -1,29 +1,35 @@
import sqlite3
import typing
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusData
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTags
from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
class ClientDBCacheLocalHashes( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_hashes: ClientDBMaster.ClientDBMasterHashes ):
def __init__( self, cursor: sqlite3.Cursor, modules_hashes: ClientDBMaster.ClientDBMasterHashes, modules_services: ClientDBServices.ClientDBMasterServices, modules_files_storage: ClientDBFilesStorage.ClientDBFilesStorage ):
self.modules_hashes = modules_hashes
self.modules_services = modules_services
self.modules_files_storage = modules_files_storage
self._hash_ids_to_hashes_cache = {}
HydrusDBModule.HydrusDBModule.__init__( self, 'client hashes local cache', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client hashes local cache', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
index_generation_tuples = []
return index_generation_tuples
return {
'external_caches.local_hashes_cache' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );', 429 )
}
def _PopulateHashIdsToHashesCache( self, hash_ids ):
@ -71,9 +77,9 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
def _RepairRepopulateTables( self, table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_hashes_cache ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
self.Repopulate()
def AddHashIdsToCache( self, hash_ids ):
@ -93,15 +99,6 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
self._ExecuteMany( 'DELETE FROM local_hashes_cache WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_caches.local_hashes_cache'
]
return expected_table_names
def GetHash( self, hash_id ) -> str:
self._PopulateHashIdsToHashesCache( ( hash_id, ) )
@ -196,7 +193,26 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
return result is not None
class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
def Repopulate( self ):
self.ClearCache()
HG.client_controller.frame_splash_status.SetSubtext( 'reading local file data' )
local_hash_ids = self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.combined_local_file_service_id )
BLOCK_SIZE = 10000
num_to_do = len( local_hash_ids )
for ( i, block_of_hash_ids ) in enumerate( HydrusData.SplitListIntoChunks( local_hash_ids, BLOCK_SIZE ) ):
HG.client_controller.frame_splash_status.SetSubtext( 'caching local file data {}'.format( HydrusData.ConvertValueRangeToPrettyString( i * BLOCK_SIZE, num_to_do ) ) )
self.AddHashIdsToCache( block_of_hash_ids )
class ClientDBCacheLocalTags( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_tags: ClientDBMaster.ClientDBMasterTags ):
@ -204,14 +220,14 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
self._tag_ids_to_tags_cache = {}
HydrusDBModule.HydrusDBModule.__init__( self, 'client tags local cache', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client tags local cache', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
index_generation_tuples = []
return index_generation_tuples
return {
'external_caches.local_tags_cache' : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );', 400 )
}
def _PopulateTagIdsToTagsCache( self, tag_ids ):
@ -259,9 +275,13 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
def _RepairRepopulateTables( self, table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
message = 'Unfortunately, the local tag cache cannot repopulate itself yet during repair. Once you boot, please run _database->regenerate->local tag cache_. This message has been printed to the log.'
HydrusData.DebugPrint( message )
ClientDBModule.BlockingSafeShowMessage( message )
def AddTagIdsToCache( self, tag_ids ):
@ -281,15 +301,6 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
self._ExecuteMany( 'DELETE FROM local_tags_cache WHERE tag_id = ?;', ( ( tag_id, ) for tag_id in tag_ids ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_caches.local_tags_cache'
]
return expected_table_names
def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
# we actually provide a backup, which we may want to automate later in mappings caches etc...
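This file shows the new module pattern in miniature: table definitions are declared once as name -> ( create query, version added ), and the same dict drives both initial creation and boot-time repair, with a module-specific repopulation step where one is possible. A condensed sketch of that shape, with assumed names--hydrus's ClientDBModule carries considerably more machinery:

import sqlite3

class ModuleSketch:
    
    def __init__( self, cursor: sqlite3.Cursor ):
        self._c = cursor
    
    def _get_initial_table_generation_dict( self ) -> dict:
        # table name -> ( create query with {} for the name, version the table was added )
        return {
            'local_hashes_cache' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, hash BLOB UNIQUE );', 429 )
        }
    
    def create_initial_tables( self ):
        for ( name, ( create_query, version_added ) ) in self._get_initial_table_generation_dict().items():
            self._c.execute( create_query.format( name ) )
    
    def repair( self, db_version: int ):
        # recreate any table that should exist at this db version but is missing, then refill it
        existing = { name for ( name, ) in self._c.execute( 'SELECT name FROM sqlite_master WHERE type = ?;', ( 'table', ) ) }
        missing = []
        for ( name, ( create_query, version_added ) ) in self._get_initial_table_generation_dict().items():
            if db_version >= version_added and name not in existing:
                self._c.execute( create_query.format( name ) )
                missing.append( name )
        if len( missing ) > 0:
            self._repair_repopulate_tables( missing )
    
    def _repair_repopulate_tables( self, table_names ):
        # module-specific: the local hashes cache refills itself from current files,
        # while the local tags cache can only ask the user to regenerate after boot
        pass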

View File

@ -3,17 +3,17 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusGlobals as HG
from hydrus.client import ClientFiles
from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBFilesMetadataBasic
from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBSimilarFiles
from hydrus.client.media import ClientMediaResultCache
class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):
def __init__(
self,
@ -25,7 +25,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
weakref_media_result_cache: ClientMediaResultCache.MediaResultCache
):
HydrusDBModule.HydrusDBModule.__init__( self, 'client files maintenance', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client files maintenance', cursor )
self.modules_hashes = modules_hashes
self.modules_hashes_local_cache = modules_hashes_local_cache
@ -34,11 +34,11 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
self._weakref_media_result_cache = weakref_media_result_cache
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
index_generation_tuples = []
return index_generation_tuples
return {
'external_caches.file_maintenance_jobs' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );', 400 )
}
def AddJobs( self, hash_ids, job_type, time_can_start = 0 ):
@ -72,11 +72,6 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
self._Execute( 'DELETE FROM file_maintenance_jobs WHERE job_type = ?;', ( job_type, ) )
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );' )
def ClearJobs( self, cleared_job_tuples ):
new_file_info = set()
@ -194,15 +189,6 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_caches.file_maintenance_jobs'
]
return expected_table_names
def GetJob( self, job_types = None ):
if job_types is None:

View File

@ -4,32 +4,43 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusExceptions
class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
from hydrus.client.db import ClientDBModule
class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client files metadata', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client files metadata', cursor )
self.inbox_hash_ids = set()
self._InitCaches()
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialIndexGenerationDict( self ) -> dict:
index_generation_tuples = []
index_generation_dict = {}
index_generation_tuples.append( ( 'files_info', [ 'size' ], False ) )
index_generation_tuples.append( ( 'files_info', [ 'mime' ], False ) )
index_generation_tuples.append( ( 'files_info', [ 'width' ], False ) )
index_generation_tuples.append( ( 'files_info', [ 'height' ], False ) )
index_generation_tuples.append( ( 'files_info', [ 'duration' ], False ) )
index_generation_tuples.append( ( 'files_info', [ 'num_frames' ], False ) )
index_generation_dict[ 'main.files_info' ] = [
( [ 'size' ], False, 400 ),
( [ 'mime' ], False, 400 ),
( [ 'width' ], False, 400 ),
( [ 'height' ], False, 400 ),
( [ 'duration' ], False, 400 ),
( [ 'num_frames' ], False, 400 )
]
return index_generation_tuples
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'main.file_inbox' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY );', 400 ),
'main.files_info' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );', 400 )
}
def _InitCaches( self ):
@ -74,22 +85,6 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
return archiveable_hash_ids
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE file_inbox ( hash_id INTEGER PRIMARY KEY );' )
self._Execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'file_inbox',
'files_info'
]
return expected_table_names
def GetMime( self, hash_id: int ) -> int:
result = self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

View File

@ -5,24 +5,24 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.client import ClientConstants as CC
from hydrus.client import ClientSearch
from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
def GenerateFilesTableNames( service_id: int ) -> typing.Tuple[ str, str, str, str ]:
suffix = str( service_id )
current_files_table_name = 'current_files_{}'.format( suffix )
current_files_table_name = 'main.current_files_{}'.format( suffix )
deleted_files_table_name = 'deleted_files_{}'.format( suffix )
deleted_files_table_name = 'main.deleted_files_{}'.format( suffix )
pending_files_table_name = 'pending_files_{}'.format( suffix )
pending_files_table_name = 'main.pending_files_{}'.format( suffix )
petitioned_files_table_name = 'petitioned_files_{}'.format( suffix )
petitioned_files_table_name = 'main.petitioned_files_{}'.format( suffix )
return ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name )
@ -85,23 +85,62 @@ class DBLocationSearchContext( object ):
class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_texts: ClientDBMaster.ClientDBMasterTexts ):
self.modules_services = modules_services
self.modules_texts = modules_texts
HydrusDBModule.HydrusDBModule.__init__( self, 'client files storage', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client files storage', cursor )
self.temp_file_storage_table_name = None
def _GetInitialIndexGenerationTuples( self ):
index_generation_tuples = []
return index_generation_tuples
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'main.local_file_deletion_reasons' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );', 400 )
}
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
index_generation_dict = {}
index_generation_dict[ current_files_table_name ] = [
( [ 'timestamp' ], False, 447 )
]
index_generation_dict[ deleted_files_table_name ] = [
( [ 'timestamp' ], False, 447 ),
( [ 'original_timestamp' ], False, 447 )
]
index_generation_dict[ petitioned_files_table_name ] = [
( [ 'reason_id' ], False, 447 )
]
return index_generation_dict
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
return {
current_files_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );', 447 ),
deleted_files_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );', 447 ),
pending_files_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 447 ),
petitioned_files_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );', 447 )
}
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
return self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
def AddFiles( self, service_id, insert_rows ):
@ -194,11 +233,6 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return service_ids_to_nums_cleared
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
def DeletePending( self, service_id: int ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
@ -309,19 +343,19 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
def GenerateFilesTables( self, service_id: int ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
table_generation_dict = self._GetServiceTableGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
self._CreateIndex( current_files_table_name, [ 'timestamp' ] )
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
self._CreateIndex( deleted_files_table_name, [ 'timestamp' ] )
self._CreateIndex( deleted_files_table_name, [ 'original_timestamp' ] )
index_generation_dict = self._GetServiceIndexGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
self._CreateIndex( petitioned_files_table_name, [ 'reason_id' ] )
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
def GetAPendingHashId( self, service_id ):
@ -552,15 +586,6 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return db_location_search_context
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'local_file_deletion_reasons',
]
return expected_table_names
def GetHashIdsToCurrentServiceIds( self, temp_hash_ids_table_name ):
hash_ids_to_current_file_service_ids = collections.defaultdict( list )

View File

@ -6,26 +6,28 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusGlobals as HG
from hydrus.client import ClientThreading
from hydrus.client.db import ClientDBModule
class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, db_dir: str, db_filenames: typing.Collection[ str ] ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client db maintenance', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client db maintenance', cursor )
self._db_dir = db_dir
self._db_filenames = db_filenames
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
index_generation_tuples = []
return index_generation_tuples
return {
'main.last_shutdown_work_time' : ( 'CREATE TABLE {} ( last_shutdown_work_time INTEGER );', 400 ),
'main.analyze_timestamps' : ( 'CREATE TABLE {} ( name TEXT, num_rows INTEGER, timestamp INTEGER );', 400 ),
'main.vacuum_timestamps' : ( 'CREATE TABLE {} ( name TEXT, timestamp INTEGER );', 400 )
}
def _TableHasAtLeastRowCount( self, name, row_count ):
@@ -140,25 +142,6 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusData.GetNow() ) )
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE last_shutdown_work_time ( last_shutdown_work_time INTEGER );' )
self._Execute( 'CREATE TABLE analyze_timestamps ( name TEXT, num_rows INTEGER, timestamp INTEGER );' )
self._Execute( 'CREATE TABLE vacuum_timestamps ( name TEXT, timestamp INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'last_shutdown_work_time',
'analyze_timestamps',
'vacuum_timestamps'
]
return expected_table_names
def GetLastShutdownWorkTime( self ):
result = self._Execute( 'SELECT last_shutdown_work_time FROM last_shutdown_work_time;' ).fetchone()
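The version_added integers stored beside every create query are what let repair code decide whether a table should exist on a given database at all. A sketch of that gating, where get_expected_table_names is a hypothetical helper:
def get_expected_table_names( table_generation_dict, db_version ):
    
    # a table added in a later version than this database cannot be 'missing'
    return {
        table_name
        for ( table_name, ( create_query, version_added ) ) in table_generation_dict.items()
        if version_added <= db_version
    }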

View File

@@ -2,8 +2,8 @@ import sqlite3
import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusDBModule
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
def GenerateMappingsTableNames( service_id: int ) -> typing.Tuple[ str, str, str, str ]:
@@ -20,32 +20,55 @@ def GenerateMappingsTableNames( service_id: int ) -> typing.Tuple[ str, str, str
return ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name )
class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
class ClientDBMappingsStorage( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices ):
self.modules_services = modules_services
HydrusDBModule.HydrusDBModule.__init__( self, 'client mappings storage', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client mappings storage', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
index_generation_tuples = []
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
return index_generation_tuples
index_generation_dict = {}
index_generation_dict[ current_mappings_table_name ] = [
( [ 'hash_id', 'tag_id' ], True, 400 )
]
index_generation_dict[ deleted_mappings_table_name ] = [
( [ 'hash_id', 'tag_id' ], True, 400 )
]
index_generation_dict[ pending_mappings_table_name ] = [
( [ 'hash_id', 'tag_id' ], True, 400 )
]
index_generation_dict[ petitioned_mappings_table_name ] = [
( [ 'hash_id', 'tag_id' ], True, 400 )
]
return index_generation_dict
def CreateInitialTables( self ):
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
pass
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
return {
current_mappings_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;', 400 ),
deleted_mappings_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;', 400 ),
pending_mappings_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;', 400 ),
petitioned_mappings_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;', 400 )
}
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
expected_table_names = []
return expected_table_names
return self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
def ClearMappingsTables( self, service_id: int ):
@@ -70,19 +93,19 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
def GenerateMappingsTables( self, service_id: int ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
table_generation_dict = self._GetServiceTableGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( current_mappings_table_name ) )
self._CreateIndex( current_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( deleted_mappings_table_name ) )
self._CreateIndex( deleted_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
index_generation_dict = self._GetServiceIndexGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( pending_mappings_table_name ) )
self._CreateIndex( pending_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( petitioned_mappings_table_name ) )
self._CreateIndex( petitioned_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
def GetCurrentFilesCount( self, service_id: int ) -> int:
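_FlattenIndexGenerationDict appears throughout this commit, but its body is not shown in this diff. One plausible implementation, inferred from the shapes of its input (a dict of table name to index rows) and its output (the tuples the _CreateIndex loop unpacks):
def _FlattenIndexGenerationDict( self, index_generation_dict ):
    
    return [
        ( table_name, columns, unique, version_added )
        for ( table_name, index_rows ) in index_generation_dict.items()
        for ( columns, unique, version_added ) in index_rows
    ]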

View File

@@ -4,31 +4,48 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusTags
from hydrus.client.db import ClientDBModule
from hydrus.client.networking import ClientNetworkingDomain
class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterHashes( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client hashes master', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client hashes master', cursor )
self._hash_ids_to_hashes_cache = {}
def _GetInitialIndexGenerationTuples( self ):
def _GetCriticalTableNames( self ) -> typing.Collection[ str ]:
index_generation_tuples = []
return {
'external_master.hashes'
}
index_generation_tuples.append( ( 'external_master.local_hashes', [ 'md5' ], False ) )
index_generation_tuples.append( ( 'external_master.local_hashes', [ 'sha1' ], False ) )
index_generation_tuples.append( ( 'external_master.local_hashes', [ 'sha512' ], False ) )
def _GetInitialIndexGenerationDict( self ) -> dict:
return index_generation_tuples
index_generation_dict = {}
index_generation_dict[ 'external_master.local_hashes' ] = [
( [ 'md5' ], False, 400 ),
( [ 'sha1' ], False, 400 ),
( [ 'sha512' ], False, 400 )
]
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'external_master.hashes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );', 400 ),
'external_master.local_hashes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );', 400 )
}
def _PopulateHashIdsToHashesCache( self, hash_ids, exception_on_error = False ):
@@ -98,23 +115,6 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_master.hashes',
'external_master.local_hashes'
]
return expected_table_names
def GetExtraHash( self, hash_type, hash_id ) -> bytes:
result = self._Execute( 'SELECT {} FROM local_hashes WHERE hash_id = ?;'.format( hash_type ), ( hash_id, ) ).fetchone()
@@ -312,41 +312,29 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
self._Execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterTexts( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client texts master', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client texts master', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
index_generation_tuples = []
return index_generation_tuples
return {
'external_master.labels' : ( 'CREATE TABLE IF NOT EXISTS {} ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );', 400 ),
'external_master.notes' : ( 'CREATE TABLE IF NOT EXISTS {} ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );', 400 ),
'external_master.texts' : ( 'CREATE TABLE IF NOT EXISTS {} ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );', 400 ),
'external_caches.notes_fts4' : ( 'CREATE VIRTUAL TABLE IF NOT EXISTS {} USING fts4( note );', 400 )
}
def CreateInitialTables( self ):
def _RepairRepopulateTables( self, repopulate_table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.labels ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.notes ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.texts ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );' )
self._Execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS external_caches.notes_fts4 USING fts4( note );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_master.labels',
'external_master.notes',
'external_master.texts',
'external_caches.notes_fts4'
]
return expected_table_names
if 'external_caches.notes_fts4' in repopulate_table_names:
self._Execute( 'REPLACE INTO notes_fts4 ( docid, note ) SELECT note_id, note FROM notes;' )
def GetLabelId( self, label ):
@@ -424,25 +412,45 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
return text_id
class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterTags( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client tags master', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client tags master', cursor )
self.null_namespace_id = None
self._tag_ids_to_tags_cache = {}
def _GetInitialIndexGenerationTuples( self ):
def _GetCriticalTableNames( self ) -> typing.Collection[ str ]:
index_generation_tuples = []
return {
'external_master.namespaces',
'external_master.subtags',
'external_master.tags'
}
index_generation_tuples.append( ( 'external_master.tags', [ 'subtag_id' ], False ) )
index_generation_tuples.append( ( 'external_master.tags', [ 'namespace_id', 'subtag_id' ], True ) )
def _GetInitialIndexGenerationDict( self ) -> dict:
return index_generation_tuples
index_generation_dict = {}
index_generation_dict[ 'external_master.tags' ] = [
( [ 'subtag_id' ], False, 400 ),
( [ 'namespace_id', 'subtag_id' ], True, 412 )
]
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'external_master.namespaces' : ( 'CREATE TABLE IF NOT EXISTS {} ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );', 400 ),
'external_master.subtags' : ( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );', 400 ),
'external_master.tags' : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );', 400 )
}
def _PopulateTagIdsToTagsCache( self, tag_ids ):
@@ -502,26 +510,6 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.namespaces ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.subtags ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_master.namespaces',
'external_master.subtags',
'external_master.tags'
]
return expected_table_names
def GetNamespaceId( self, namespace ) -> int:
if namespace == '':
@@ -738,37 +726,30 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterURLs( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client urls master', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client urls master', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialIndexGenerationDict( self ) -> dict:
index_generation_tuples = []
index_generation_dict = {}
index_generation_tuples.append( ( 'external_master.urls', [ 'domain_id' ], False ) )
return index_generation_tuples
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.url_domains ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.urls ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'external_master.url_domains',
'external_master.urls'
index_generation_dict[ 'external_master.urls' ] = [
( [ 'domain_id' ], False, 400 )
]
return expected_table_names
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'external_master.url_domains' : ( 'CREATE TABLE IF NOT EXISTS {} ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );', 400 ),
'external_master.urls' : ( 'CREATE TABLE IF NOT EXISTS {} ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );', 400 )
}
def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
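_GetCriticalTableNames separates tables the client cannot boot without (like the hashes master) from tables it can recreate and, where possible, repopulate. A sketch of that split, where get_missing_table_names is a hypothetical helper:
def check_module_tables( module ):
    
    # hypothetical: compares the module's expected names against sqlite_master
    missing = get_missing_table_names( module )
    
    critical_missing = missing.intersection( module._GetCriticalTableNames() )
    
    if len( critical_missing ) > 0:
        
        raise Exception( 'Critical boot tables are missing: {}'.format( sorted( critical_missing ) ) )
        
    
    return missing # recoverable--recreate, then repopulate where the module knows how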

View File

@@ -0,0 +1,47 @@
import os
import sqlite3
import typing
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
def BlockingSafeShowMessage( message ):
from qtpy import QtWidgets as QW
HG.client_controller.CallBlockingToQt( HG.client_controller.app, QW.QMessageBox.warning, None, 'Warning', message )
class ClientDBModule( HydrusDBModule.HydrusDBModule ):
def _PresentMissingIndicesWarningToUser( self, index_names ):
index_names = sorted( index_names )
HydrusData.DebugPrint( 'The "{}" database module is missing the following indices:'.format( self.name ) )
HydrusData.DebugPrint( os.linesep.join( index_names ) )
message = 'Your "{}" database module was missing {} indices. More information has been written to the log. This may or may not be a big deal, and on its own is completely recoverable. If you do not have further problems, hydev does not need to know about it. The indices will be regenerated once you proceed--it may take some time.'.format( self.name, len( index_names ) )
BlockingSafeShowMessage( message )
HG.client_controller.frame_splash_status.SetText( 'recreating indices' )
def _PresentMissingTablesWarningToUser( self, table_names ):
table_names = sorted( table_names )
HydrusData.DebugPrint( 'The "{}" database module is missing the following tables:'.format( self.name ) )
HydrusData.DebugPrint( os.linesep.join( table_names ) )
message = 'Your "{}" database module was missing {} tables. More information has been written to the log. This is a serious problem and possibly due to hard drive damage. You should check "install_dir/db/help my db is broke.txt" for background reading. If you have a functional backup, kill the hydrus process now and rollback to that backup.'.format( self.name, len( table_names ) )
message += os.linesep * 2
message += 'Otherwise, proceed and the missing tables will be recreated. Your client should be able to boot, but full automatic recovery may not be possible and you may encounter further errors. A database maintenance task or repository processing reset may be able to fix you up once the client boots. Hydev will be able to help if you run into trouble.'
BlockingSafeShowMessage( message )
HG.client_controller.frame_splash_status.SetText( 'recreating tables' )
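Discovering which expected tables actually exist is plain SQLite: every attached schema exposes its own sqlite_master. A self-contained sketch (the wrapper function is illustrative; only the query itself is standard SQLite):
import sqlite3

def get_existing_table_names( cursor: sqlite3.Cursor, schema: str = 'main' ) -> set:
    
    rows = cursor.execute( 'SELECT name FROM {}.sqlite_master WHERE type = ?;'.format( schema ), ( 'table', ) ).fetchall()
    
    return { '{}.{}'.format( schema, name ) for ( name, ) in rows }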

View File

@@ -6,8 +6,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core.networking import HydrusNetwork
@@ -16,6 +15,7 @@ from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBFilesMaintenance
from hydrus.client.db import ClientDBFilesMetadataBasic
from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
def GenerateRepositoryDefinitionTableNames( service_id: int ):
@@ -47,12 +47,12 @@ def GenerateRepositoryUpdatesTableNames( service_id: int ):
return ( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name )
class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
class ClientDBRepositories( ClientDBModule.ClientDBModule ):
def __init__(
self,
cursor: sqlite3.Cursor,
cursor_transaction_wrapper: HydrusDB.DBCursorTransactionWrapper,
cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper,
modules_services: ClientDBServices.ClientDBMasterServices,
modules_files_storage: ClientDBFilesStorage.ClientDBFilesStorage,
modules_files_metadata_basic: ClientDBFilesMetadataBasic.ClientDBFilesMetadataBasic,
@@ -63,7 +63,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
# since we'll mostly be talking about hashes and tags we don't have locally, I think we shouldn't use the local caches
HydrusDBModule.HydrusDBModule.__init__( self, 'client repositories', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client repositories', cursor )
self._cursor_transaction_wrapper = cursor_transaction_wrapper
self.modules_services = modules_services
@@ -96,11 +96,40 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
def _GetInitialIndexGenerationTuples( self ):
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
index_generation_tuples = []
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
return index_generation_tuples
index_generation_dict = {}
index_generation_dict[ repository_updates_table_name ] = [
( [ 'hash_id' ], True, 449 )
]
index_generation_dict[ repository_updates_processed_table_name ] = [
( [ 'content_type' ], False, 449 )
]
return index_generation_dict
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
return {
repository_updates_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );', 449 ),
repository_unregistered_updates_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 449 ),
repository_updates_processed_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );', 449 ),
hash_id_map_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );', 400 ),
tag_id_map_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );', 400 )
}
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
return self.modules_services.GetServiceIds( HC.REPOSITORIES )
def _HandleCriticalRepositoryDefinitionError( self, service_id, name, bad_ids ):
@@ -216,11 +245,6 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
self._RegisterUpdates( service_id )
def CreateInitialTables( self ):
pass
def DropRepositoryTables( self, service_id: int ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
@@ -247,28 +271,19 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
def GenerateRepositoryTables( self, service_id: int ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
table_generation_dict = self._GetServiceTableGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) )
self._CreateIndex( repository_updates_table_name, [ 'hash_id' ] )
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) )
index_generation_dict = self._GetServiceIndexGenerationDict( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) )
self._CreateIndex( repository_updates_processed_table_name, [ 'content_type' ] )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );'.format( hash_id_map_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );'.format( tag_id_map_table_name ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
]
return expected_table_names
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
def GetRepositoryProgress( self, service_key: bytes ):
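Repositories get a fresh set of these tables per service, so the names are generated from the service id. A hypothetical illustration of that naming scheme (the real GenerateRepositoryUpdatesTableNames may use different strings):
def GenerateRepositoryUpdatesTableNames( service_id: int ):
    
    suffix = str( service_id )
    
    repository_updates_table_name = 'repository_updates_{}'.format( suffix )
    repository_unregistered_updates_table_name = 'repository_unregistered_updates_{}'.format( suffix )
    repository_updates_processed_table_name = 'repository_updates_processed_{}'.format( suffix )
    
    return ( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name )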

View File

@@ -6,13 +6,13 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
from hydrus.client import ClientConstants as CC
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
YAML_DUMP_ID_SINGLE = 0
@@ -142,32 +142,35 @@ class MaintenanceTracker( object ):
self._total_new_hashed_serialisable_bytes += num_bytes
class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
class ClientDBSerialisable( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, db_dir, cursor_transaction_wrapper: HydrusDB.DBCursorTransactionWrapper, modules_services: ClientDBServices.ClientDBMasterServices ):
def __init__( self, cursor: sqlite3.Cursor, db_dir, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper, modules_services: ClientDBServices.ClientDBMasterServices ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client serialisable', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client serialisable', cursor )
self._db_dir = db_dir
self._cursor_transaction_wrapper = cursor_transaction_wrapper
self.modules_services = modules_services
def _GetInitialIndexGenerationTuples( self ):
def _GetCriticalTableNames( self ) -> typing.Collection[ str ]:
index_generation_tuples = []
return index_generation_tuples
return {
'main.json_dict',
'main.json_dumps',
'main.yaml_dumps'
}
def CreateInitialTables( self ):
def _GetInitialTableGenerationDict( self ) -> dict:
self._Execute( 'CREATE TABLE json_dict ( name TEXT PRIMARY KEY, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE json_dumps ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )
self._Execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
return {
'main.json_dict' : ( 'CREATE TABLE {} ( name TEXT PRIMARY KEY, dump BLOB_BYTES );', 400 ),
'main.json_dumps' : ( 'CREATE TABLE {} ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );', 400 ),
'main.json_dumps_named' : ( 'CREATE TABLE {} ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );', 400 ),
'main.json_dumps_hashed' : ( 'CREATE TABLE {} ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );', 442 ),
'main.yaml_dumps' : ( 'CREATE TABLE {} ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );', 400 )
}
def DeleteJSONDump( self, dump_type ):
@@ -231,18 +234,6 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
return all_expected_hashes
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'json_dict',
'json_dumps',
'json_dumps_named',
'yaml_dumps'
]
return expected_table_names
def GetHashedJSONDumps( self, hashes ):
shown_missing_dump_message = False
@@ -563,6 +554,19 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
return result is not None
def HaveHashedJSONDumps( self, hashes ):
for hash in hashes:
if not self.HaveHashedJSONDump( hash ):
return False
return True
def MaintainHashedStorage( self, force_start = False ):
maintenance_tracker = MaintenanceTracker.instance()
@@ -668,9 +672,9 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if dump_type == HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER:
if not obj.HasAllPageData():
if not obj.HasAllDirtyPageData():
raise Exception( 'A session with name "{}" was set to save, but it did not have all its page data!'.format( dump_name ) )
raise Exception( 'A session with name "{}" was set to save, but it did not have all its dirty page data!'.format( dump_name ) )
hashes_to_page_data = obj.GetHashesToPageData()
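The json_dumps_hashed table is what makes the 'only save changed pages' optimisation cheap: page blobs are stored once, keyed by content hash, and sessions just reference the hashes. A sketch of the write side, assuming a SHA-256 key (the actual hash algorithm is not shown in this diff):
import hashlib
import sqlite3

def save_hashed_dump( execute, dump_type: int, version: int, dump_bytes: bytes ) -> bytes:
    
    hash = hashlib.sha256( dump_bytes ).digest()
    
    execute( 'REPLACE INTO json_dumps_hashed ( hash, dump_type, version, dump ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( hash ), dump_type, version, sqlite3.Binary( dump_bytes ) ) )
    
    return hash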

View File

@@ -3,19 +3,19 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusSerialisable
from hydrus.client import ClientConstants as CC
from hydrus.client import ClientSearch
from hydrus.client import ClientServices
from hydrus.client.db import ClientDBModule
class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor ):
HydrusDBModule.HydrusDBModule.__init__( self, 'client services master', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client services master', cursor )
self._service_ids_to_services = {}
self._service_keys_to_service_ids = {}
@@ -29,11 +29,18 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
self._InitCaches()
def _GetInitialIndexGenerationTuples( self ):
def _GetCriticalTableNames( self ) -> typing.Collection[ str ]:
index_generation_tuples = []
return {
'main.services'
}
return index_generation_tuples
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'main.services' : ( 'CREATE TABLE {} ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );', 400 )
}
def _InitCaches( self ):
@@ -61,20 +68,6 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'services'
]
return expected_table_names
def AddService( self, service_key, service_type, name, dictionary: HydrusSerialisable.SerialisableBase ) -> int:
dictionary_string = dictionary.DumpToString()

View File

@@ -5,22 +5,22 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusGlobals as HG
from hydrus.client import ClientThreading
from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_files_storage: ClientDBFilesStorage.ClientDBFilesStorage ):
self.modules_services = modules_services
self.modules_files_storage = modules_files_storage
HydrusDBModule.HydrusDBModule.__init__( self, 'client similar files', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client similar files', cursor )
def _AddLeaf( self, phash_id, phash ):
@@ -193,14 +193,30 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
self._ExecuteMany( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', insert_rows )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialIndexGenerationDict( self ) -> dict:
index_generation_tuples = []
index_generation_dict = {}
index_generation_tuples.append( ( 'external_master.shape_perceptual_hash_map', [ 'hash_id' ], False ) )
index_generation_tuples.append( ( 'external_caches.shape_vptree', [ 'parent_id' ], False ) )
index_generation_dict[ 'external_master.shape_perceptual_hash_map' ] = [
( [ 'hash_id' ], False, 451 )
]
return index_generation_tuples
index_generation_dict[ 'external_caches.shape_vptree' ] = [
( [ 'parent_id' ], False, 400 )
]
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'external_master.shape_perceptual_hashes' : ( 'CREATE TABLE IF NOT EXISTS {} ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );', 451 ),
'external_master.shape_perceptual_hash_map' : ( 'CREATE TABLE IF NOT EXISTS {} ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );', 451 ),
'external_caches.shape_vptree' : ( 'CREATE TABLE IF NOT EXISTS {} ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );', 400 ),
'external_caches.shape_maintenance_branch_regen' : ( 'CREATE TABLE IF NOT EXISTS {} ( phash_id INTEGER PRIMARY KEY );', 400 ),
'main.shape_search_cache' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );', 451 )
}
def _GetPHashId( self, phash ):
@@ -383,6 +399,14 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def _RepairRepopulateTables( self, repopulate_table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
if 'external_caches.shape_vptree' in repopulate_table_names or 'external_caches.shape_maintenance_branch_regen' in repopulate_table_names:
self.RegenerateTree()
def AssociatePHashes( self, hash_id, phashes ):
phash_ids = set()
@@ -404,19 +428,6 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
return phash_ids
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
def DisassociatePHashes( self, hash_id, phash_ids ):
self._ExecuteMany( 'DELETE FROM shape_perceptual_hash_map WHERE phash_id = ? AND hash_id = ?;', ( ( phash_id, hash_id ) for phash_id in phash_ids ) )
@@ -435,14 +446,6 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
return result is not None
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = []
return expected_table_names
def GetMaintenanceStatus( self ):
searched_distances_to_count = collections.Counter( dict( self._Execute( 'SELECT searched_distance, COUNT( * ) FROM shape_search_cache GROUP BY searched_distance;' ) ) )

View File

@@ -5,9 +5,10 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
from hydrus.client.db import ClientDBTagSiblings
from hydrus.client.metadata import ClientTags
@@ -33,7 +34,7 @@ def GenerateTagParentsLookupCacheTableNames( service_id ):
return ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name )
class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
class ClientDBTagParents( ClientDBModule.ClientDBModule ):
def __init__(
self,
@@ -52,17 +53,85 @@ class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
self._service_ids_to_applicable_service_ids = None
self._service_ids_to_interested_service_ids = None
HydrusDBModule.HydrusDBModule.__init__( self, 'client tag parents', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client tag parents', cursor )
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialIndexGenerationDict( self ) -> dict:
index_generation_tuples = [
( 'tag_parents', [ 'service_id', 'parent_tag_id' ], False ),
( 'tag_parent_petitions', [ 'service_id', 'parent_tag_id' ], False ),
index_generation_dict = {}
index_generation_dict[ 'tag_parents' ] = [
( [ 'service_id', 'parent_tag_id' ], False, 420 )
]
return index_generation_tuples
index_generation_dict[ 'tag_parent_petitions' ] = [
( [ 'service_id', 'parent_tag_id' ], False, 420 )
]
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'main.tag_parents' : ( 'CREATE TABLE {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
'main.tag_parent_petitions' : ( 'CREATE TABLE {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
'main.tag_parent_application' : ( 'CREATE TABLE {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
}
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id )
index_generation_dict = {}
index_generation_dict[ cache_actual_tag_parents_lookup_table_name ] = [
( [ 'ancestor_tag_id' ], False, 414 )
]
index_generation_dict[ cache_ideal_tag_parents_lookup_table_name ] = [
( [ 'ancestor_tag_id' ], False, 414 )
]
return index_generation_dict
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id )
return {
cache_actual_tag_parents_lookup_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );', 414 ),
cache_ideal_tag_parents_lookup_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );', 414 )
}
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
return self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
def _RepairRepopulateTables( self, repopulate_table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
for service_id in self._GetServiceIdsWeGenerateDynamicTablesFor():
table_generation_dict = self._GetServiceTableGenerationDict( service_id )
this_service_table_names = set( table_generation_dict.keys() )
this_service_needs_repopulation = len( this_service_table_names.intersection( repopulate_table_names ) ) > 0
if this_service_needs_repopulation:
self._service_ids_to_applicable_service_ids = None
self._service_ids_to_interested_service_ids = None
self.Regen( ( service_id, ) )
cursor_transaction_wrapper.CommitAndBegin()
def AddTagParents( self, service_id, pairs ):
@ -85,14 +154,6 @@ class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
self._Execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
self._Execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
def DeleteTagParents( self, service_id, pairs ):
self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) )
@@ -155,13 +216,19 @@ class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
def Generate( self, tag_service_id ):
( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id )
table_generation_dict = self._GetServiceTableGenerationDict( tag_service_id )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) )
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
index_generation_dict = self._GetServiceIndexGenerationDict( tag_service_id )
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
self._Execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( tag_service_id, 0, tag_service_id ) )
@@ -364,17 +431,6 @@ class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
return descendant_ids
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'tag_parents',
'tag_parent_petitions',
'tag_parent_application'
]
return expected_table_names
def GetInterestedServiceIds( self, tag_service_id ):
if self._service_ids_to_interested_service_ids is None:
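With per-service dicts in place, a module's full expected schema is its initial tables plus the dynamic tables for every service it owns. An assumed sketch of that union, using only the methods visible in this commit:
def get_all_expected_table_names( module ) -> set:
    
    expected = set( module._GetInitialTableGenerationDict().keys() )
    
    for service_id in module._GetServiceIdsWeGenerateDynamicTablesFor():
        
        expected.update( module._GetServiceTableGenerationDict( service_id ).keys() )
        
    
    return expected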

View File

@@ -5,11 +5,12 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule
from hydrus.core import HydrusDBBase
from hydrus.client import ClientConstants as CC
from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBModule
from hydrus.client.db import ClientDBServices
from hydrus.client.metadata import ClientTags
from hydrus.client.metadata import ClientTagsHandling
@ -34,7 +35,7 @@ def GenerateTagSiblingsLookupCacheTableNames( service_id ):
return ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name )
class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
class ClientDBTagSiblings( ClientDBModule.ClientDBModule ):
def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_tags: ClientDBMaster.ClientDBMasterTags, modules_tags_local_cache: ClientDBDefinitionsCache.ClientDBCacheLocalTags ):
@@ -47,7 +48,7 @@ class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
self._service_ids_to_applicable_service_ids = None
self._service_ids_to_interested_service_ids = None
HydrusDBModule.HydrusDBModule.__init__( self, 'client tag siblings', cursor )
ClientDBModule.ClientDBModule.__init__( self, 'client tag siblings', cursor )
def _GenerateApplicationDicts( self ):
@@ -69,14 +70,82 @@ class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
def _GetInitialIndexGenerationTuples( self ):
def _GetInitialIndexGenerationDict( self ) -> dict:
index_generation_tuples = [
( 'tag_siblings', [ 'service_id', 'good_tag_id' ], False ),
( 'tag_sibling_petitions', [ 'service_id', 'good_tag_id' ], False ),
index_generation_dict = {}
index_generation_dict[ 'tag_siblings' ] = [
( [ 'service_id', 'good_tag_id' ], False, 420 )
]
return index_generation_tuples
index_generation_dict[ 'tag_sibling_petitions' ] = [
( [ 'service_id', 'good_tag_id' ], False, 420 )
]
return index_generation_dict
def _GetInitialTableGenerationDict( self ) -> dict:
return {
'main.tag_siblings' : ( 'CREATE TABLE {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
'main.tag_sibling_petitions' : ( 'CREATE TABLE {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
'main.tag_sibling_application' : ( 'CREATE TABLE {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
}
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id )
index_generation_dict = {}
index_generation_dict[ cache_actual_tag_siblings_lookup_table_name ] = [
( [ 'ideal_tag_id' ], False, 414 )
]
index_generation_dict[ cache_ideal_tag_siblings_lookup_table_name ] = [
( [ 'ideal_tag_id' ], False, 414 )
]
return index_generation_dict
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id )
return {
cache_actual_tag_siblings_lookup_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );', 414 ),
cache_ideal_tag_siblings_lookup_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );', 414 )
}
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
return self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
def _RepairRepopulateTables( self, repopulate_table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
for service_id in self._GetServiceIdsWeGenerateDynamicTablesFor():
table_generation_dict = self._GetServiceTableGenerationDict( service_id )
this_service_table_names = set( table_generation_dict.keys() )
this_service_needs_repopulation = len( this_service_table_names.intersection( repopulate_table_names ) ) > 0
if this_service_needs_repopulation:
self._service_ids_to_applicable_service_ids = None
self._service_ids_to_interested_service_ids = None
self.Regen( ( service_id, ) )
cursor_transaction_wrapper.CommitAndBegin()
def AddTagSiblings( self, service_id, pairs ):
@@ -99,14 +168,6 @@ class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._Execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
self._Execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
self._Execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
def DeleteTagSiblings( self, service_id, pairs ):
self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) )
@@ -195,16 +256,19 @@ class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
def Generate( self, tag_service_id ):
self._service_ids_to_applicable_service_ids = None
self._service_ids_to_interested_service_ids = None
table_generation_dict = self._GetServiceTableGenerationDict( tag_service_id )
( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) )
index_generation_dict = self._GetServiceIndexGenerationDict( tag_service_id )
self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
self._CreateIndex( cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
self._Execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( tag_service_id, 0, tag_service_id ) )
@@ -335,17 +399,6 @@ class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
expected_table_names = [
'tag_siblings',
'tag_sibling_petitions',
'tag_sibling_application'
]
return expected_table_names
def GetIdeal( self, display_type, tag_service_id, tag_id ) -> int:
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )

View File

@@ -577,6 +577,8 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
self._UpdateSystemTrayIcon( currently_booting = True )
self._notebook.freshSessionLoaded.connect( self.ReportFreshSessionLoaded )
self._controller.CallLaterQtSafe( self, 0.5, 'initialise session', self._InitialiseSession ) # do this in callafter as some pages want to talk to controller.gui, which doesn't exist yet!
@@ -850,11 +852,16 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
if result == QW.QDialog.Accepted:
session = self._notebook.GetCurrentGUISession( 'last session' )
only_changed_page_data = True
about_to_save = True
session = self._notebook.GetCurrentGUISession( CC.LAST_SESSION_SESSION_NAME, only_changed_page_data, about_to_save )
session = self._FleshOutSessionWithCleanDataIfNeeded( self._notebook, CC.LAST_SESSION_SESSION_NAME, session )
self._controller.SaveGUISession( session )
session.SetName( 'exit session' )
session.SetName( CC.EXIT_SESSION_SESSION_NAME )
self._controller.SaveGUISession( session )
@@ -1735,6 +1742,23 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
def _FleshOutSessionWithCleanDataIfNeeded( self, notebook: ClientGUIPages.PagesNotebook, name: str, session: ClientGUISession.GUISessionContainer ):
unchanged_page_data_hashes = session.GetUnchangedPageDataHashes()
have_hashed_serialised_objects = self._controller.Read( 'have_hashed_serialised_objects', unchanged_page_data_hashes )
if not have_hashed_serialised_objects:
only_changed_page_data = False
about_to_save = True
session = notebook.GetCurrentGUISession( name, only_changed_page_data, about_to_save )
return session
def _FlipClipboardWatcher( self, option_name ):
self._controller.new_options.FlipBoolean( option_name )
@@ -2274,7 +2298,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
self._controller.CallLaterQtSafe( self, 0.25, 'load a blank page', do_it, default_gui_session, load_a_blank_page )
self._controller.CallLaterQtSafe( self, 0.25, 'load initial session', do_it, default_gui_session, load_a_blank_page )
def _LockServer( self, service_key, lock ):
@@ -4212,7 +4236,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
def qt_session_gubbins():
self.ProposeSaveGUISession( 'last session' )
self.ProposeSaveGUISession( CC.LAST_SESSION_SESSION_NAME )
page = self._notebook.GetPageFromPageKey( bytes.fromhex( destination_page_key_hex ) )
@@ -4220,7 +4244,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
self._notebook.CloseCurrentPage()
self.ProposeSaveGUISession( 'last session' )
self.ProposeSaveGUISession( CC.LAST_SESSION_SESSION_NAME )
page = self._notebook.NewPageQuery( CC.COMBINED_LOCAL_FILE_SERVICE_KEY )
@@ -4310,7 +4334,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
t += 0.25
HG.client_controller.CallLaterQtSafe( self, t, 'test job', self.ProposeSaveGUISession, 'last session' )
HG.client_controller.CallLaterQtSafe( self, t, 'test job', self.ProposeSaveGUISession, CC.LAST_SESSION_SESSION_NAME )
return page_of_pages
@@ -5227,9 +5251,14 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
else:
if HC.options[ 'default_gui_session' ] == 'last session':
if HC.options[ 'default_gui_session' ] == CC.LAST_SESSION_SESSION_NAME:
session = self._notebook.GetCurrentGUISession( 'last session' )
only_changed_page_data = True
about_to_save = True
session = self._notebook.GetCurrentGUISession( CC.LAST_SESSION_SESSION_NAME, only_changed_page_data, about_to_save )
session = self._FleshOutSessionWithCleanDataIfNeeded( self._notebook, CC.LAST_SESSION_SESSION_NAME, session )
callable = self.AutoSaveLastSession
@@ -6089,7 +6118,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
ClientGUIMenus.AppendMenuItem( gui_actions, 'make a parentless text ctrl dialog', 'Make a parentless text control in a dialog to test some character event catching.', self._DebugMakeParentlessTextCtrl )
ClientGUIMenus.AppendMenuItem( gui_actions, 'reset multi-column list settings to default', 'Reset all multi-column list widths and other display settings to default.', self._DebugResetColumnListManager )
ClientGUIMenus.AppendMenuItem( gui_actions, 'force a main gui layout now', 'Tell the gui to relayout--useful to test some gui bootup layout issues.', self.adjustSize )
ClientGUIMenus.AppendMenuItem( gui_actions, 'save \'last session\' gui session', 'Make an immediate save of the \'last session\' gui session. Mostly for testing crashes, where last session is not saved correctly.', self.ProposeSaveGUISession, 'last session' )
ClientGUIMenus.AppendMenuItem( gui_actions, 'save \'last session\' gui session', 'Make an immediate save of the \'last session\' gui session. Mostly for testing crashes, where last session is not saved correctly.', self.ProposeSaveGUISession, CC.LAST_SESSION_SESSION_NAME )
ClientGUIMenus.AppendMenu( debug, gui_actions, 'gui actions' )
@@ -7089,7 +7118,12 @@ Try to keep this below 10 million!'''
#
session = notebook.GetCurrentGUISession( name )
only_changed_page_data = True
about_to_save = True
session = notebook.GetCurrentGUISession( name, only_changed_page_data, about_to_save )
self._FleshOutSessionWithCleanDataIfNeeded( notebook, name, session )
self._controller.CallToThread( self._controller.SaveGUISession, session )
@@ -7352,6 +7386,14 @@ Try to keep this below 10 million!'''
def ReportFreshSessionLoaded( self, gui_session: ClientGUISession.GUISessionContainer ):
if gui_session.GetName() == CC.LAST_SESSION_SESSION_NAME:
self._controller.ReportLastSessionLoaded( gui_session )
def ReplaceMenu( self, name, menu_or_none, label ):
if menu_or_none is not None:
@@ -7501,11 +7543,16 @@ Try to keep this below 10 million!'''
#
session = self._notebook.GetCurrentGUISession( 'last session' )
only_changed_page_data = True
about_to_save = True
session = self._notebook.GetCurrentGUISession( CC.LAST_SESSION_SESSION_NAME, only_changed_page_data, about_to_save )
session = self._FleshOutSessionWithCleanDataIfNeeded( self._notebook, CC.LAST_SESSION_SESSION_NAME, session )
self._controller.SaveGUISession( session )
session.SetName( 'exit session' )
session.SetName( CC.EXIT_SESSION_SESSION_NAME )
self._controller.SaveGUISession( session )
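Condensing the save path spread across the hunks above: a session is first captured with only its dirty page data, and the clean pages are recaptured only if their hashed blobs are not already in the database. The names here are from this commit; the glue function is illustrative:
def save_last_session( self, notebook ):
    
    only_changed_page_data = True
    about_to_save = True
    
    session = notebook.GetCurrentGUISession( CC.LAST_SESSION_SESSION_NAME, only_changed_page_data, about_to_save )
    
    # falls back to a full capture if any clean page blob is not already stored
    session = self._FleshOutSessionWithCleanDataIfNeeded( notebook, CC.LAST_SESSION_SESSION_NAME, session )
    
    self._controller.SaveGUISession( session )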

View File

@@ -103,6 +103,9 @@ def AppendMenuItem( menu, label, description, callable, *args, **kwargs ):
def AppendMenuLabel( menu, label, description = '' ):
original_label_text = label
label = SanitiseLabel( label )
if description is None:
description = ''
@ -123,7 +126,7 @@ def AppendMenuLabel( menu, label, description = '' ):
menu.addAction( menu_item )
BindMenuItem( menu_item, HG.client_controller.pub, 'clipboard', 'text', label )
BindMenuItem( menu_item, HG.client_controller.pub, 'clipboard', 'text', original_label_text )
return menu_item
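Why the clipboard copy now binds original_label_text, per the changelog note about ampersands in menu labels: Qt treats '&' in a menu label as a mnemonic marker, so the display text must escape it, but the clipboard should receive the raw text. The sanitiser below is a guessed stand-in for SanitiseLabel.

def sanitise_label( label ):

    return label.replace( '&', '&&' )

url = 'https://example.com/view?id=1&page=2'

display_label = sanitise_label( url )  # doubled '&&' renders as a single '&' in the menu
clipboard_text = url                   # the unescaped original goes to the clipboard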

View File

@ -367,19 +367,42 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):
permitted_reason_choices.append( ( default_reason, default_reason ) )
for s in HG.client_controller.new_options.GetStringList( 'advanced_file_deletion_reasons' ):
last_advanced_file_deletion_reason = HG.client_controller.new_options.GetNoneableString( 'last_advanced_file_deletion_reason' )
if last_advanced_file_deletion_reason is None:
selection_index = 0 # default, top row
else:
selection_index = None # text or custom
for ( i, s ) in enumerate( HG.client_controller.new_options.GetStringList( 'advanced_file_deletion_reasons' ) ):
permitted_reason_choices.append( ( s, s ) )
if last_advanced_file_deletion_reason is not None and s == last_advanced_file_deletion_reason:
selection_index = i + 1
permitted_reason_choices.append( ( 'custom', None ) )
self._reason_radio = ClientGUICommon.BetterRadioBox( self._reason_panel, choices = permitted_reason_choices, vertical = True )
self._reason_radio.Select( 0 )
self._custom_reason = QW.QLineEdit( self._reason_panel )
if selection_index is None:
selection_index = len( permitted_reason_choices ) - 1 # custom
self._custom_reason.setText( last_advanced_file_deletion_reason )
self._reason_radio.Select( selection_index )
#
( file_service_key, hashes, description ) = self._action_radio.GetValue()
@ -610,6 +633,8 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):
reason = self._GetReason()
save_reason = False
local_file_services = ( CC.LOCAL_FILE_SERVICE_KEY, )
if file_service_key in local_file_services:
@ -622,6 +647,8 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):
jobs = [ { file_service_key : [ content_update ] } for content_update in content_updates ]
save_reason = True
elif file_service_key == 'physical_delete':
chunks_of_hashes = HydrusData.SplitListIntoChunks( hashes, 64 )
@ -634,6 +661,8 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):
involves_physical_delete = True
save_reason = True
elif file_service_key == 'clear_delete':
chunks_of_hashes = list( HydrusData.SplitListIntoChunks( hashes, 64 ) ) # iterator, so list it to use it more than once, jej
@ -657,6 +686,20 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):
jobs = [ { file_service_key : content_updates } ]
if save_reason:
if self._reason_radio.GetCurrentIndex() <= 0:
last_advanced_file_deletion_reason = None
else:
last_advanced_file_deletion_reason = reason
HG.client_controller.new_options.SetNoneableString( 'last_advanced_file_deletion_reason', last_advanced_file_deletion_reason )
return ( involves_physical_delete, jobs )
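A sketch of the selection logic above, with invented names: a remembered reason of None means the 'default' top row, a reason matching one of the configured strings selects its own row, and any other text lands on the 'custom' row with the text prefilled.

def work_out_selection( reasons, last_reason ):

    if last_reason is None:

        return ( 0, '' )                           # 'default', top row

    for ( i, reason ) in enumerate( reasons ):

        if reason == last_reason:

            return ( i + 1, '' )                   # one of the listed reasons

    return ( len( reasons ) + 1, last_reason )     # 'custom' row, text prefilled

print( work_out_selection( [ 'low quality', 'dupe' ], 'dupe' ) )      # ( 2, '' )
print( work_out_selection( [ 'low quality', 'dupe' ], 'my reason' ) ) # ( 3, 'my reason' )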

View File

@ -372,7 +372,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
text = 'Enter strings such as "http://ip:port" or "http://user:pass@ip:port" to use for http and https traffic. It should take effect immediately on dialog ok.'
text += os.linesep * 2
text += 'no_proxy takes the form of comma-separated hosts/domains, just as in curl or the NO_PROXY environment variable. When http and/or https proxies are set, they will not be used for these.'
text += 'NO PROXY DOES NOT WORK UNLESS YOU HAVE A CUSTOM BUILD OF REQUESTS, SORRY! no_proxy takes the form of comma-separated hosts/domains, just as in curl or the NO_PROXY environment variable. When http and/or https proxies are set, they will not be used for these.'
text += os.linesep * 2
if ClientNetworkingSessions.SOCKS_PROXY_OK:
@ -1215,6 +1215,9 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
tt = 'In many places across the program (typically import status lists), the client will state a timestamp as "5 days ago". If you would prefer a standard ISO string, like "2018-03-01 12:40:23", check this.'
self._always_show_iso_time.setToolTip( tt )
self._human_bytes_sig_figs = QP.MakeQSpinBox( self._misc_panel, min = 1, max = 6 )
self._human_bytes_sig_figs.setToolTip( 'When the program presents a bytes size above 1KB, like 21.3KB or 4.11GB, how many total digits do we want in the number? 2 or 3 is best.' )
self._discord_dnd_fix = QW.QCheckBox( self._misc_panel )
self._discord_dnd_fix.setToolTip( 'This makes small file drag-and-drops a little laggier in exchange for discord support.' )
@ -1251,6 +1254,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._always_show_iso_time.setChecked( self._new_options.GetBoolean( 'always_show_iso_time' ) )
self._human_bytes_sig_figs.setValue( self._new_options.GetInteger( 'human_bytes_sig_figs' ) )
self._popup_message_character_width.setValue( self._new_options.GetInteger( 'popup_message_character_width' ) )
self._popup_message_force_min_width.setChecked( self._new_options.GetBoolean( 'popup_message_force_min_width' ) )
@ -1307,6 +1312,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows.append( ( 'BUGFIX: Discord file drag-and-drop fix (works for <=25, <200MB file DnDs): ', self._discord_dnd_fix ) )
rows.append( ( 'Discord drag-and-drop filename pattern: ', self._discord_dnd_filename_pattern ) )
rows.append( ( 'Export pattern shortcuts: ', ClientGUICommon.ExportPatternButton( self ) ) )
rows.append( ( 'EXPERIMENTAL: Bytes strings >1KB pseudo significant figures: ', self._human_bytes_sig_figs ) )
rows.append( ( 'EXPERIMENTAL BUGFIX: Secret discord file drag-and-drop fix: ', self._secret_discord_dnd_fix ) )
rows.append( ( 'ANTI-CRASH BUGFIX: Use Qt file/directory selection dialogs, rather than OS native: ', self._use_qt_file_dialogs ) )
@ -1372,6 +1378,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._new_options.SetBoolean( 'always_show_iso_time', self._always_show_iso_time.isChecked() )
self._new_options.SetInteger( 'human_bytes_sig_figs', self._human_bytes_sig_figs.value() )
self._new_options.SetBoolean( 'activate_window_on_tag_search_page_activation', self._activate_window_on_tag_search_page_activation.isChecked() )
self._new_options.SetInteger( 'popup_message_character_width', self._popup_message_character_width.value() )
@ -1494,9 +1502,9 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
gui_session_names = HG.client_controller.Read( 'serialisable_names', HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER )
if 'last session' not in gui_session_names:
if CC.LAST_SESSION_SESSION_NAME not in gui_session_names:
gui_session_names.insert( 0, 'last session' )
gui_session_names.insert( 0, CC.LAST_SESSION_SESSION_NAME )
self._default_gui_session.addItem( 'just a blank page', None )
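The changelog says byte sizes above 1KB now render with roughly three significant figures, controlled by this new option. A guessed, minimal version of that behaviour; hydrus's real converter lives in its string handling code and differs in detail.

def to_human_bytes( size, sig_figs = 3 ):

    if size < 1024:

        return '{}B'.format( size )

    units = [ 'KB', 'MB', 'GB', 'TB' ]

    value = float( size )

    for unit in units:

        value /= 1024

        if value < 1024 or unit == units[ -1 ]:

            # keep sig_figs total digits: whatever is left of the point,
            # then pad with decimals
            digits_before_point = len( str( int( value ) ) )
            decimals = max( 0, sig_figs - digits_before_point )

            return '{:.{}f}{}'.format( value, decimals, unit )

print( to_human_bytes( 22222 ) )      # 21.7KB
print( to_human_bytes( 1331439862 ) ) # 1.24GB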

View File

@ -102,6 +102,12 @@ class NetworkJobControl( QW.QFrame ):
if self._network_job is not None and self._network_job.engine is not None:
url = self._network_job.GetURL()
ClientGUIMenus.AppendMenuLabel( menu, url, description = 'copy URL to the clipboard' )
ClientGUIMenus.AppendSeparator( menu )
network_contexts = self._network_job.GetNetworkContexts()
if len( network_contexts ) > 0:
@ -340,6 +346,8 @@ class NetworkJobControl( QW.QFrame ):
self._network_job = None
self._gauge.setToolTip( '' )
self._Update()
HG.client_controller.gui.UnregisterUIUpdateWindow( self )
@ -351,6 +359,8 @@ class NetworkJobControl( QW.QFrame ):
self._network_job = network_job
self._gauge.setToolTip( self._network_job.GetURL() )
self._Update()
HG.client_controller.gui.RegisterUIUpdateWindow( self )

View File

@ -223,6 +223,8 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
self._management_type = None
self._last_serialisable_change_timestamp = 0
self._keys = {}
self._simples = {}
self._serialisables = {}
@ -273,6 +275,11 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
self._serialisables.update( { name : HydrusSerialisable.CreateFromSerialisableTuple( value ) for ( name, value ) in list(serialisable_serialisables.items()) } )
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
@ -582,46 +589,43 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
def GetValueRange( self ):
try:
if self.IsImporter():
if self._management_type == MANAGEMENT_TYPE_IMPORT_HDD:
try:
hdd_import = self._serialisables[ 'hdd_import' ]
if self._management_type == MANAGEMENT_TYPE_IMPORT_HDD:
importer = self._serialisables[ 'hdd_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_SIMPLE_DOWNLOADER:
importer = self._serialisables[ 'simple_downloader_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_GALLERY:
importer = self._serialisables[ 'multiple_gallery_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_WATCHER:
importer = self._serialisables[ 'multiple_watcher_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_URLS:
importer = self._serialisables[ 'urls_import' ]
return hdd_import.GetValueRange()
return importer.GetValueRange()
elif self._management_type == MANAGEMENT_TYPE_IMPORT_SIMPLE_DOWNLOADER:
except KeyError:
simple_downloader_import = self._serialisables[ 'simple_downloader_import' ]
return simple_downloader_import.GetValueRange()
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_GALLERY:
multiple_gallery_import = self._serialisables[ 'multiple_gallery_import' ]
return multiple_gallery_import.GetValueRange()
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_WATCHER:
multiple_watcher_import = self._serialisables[ 'multiple_watcher_import' ]
return multiple_watcher_import.GetValueRange()
elif self._management_type == MANAGEMENT_TYPE_IMPORT_URLS:
urls_import = self._serialisables[ 'urls_import' ]
return urls_import.GetValueRange()
return ( 0, 0 )
except KeyError:
else:
return ( 0, 0 )
return ( 0, 0 )
def GetVariable( self, name ):
@ -635,6 +639,40 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
if self.IsImporter():
if self._management_type == MANAGEMENT_TYPE_IMPORT_HDD:
importer = self._serialisables[ 'hdd_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_SIMPLE_DOWNLOADER:
importer = self._serialisables[ 'simple_downloader_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_GALLERY:
importer = self._serialisables[ 'multiple_gallery_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_WATCHER:
importer = self._serialisables[ 'multiple_watcher_import' ]
elif self._management_type == MANAGEMENT_TYPE_IMPORT_URLS:
importer = self._serialisables[ 'urls_import' ]
if importer.HasSerialisableChangesSince( since_timestamp ):
return True
return self._last_serialisable_change_timestamp > since_timestamp
def HasVariable( self, name ):
return name in self._simples or name in self._serialisables
@ -649,10 +687,17 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
self._keys[ name ] = key
self._SerialisableChangeMade()
def SetPageName( self, name ):
self._page_name = name
if name != self._page_name:
self._page_name = name
self._SerialisableChangeMade()
def SetType( self, management_type ):
@ -661,16 +706,28 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
self._InitialiseDefaults()
self._SerialisableChangeMade()
def SetVariable( self, name, value ):
if isinstance( value, HydrusSerialisable.SerialisableBase ):
self._serialisables[ name ] = value
if name not in self._serialisables or value.DumpToString() != self._serialisables[ name ].DumpToString():
self._serialisables[ name ] = value
self._SerialisableChangeMade()
else:
self._simples[ name ] = value
if name not in self._simples or value != self._simples[ name ]:
self._simples[ name ] = value
self._SerialisableChangeMade()
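A sketch of the setter pattern above, with invented names: serialisable values have no cheap equality test, so the diff compares their dumped strings, while simple values compare directly. Either way the change timestamp only moves on a real change, which is what keeps unchanged pages skippable at session-save time.

import time

class ControllerSketch:

    def __init__( self ):

        self._simples = {}
        self._serialisable_dumps = {}
        self._last_serialisable_change_timestamp = 0.0

    def _SerialisableChangeMade( self ):

        self._last_serialisable_change_timestamp = time.time()

    def SetSimple( self, name, value ):

        if name not in self._simples or self._simples[ name ] != value:

            self._simples[ name ] = value

            self._SerialisableChangeMade()

    def SetSerialisable( self, name, dump_string ):

        # stand-in for comparing value.DumpToString() against the stored object's dump
        if self._serialisable_dumps.get( name ) != dump_string:

            self._serialisable_dumps[ name ] = dump_string

            self._SerialisableChangeMade()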
@ -1819,7 +1876,7 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
self._highlighted_gallery_import = None
self._multiple_gallery_import.SetHighlightedGalleryImport( self._highlighted_gallery_import )
self._multiple_gallery_import.ClearHighlightedGalleryImport()
self._gallery_importers_listctrl_panel.UpdateButtons()
@ -2638,7 +2695,7 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
self._highlighted_watcher = None
self._multiple_watcher_import.SetHighlightedWatcher( self._highlighted_watcher )
self._multiple_watcher_import.ClearHighlightedWatcher()
self._watchers_listctrl_panel.UpdateButtons()
@ -2739,6 +2796,18 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
return ( display_tuple, sort_tuple )
def _CopySelectedSubjects( self ):
watchers = self._watchers_listctrl.GetData( only_selected = True )
if len( watchers ) > 0:
text = os.linesep.join( ( watcher.GetSubject() for watcher in watchers ) )
HG.client_controller.pub( 'clipboard', 'text', text )
def _CopySelectedURLs( self ):
watchers = self._watchers_listctrl.GetData( only_selected = True )
@ -2766,12 +2835,16 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
menu = QW.QMenu()
ClientGUIMenus.AppendMenuItem( menu, 'copy urls', 'Copy all the selected watchers\' urls to clipboard.', self._CopySelectedURLs )
ClientGUIMenus.AppendMenuItem( menu, 'open urls', 'Open all the selected watchers\' urls in your browser.', self._OpenSelectedURLs )
ClientGUIMenus.AppendSeparator( menu )
ClientGUIMenus.AppendMenuItem( menu, 'copy subjects', 'Copy all the selected watchers\' subjects to clipboard.', self._CopySelectedSubjects )
ClientGUIMenus.AppendSeparator( menu )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' presented files', 'Gather the presented files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='presented' )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' new files', 'Gather the new files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='new' )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' files', 'Gather all the files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='all' )

View File

@ -1,4 +1,5 @@
import collections
import hashlib
import os
import typing
@ -466,6 +467,10 @@ class Page( QW.QSplitter ):
self._controller.sub( self, 'SetSplitterPositions', 'set_splitter_positions' )
self._current_session_page_container = None
self._current_session_page_container_hashes_hash = self._GetCurrentSessionPageHashesHash()
self._current_session_page_container_timestamp = 0
self._ConnectMediaPanelSignals()
@ -479,6 +484,22 @@ class Page( QW.QSplitter ):
self._management_panel.ConnectMediaPanelSignals( self._media_panel )
def _GetCurrentSessionPageHashesHash( self ):
hashlist = self.GetHashes()
hashlist_hashable = tuple( hashlist )
return hash( hashlist_hashable )
def _SetCurrentPageContainer( self, page_container: ClientGUISession.GUISessionContainerPageSingle ):
self._current_session_page_container = page_container
self._current_session_page_container_hashes_hash = self._GetCurrentSessionPageHashesHash()
self._current_session_page_container_timestamp = HydrusData.GetNow()
def _SetPrettyStatus( self, status: str ):
self._pretty_status = status
@ -684,7 +705,16 @@ class Page( QW.QSplitter ):
return self._parent_notebook
def GetSerialisablePage( self ):
def GetSerialisablePage( self, only_changed_page_data, about_to_save ):
if only_changed_page_data and not self.IsCurrentSessionPageDirty():
hashes_to_page_data = {}
skipped_unchanged_page_hashes = { self._current_session_page_container.GetPageDataHash() }
return ( self._current_session_page_container, hashes_to_page_data, skipped_unchanged_page_hashes )
name = self.GetName()
@ -698,7 +728,14 @@ class Page( QW.QSplitter ):
hashes_to_page_data = { page_data_hash : page_data }
return ( page_container, hashes_to_page_data )
if about_to_save:
self._SetCurrentPageContainer( page_container )
skipped_unchanged_page_hashes = set()
return ( page_container, hashes_to_page_data, skipped_unchanged_page_hashes )
def GetSessionAPIInfoDict( self, is_selected = False ):
@ -755,6 +792,23 @@ class Page( QW.QSplitter ):
return num_hashes + ( num_seeds * 20 )
def IsCurrentSessionPageDirty( self ):
if self._current_session_page_container is None:
return True
else:
if self._GetCurrentSessionPageHashesHash() != self._current_session_page_container_hashes_hash:
return True
return self._management_controller.HasSerialisableChangesSince( self._current_session_page_container_timestamp )
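The dirtiness test above distilled, with invented names: a page is dirty if it was never captured into a session container, if its media hash list has changed since capture, or if its management controller reports changes since then.

def page_is_dirty( container, captured_hashes_hash, current_hashes, controller_changed ):

    if container is None:

        return True

    if hash( tuple( current_hashes ) ) != captured_hashes_hash:

        return True

    return controller_changed

print( page_is_dirty( 'container', hash( ( b'a', b'b' ) ), [ b'a', b'b' ], False ) ) # False
print( page_is_dirty( 'container', hash( ( b'a', b'b' ) ), [ b'a', b'c' ], False ) ) # True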
def IsGalleryDownloaderPage( self ):
return self._management_controller.GetType() == ClientGUIManagement.MANAGEMENT_TYPE_IMPORT_MULTIPLE_GALLERY
@ -819,6 +873,11 @@ class Page( QW.QSplitter ):
return self._management_controller.SetPageName( name )
def SetPageContainerClean( self, page_container: ClientGUISession.GUISessionContainerPageSingle ):
self._SetCurrentPageContainer( page_container )
def SetPrettyStatus( self, page_key, status ):
if page_key == self._page_key:
@ -999,6 +1058,8 @@ directions_for_notebook_tabs[ CC.DIRECTION_DOWN ] = QW.QTabWidget.South
class PagesNotebook( QP.TabWidgetWithDnD ):
freshSessionLoaded = QC.Signal( ClientGUISession.GUISessionContainer )
def __init__( self, parent, controller, name ):
QP.TabWidgetWithDnD.__init__( self, parent )
@ -1226,13 +1287,16 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
page = self.widget( index )
( container, hashes_to_page_data ) = page.GetSerialisablePage()
only_changed_page_data = False
about_to_save = False
( container, hashes_to_page_data, skipped_unchanged_page_hashes ) = page.GetSerialisablePage( only_changed_page_data, about_to_save )
top_notebook_container = ClientGUISession.GUISessionContainerPageNotebook( 'dupe top notebook', page_containers = [ container ] )
session = ClientGUISession.GUISessionContainer( 'dupe session', top_notebook_container = top_notebook_container, hashes_to_page_data = hashes_to_page_data )
self.InsertSession( index + 1, session )
self.InsertSession( index + 1, session, session_is_clean = False )
def _GetDefaultPageInsertionIndex( self ):
@ -1937,6 +2001,8 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
destination.AppendGUISession( session )
self.freshSessionLoaded.emit( session )
job_key.Delete()
@ -2087,11 +2153,11 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return {}
def GetCurrentGUISession( self, name ):
def GetCurrentGUISession( self, name: str, only_changed_page_data: bool, about_to_save: bool ):
( page_container, hashes_to_page_data ) = self.GetSerialisablePage()
( page_container, hashes_to_page_data, skipped_unchanged_page_hashes ) = self.GetSerialisablePage( only_changed_page_data, about_to_save )
session = ClientGUISession.GUISessionContainer( name, top_notebook_container = page_container, hashes_to_page_data = hashes_to_page_data )
session = ClientGUISession.GUISessionContainer( name, top_notebook_container = page_container, hashes_to_page_data = hashes_to_page_data, skipped_unchanged_page_hashes = skipped_unchanged_page_hashes )
return session
@ -2314,24 +2380,27 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return self._parent_notebook
def GetSerialisablePage( self ):
def GetSerialisablePage( self, only_changed_page_data, about_to_save ):
page_containers = []
hashes_to_page_data = {}
skipped_unchanged_page_hashes = set()
for page in self._GetPages():
( sub_page_container, some_hashes_to_page_data ) = page.GetSerialisablePage()
( sub_page_container, some_hashes_to_page_data, some_skipped_unchanged_page_hashes ) = page.GetSerialisablePage( only_changed_page_data, about_to_save )
page_containers.append( sub_page_container )
hashes_to_page_data.update( some_hashes_to_page_data )
skipped_unchanged_page_hashes.update( some_skipped_unchanged_page_hashes )
page_container = ClientGUISession.GUISessionContainerPageNotebook( self._name, page_containers = page_containers )
return ( page_container, hashes_to_page_data )
return ( page_container, hashes_to_page_data, skipped_unchanged_page_hashes )
def GetSessionAPIInfoDict( self, is_selected = True ):
@ -2513,7 +2582,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return False
def InsertSession( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer ):
def InsertSession( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, session_is_clean = True ):
# get the top notebook, then for every page in there...
@ -2522,10 +2591,10 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
page_containers = top_notebook_container.GetPageContainers()
select_first_page = True
self.InsertSessionNotebookPages( forced_insertion_index, session, page_containers, select_first_page )
self.InsertSessionNotebookPages( forced_insertion_index, session, page_containers, select_first_page, session_is_clean = session_is_clean )
def InsertSessionNotebook( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, notebook_page_container: ClientGUISession.GUISessionContainerPageNotebook, select_first_page: bool ):
def InsertSessionNotebook( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, notebook_page_container: ClientGUISession.GUISessionContainerPageNotebook, select_first_page: bool, session_is_clean = True ):
name = notebook_page_container.GetName()
@ -2533,10 +2602,10 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
page_containers = notebook_page_container.GetPageContainers()
page.InsertSessionNotebookPages( 0, session, page_containers, select_first_page )
page.InsertSessionNotebookPages( 0, session, page_containers, select_first_page, session_is_clean = session_is_clean )
def InsertSessionNotebookPages( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, page_containers: typing.Collection[ ClientGUISession.GUISessionContainerPage ], select_first_page: bool ):
def InsertSessionNotebookPages( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, page_containers: typing.Collection[ ClientGUISession.GUISessionContainerPage ], select_first_page: bool, session_is_clean = True ):
done_first_page = False
@ -2548,11 +2617,11 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
if isinstance( page_container, ClientGUISession.GUISessionContainerPageNotebook ):
self.InsertSessionNotebook( forced_insertion_index, session, page_container, select_page )
self.InsertSessionNotebook( forced_insertion_index, session, page_container, select_page, session_is_clean = session_is_clean )
else:
result = self.InsertSessionPage( forced_insertion_index, session, page_container, select_page )
result = self.InsertSessionPage( forced_insertion_index, session, page_container, select_page, session_is_clean = session_is_clean )
if result is None:
@ -2571,7 +2640,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
def InsertSessionPage( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, page_container: ClientGUISession.GUISessionContainerPageSingle, select_page: bool ):
def InsertSessionPage( self, forced_insertion_index: int, session: ClientGUISession.GUISessionContainer, page_container: ClientGUISession.GUISessionContainerPageSingle, select_page: bool, session_is_clean = True ):
try:
@ -2589,7 +2658,14 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
management_controller = page_data.GetManagementController()
initial_hashes = page_data.GetHashes()
return self.NewPage( management_controller, initial_hashes = initial_hashes, forced_insertion_index = forced_insertion_index, select_page = select_page )
page = self.NewPage( management_controller, initial_hashes = initial_hashes, forced_insertion_index = forced_insertion_index, select_page = select_page )
if session_is_clean and page is not None:
page.SetPageContainerClean( page_container )
return page
def IsMultipleWatcherPage( self ):

View File

@ -3,7 +3,9 @@ import itertools
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusSerialisable
RESERVED_SESSION_NAMES = { '', 'just a blank page', 'last session', 'exit session' }
from hydrus.client import ClientConstants as CC
RESERVED_SESSION_NAMES = { '', 'just a blank page', CC.LAST_SESSION_SESSION_NAME, CC.EXIT_SESSION_SESSION_NAME }
class GUISessionContainer( HydrusSerialisable.SerialisableBaseNamed ):
@ -11,7 +13,7 @@ class GUISessionContainer( HydrusSerialisable.SerialisableBaseNamed ):
SERIALISABLE_NAME = 'GUI Session Container'
SERIALISABLE_VERSION = 1
def __init__( self, name, top_notebook_container = None, hashes_to_page_data = None ):
def __init__( self, name, top_notebook_container = None, hashes_to_page_data = None, skipped_unchanged_page_hashes = None ):
HydrusSerialisable.SerialisableBaseNamed.__init__( self, name )
@ -25,8 +27,14 @@ class GUISessionContainer( HydrusSerialisable.SerialisableBaseNamed ):
hashes_to_page_data = {}
if skipped_unchanged_page_hashes is None:
skipped_unchanged_page_hashes = set()
self._top_notebook_container = top_notebook_container
self._hashes_to_page_data = hashes_to_page_data
self._skipped_unchanged_page_hashes = skipped_unchanged_page_hashes
def _GetSerialisableInfo( self ):
@ -58,7 +66,7 @@ class GUISessionContainer( HydrusSerialisable.SerialisableBaseNamed ):
return self._hashes_to_page_data[ hash ]
def GetPageDataHashes( self ):
def GetPageDataHashes( self ) -> set:
return self._top_notebook_container.GetPageDataHashes()
@ -68,6 +76,19 @@ class GUISessionContainer( HydrusSerialisable.SerialisableBaseNamed ):
return self._top_notebook_container
def GetUnchangedPageDataHashes( self ):
return set( self._skipped_unchanged_page_hashes )
def HasAllDirtyPageData( self ):
expected_hashes = self.GetPageDataHashes().difference( self._skipped_unchanged_page_hashes )
actual_hashes = set( self._hashes_to_page_data.keys() )
return expected_hashes.issubset( actual_hashes )
def HasAllPageData( self ):
expected_hashes = self.GetPageDataHashes()
@ -122,7 +143,7 @@ class GUISessionContainerPageNotebook( GUISessionContainerPage ):
self._page_containers = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_pages )
def GetPageDataHashes( self ):
def GetPageDataHashes( self ) -> set:
return set( itertools.chain.from_iterable( ( page.GetPageDataHashes() for page in self._page_containers ) ) )
@ -164,12 +185,12 @@ class GUISessionContainerPageSingle( GUISessionContainerPage ):
self._page_data_hash = bytes.fromhex( page_data_hash_hex )
def GetPageDataHash( self ):
def GetPageDataHash( self ) -> bytes:
return self._page_data_hash
def GetPageDataHashes( self ):
def GetPageDataHashes( self ) -> set:
return { self._page_data_hash }
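The container-level bookkeeping above in plain sets, with illustrative values: a save is complete enough to use if page data exists for every page hash that was not deliberately skipped as unchanged.

expected_hashes = { b'a', b'b', b'c' }                 # every page hash in the notebook tree
skipped_unchanged_page_hashes = { b'b' }               # clean pages, data already in the db
hashes_to_page_data = { b'a' : 'data', b'c' : 'data' } # freshly serialised dirty pages

has_all_dirty_page_data = expected_hashes.difference( skipped_unchanged_page_hashes ).issubset( hashes_to_page_data.keys() )
has_all_page_data = expected_hashes.issubset( hashes_to_page_data.keys() )

print( has_all_dirty_page_data, has_all_page_data ) # True False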

View File

@ -135,7 +135,7 @@ class PredicateSystemRatingNumericalControl( QW.QWidget ):
self._rated_checkbox = QW.QCheckBox( 'rated', self )
self._not_rated_checkbox = QW.QCheckBox( 'not rated', self )
self._operator = QP.RadioBox( self, choices = [ '>', '<', '=', '\u2248'] )
self._operator = QP.RadioBox( self, choices = [ '>', '<', '=', CC.UNICODE_ALMOST_EQUAL_TO ] )
self._rating_control = ClientGUIRatings.RatingNumericalDialog( self, service_key )
self._operator.Select( 2 )

View File

@ -178,7 +178,7 @@ class PanelPredicateSystemAgeDate( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=','>'] )
self._date = QW.QCalendarWidget( self )
@ -239,7 +239,7 @@ class PanelPredicateSystemAgeDelta( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'>'] )
self._years = QP.MakeQSpinBox( self, max=30, width = 60 )
self._months = QP.MakeQSpinBox( self, max=60, width = 60 )
@ -297,7 +297,7 @@ class PanelPredicateSystemModifiedDate( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=','>'] )
self._date = QW.QCalendarWidget( self )
@ -358,7 +358,7 @@ class PanelPredicateSystemModifiedDelta( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'>'] )
self._years = QP.MakeQSpinBox( self, max=30 )
self._months = QP.MakeQSpinBox( self, max=60 )
@ -416,7 +416,7 @@ class PanelPredicateSystemDuplicateRelationships( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
choices = [ '<', '\u2248', '=', '>' ]
choices = [ '<', CC.UNICODE_ALMOST_EQUAL_TO, '=', '>' ]
self._sign = QP.RadioBox( self, choices = choices )
@ -472,7 +472,7 @@ class PanelPredicateSystemDuration( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
choices = [ '<', '\u2248', '=', '>' ]
choices = [ '<', CC.UNICODE_ALMOST_EQUAL_TO, '=', CC.UNICODE_NOT_EQUAL_TO, '>' ]
self._sign = QP.RadioBox( self, choices = choices )
@ -592,7 +592,7 @@ class PanelPredicateSystemFileViewingStatsViews( PanelPredicateSystemSingle ):
self._viewing_locations.Append( 'media views', 'media' )
self._viewing_locations.Append( 'preview views', 'preview' )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=','>'] )
self._num = QP.MakeQSpinBox( self, min=0, max=1000000 )
@ -664,7 +664,7 @@ class PanelPredicateSystemFileViewingStatsViewtime( PanelPredicateSystemSingle )
self._viewing_locations.Append( 'media viewtime', 'media' )
self._viewing_locations.Append( 'preview viewtime', 'preview' )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=','>'] )
self._time_delta = ClientGUITime.TimeDeltaCtrl( self, min = 0, days = True, hours = True, minutes = True, seconds = True )
@ -731,7 +731,7 @@ class PanelPredicateSystemFramerate( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
choices = [ '<', '=', '>' ]
choices = [ '<', '=', CC.UNICODE_NOT_EQUAL_TO, '>' ]
self._sign = QP.RadioBox( self, choices = choices )
@ -907,7 +907,7 @@ class PanelPredicateSystemHeight( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=',CC.UNICODE_NOT_EQUAL_TO,'>'] )
self._height = QP.MakeQSpinBox( self, max=200000, width = 60 )
@ -1350,7 +1350,7 @@ class PanelPredicateSystemNumPixels( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=[ '<', '\u2248', '=', '>' ] )
self._sign = QP.RadioBox( self, choices=[ '<', CC.UNICODE_ALMOST_EQUAL_TO, '=', CC.UNICODE_NOT_EQUAL_TO, '>' ] )
self._num_pixels = QP.MakeQSpinBox( self, max=1048576, width = 60 )
@ -1384,7 +1384,7 @@ class PanelPredicateSystemNumPixels( PanelPredicateSystemSingle ):
def GetDefaultPredicate( self ):
sign = '\u2248'
sign = CC.UNICODE_ALMOST_EQUAL_TO
num_pixels = 2
unit = 1000000
@ -1404,7 +1404,7 @@ class PanelPredicateSystemNumFrames( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
choices = [ '<', '\u2248', '=', '>' ]
choices = [ '<', CC.UNICODE_ALMOST_EQUAL_TO, '=', CC.UNICODE_NOT_EQUAL_TO, '>' ]
self._sign = QP.RadioBox( self, choices = choices )
@ -1456,7 +1456,7 @@ class PanelPredicateSystemNumTags( PanelPredicateSystemSingle ):
self._namespace = ClientGUICommon.NoneableTextCtrl( self, none_phrase = 'all tags' )
self._namespace.setToolTip( 'Enable but leave blank for unnamespaced tags.' )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=','>'] )
self._num_tags = QP.MakeQSpinBox( self, max=2000, width = 60 )
@ -1579,7 +1579,7 @@ class PanelPredicateSystemNumWords( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=',CC.UNICODE_NOT_EQUAL_TO,'>'] )
self._num_words = QP.MakeQSpinBox( self, max=1000000, width = 60 )
@ -1627,7 +1627,7 @@ class PanelPredicateSystemRatio( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['=','wider than','taller than','\u2248'] )
self._sign = QP.RadioBox( self, choices=['=','wider than','taller than',CC.UNICODE_ALMOST_EQUAL_TO,CC.UNICODE_NOT_EQUAL_TO] )
self._width = QP.MakeQSpinBox( self, max=50000, width = 60 )
@ -1710,7 +1710,7 @@ class PanelPredicateSystemSimilarTo( PanelPredicateSystemSingle ):
QP.AddToLayout( hbox, ClientGUICommon.BetterStaticText(self,'system:similar_to'), CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( hbox, self._hashes, CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( hbox, QW.QLabel( '\u2248', self ), CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( hbox, QW.QLabel( CC.UNICODE_ALMOST_EQUAL_TO, self ), CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( hbox, self._max_hamming, CC.FLAGS_CENTER_PERPENDICULAR )
hbox.addStretch( 1 )
@ -1743,7 +1743,7 @@ class PanelPredicateSystemSize( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=',CC.UNICODE_NOT_EQUAL_TO,'>'] )
self._bytes = ClientGUIControls.BytesControl( self )
@ -1796,7 +1796,7 @@ class PanelPredicateSystemTagAsNumber( PanelPredicateSystemSingle ):
self._namespace = QW.QLineEdit( self )
choices = [ '<', '\u2248', '>' ]
choices = [ '<', CC.UNICODE_ALMOST_EQUAL_TO, '>' ]
self._sign = QP.RadioBox( self, choices = choices )
@ -1848,7 +1848,7 @@ class PanelPredicateSystemWidth( PanelPredicateSystemSingle ):
PanelPredicateSystemSingle.__init__( self, parent )
self._sign = QP.RadioBox( self, choices=['<','\u2248','=','>'] )
self._sign = QP.RadioBox( self, choices=['<',CC.UNICODE_ALMOST_EQUAL_TO,'=',CC.UNICODE_NOT_EQUAL_TO,'>'] )
self._width = QP.MakeQSpinBox( self, max=200000, width = 60 )
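The changelog says the new ≠ predicates should parse from '≠', '!=', 'is not' and "isn't" via the client api's system predicate parser. A toy normaliser under that assumption; the real parser is more involved, and the changelog itself warns it is likely buggy.

import re

OPERATOR_NORMALISATIONS = {
    '≠' : '≠', '!=' : '≠', 'is not' : '≠', "isn't" : '≠',
    '=' : '=', '==' : '=', 'is' : '='
}

def parse_simple_predicate( text ):

    # e.g. "system:height != 640" -> ( 'height', '≠', 640 )
    match = re.match( r"^system:(\w+)\s*(≠|!=|==?|is not|isn't|is)\s*(\d+)$", text )

    if match is None:

        raise ValueError( 'could not parse: ' + text )

    ( field, operator, value ) = match.groups()

    return ( field, OPERATOR_NORMALISATIONS[ operator ], int( value ) )

print( parse_simple_predicate( "system:height isn't 640" ) ) # ( 'height', '≠', 640 )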

View File

@ -93,6 +93,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job = None
self._gallery_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
HG.client_controller.sub( self, 'NotifyGallerySeedsUpdated', 'gallery_seed_log_gallery_seeds_updated' )
@ -126,19 +128,6 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
return ( serialisable_gallery_import_key, self._creation_time, self._query, self._source_name, self._current_page_index, self._num_urls_found, self._num_new_urls_found, self._file_limit, self._gallery_paused, self._files_paused, serialisable_file_import_options, serialisable_tag_import_options, serialisable_gallery_seed_log, serialisable_file_seed_cache, self._no_work_until, self._no_work_until_reason )
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
( serialisable_gallery_import_key, self._creation_time, self._query, self._source_name, self._current_page_index, self._num_urls_found, self._num_new_urls_found, self._file_limit, self._gallery_paused, self._files_paused, serialisable_file_import_options, serialisable_tag_import_options, serialisable_gallery_seed_log, serialisable_file_seed_cache, self._no_work_until, self._no_work_until_reason ) = serialisable_info
self._gallery_import_key = bytes.fromhex( serialisable_gallery_import_key )
self._file_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_import_options )
self._tag_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_import_options )
self._gallery_seed_log = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_gallery_seed_log )
self._file_seed_cache = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_seed_cache )
def _FileNetworkJobPresentationContextFactory( self, network_job ):
def enter_call():
@ -181,6 +170,19 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
return ClientImporting.NetworkJobPresentationContext( enter_call, exit_call )
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
( serialisable_gallery_import_key, self._creation_time, self._query, self._source_name, self._current_page_index, self._num_urls_found, self._num_new_urls_found, self._file_limit, self._gallery_paused, self._files_paused, serialisable_file_import_options, serialisable_tag_import_options, serialisable_gallery_seed_log, serialisable_file_seed_cache, self._no_work_until, self._no_work_until_reason ) = serialisable_info
self._gallery_import_key = bytes.fromhex( serialisable_gallery_import_key )
self._file_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_import_options )
self._tag_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_import_options )
self._gallery_seed_log = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_gallery_seed_log )
self._file_seed_cache = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_seed_cache )
def _NetworkJobFactory( self, *args, **kwargs ):
network_job = ClientNetworkingJobs.NetworkJobDownloader( self._gallery_import_key, *args, **kwargs )
@ -188,6 +190,11 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
return network_job
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
@ -676,12 +683,22 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
return self._last_serialisable_change_timestamp > since_timestamp
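Every importer in this commit gains the same pattern; a distilled sketch with invented names: mutators stamp a change time under the lock, and the session saver later asks whether anything changed since the last save.

import threading
import time

class ImporterSketch:

    def __init__( self ):

        self._lock = threading.Lock()
        self._files_paused = False
        self._last_serialisable_change_timestamp = 0.0

    def _SerialisableChangeMade( self ):

        self._last_serialisable_change_timestamp = time.time()

    def PausePlayFiles( self ):

        with self._lock:

            self._files_paused = not self._files_paused

            self._SerialisableChangeMade()

    def HasSerialisableChangesSince( self, since_timestamp ):

        with self._lock:

            return self._last_serialisable_change_timestamp > since_timestamp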
def NotifyFileSeedsUpdated( self, file_seed_cache_key, file_seeds ):
if file_seed_cache_key == self._file_seed_cache.GetFileSeedCacheKey():
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def NotifyGallerySeedsUpdated( self, gallery_seed_log_key, gallery_seeds ):
@ -690,6 +707,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()
def PausePlayFiles( self ):
@ -700,6 +719,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PausePlayGallery( self ):
@ -710,13 +731,20 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()
def PublishToPage( self, publish_to_page ):
with self._lock:
self._publish_to_page = publish_to_page
if publish_to_page != self._publish_to_page:
self._publish_to_page = publish_to_page
self._SerialisableChangeMade()
@ -734,6 +762,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._file_seed_cache.RetryFailed()
self._SerialisableChangeMade()
def RetryIgnored( self, ignored_regex = None ):
@ -742,45 +772,64 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._file_seed_cache.RetryIgnored( ignored_regex = ignored_regex )
self._SerialisableChangeMade()
def SetFileLimit( self, file_limit ):
with self._lock:
self._file_limit = file_limit
if file_limit != self._file_limit:
self._file_limit = file_limit
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options ):
def SetFileImportOptions( self, file_import_options: FileImportOptions.FileImportOptions ):
with self._lock:
self._file_import_options = file_import_options
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
def SetFileSeedCache( self, file_seed_cache ):
def SetFileSeedCache( self, file_seed_cache: ClientImportFileSeeds.FileSeedCache ):
with self._lock:
self._file_seed_cache = file_seed_cache
self._SerialisableChangeMade()
def SetGallerySeedLog( self, gallery_seed_log ):
def SetGallerySeedLog( self, gallery_seed_log: ClientImportGallerySeeds.GallerySeedLog ):
with self._lock:
self._gallery_seed_log = gallery_seed_log
self._SerialisableChangeMade()
def SetTagImportOptions( self, tag_import_options ):
def SetTagImportOptions( self, tag_import_options: TagImportOptions.TagImportOptions ):
with self._lock:
self._tag_import_options = tag_import_options
if tag_import_options.DumpToString() != self._tag_import_options.DumpToString():
self._tag_import_options = tag_import_options
@ -876,6 +925,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
@ -934,6 +985,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
@ -994,6 +1047,8 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
self._have_started = False
self._last_serialisable_change_timestamp = 0
self._last_pubbed_value_range = ( 0, 0 )
self._next_pub_value_check_time = 0
@ -1099,7 +1154,12 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
del self._gallery_import_keys_to_gallery_imports[ gallery_import_key ]
def _SetDirty( self ):
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _SetStatusDirty( self ):
self._status_dirty = True
@ -1214,6 +1274,19 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
def ClearHighlightedGalleryImport( self ):
with self._lock:
if self._highlighted_gallery_import_key is not None:
self._highlighted_gallery_import_key = None
self._SerialisableChangeMade()
def CurrentlyWorking( self ):
with self._lock:
@ -1373,6 +1446,27 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
if self._last_serialisable_change_timestamp > since_timestamp:
return True
for gallery_import in self._gallery_imports:
if gallery_import.HasSerialisableChangesSince( since_timestamp ):
return True
return False
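Why a whole downloader page is the smallest unit of the optimisation, as the changelog notes: the multiple importer counts as changed if it changed itself or if any child importer changed, so only fully finished or paused pages earn the session-save skip. Names below are invented.

def compound_has_changes_since( own_change_timestamp, child_importers, since_timestamp ):

    if own_change_timestamp > since_timestamp:

        return True

    # one busy child keeps the whole page dirty
    return any( child.HasSerialisableChangesSince( since_timestamp ) for child in child_importers )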
def PendSubscriptionGapDownloader( self, gug_key_and_name, query_text, file_limit ):
with self._lock:
@ -1413,7 +1507,9 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._importers_repeating_job )
self._SetDirty()
self._SetStatusDirty()
self._SerialisableChangeMade()
@ -1489,7 +1585,12 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._importers_repeating_job )
self._SetDirty()
self._SetStatusDirty()
if len( created_importers ) > 0:
self._SerialisableChangeMade()
return created_importers
@ -1501,7 +1602,9 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
self._RemoveGalleryImport( gallery_import_key )
self._SetDirty()
self._SetStatusDirty()
self._SerialisableChangeMade()
@ -1509,7 +1612,12 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._file_limit = file_limit
if file_limit != self._file_limit:
self._file_limit = file_limit
self._SerialisableChangeMade()
@ -1517,7 +1625,12 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._file_import_options = file_import_options
if self._file_import_options.DumpToString() != file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
@ -1525,22 +1638,27 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._gug_key_and_name = gug_key_and_name
if gug_key_and_name != self._gug_key_and_name:
self._gug_key_and_name = gug_key_and_name
self._SerialisableChangeMade()
def SetHighlightedGalleryImport( self, highlighted_gallery_import ):
def SetHighlightedGalleryImport( self, highlighted_gallery_import: GalleryImport ):
with self._lock:
if highlighted_gallery_import is None:
self._highlighted_gallery_import_key = None
else:
highlighted_gallery_import_key = highlighted_gallery_import.GetGalleryImportKey()
if highlighted_gallery_import_key != self._highlighted_gallery_import_key:
self._highlighted_gallery_import_key = highlighted_gallery_import.GetGalleryImportKey()
self._SerialisableChangeMade()
@ -1548,9 +1666,26 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._start_file_queues_paused = start_file_queues_paused
self._start_gallery_queues_paused = start_gallery_queues_paused
self._merge_simultaneous_pends_to_one_importer = merge_simultaneous_pends_to_one_importer
if start_file_queues_paused != self._start_file_queues_paused:
self._start_file_queues_paused = start_file_queues_paused
self._SerialisableChangeMade()
if start_gallery_queues_paused != self._start_gallery_queues_paused:
self._start_gallery_queues_paused = start_gallery_queues_paused
self._SerialisableChangeMade()
if merge_simultaneous_pends_to_one_importer != self._merge_simultaneous_pends_to_one_importer:
self._merge_simultaneous_pends_to_one_importer = merge_simultaneous_pends_to_one_importer
self._SerialisableChangeMade()
@ -1558,7 +1693,12 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._tag_import_options = tag_import_options
if tag_import_options.DumpToString() != self._tag_import_options.DumpToString():
self._tag_import_options = tag_import_options
self._SerialisableChangeMade()
@ -1606,7 +1746,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
if file_seed_cache.GetStatus().GetGenerationTime() > self._status_cache.GetGenerationTime(): # has there has been an update?
self._SetDirty()
self._SetStatusDirty()
break

View File

@ -18,6 +18,7 @@ from hydrus.client import ClientPaths
from hydrus.client import ClientThreading
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing.options import FileImportOptions
from hydrus.client.importing.options import TagImportOptions
from hydrus.client.metadata import ClientTags
@ -77,6 +78,8 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
@ -96,6 +99,11 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
self._file_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_options )
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
@ -269,12 +277,22 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
return self._last_serialisable_change_timestamp > since_timestamp
def NotifyFileSeedsUpdated( self, file_seed_cache_key, file_seeds ):
if file_seed_cache_key == self._file_seed_cache.GetFileSeedCacheKey():
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PausePlay( self ):
@ -285,13 +303,20 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options ):
def SetFileImportOptions( self, file_import_options: FileImportOptions.FileImportOptions ):
with self._lock:
self._file_import_options = file_import_options
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
@ -348,6 +373,8 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )

View File

@ -13,6 +13,7 @@ from hydrus.client import ClientConstants as CC
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds
from hydrus.client.importing.options import FileImportOptions
from hydrus.client.importing.options import TagImportOptions
from hydrus.client.metadata import ClientTags
from hydrus.client.networking import ClientNetworkingJobs
@ -52,6 +53,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job = None
self._queue_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
@ -126,6 +129,11 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
return ClientImporting.NetworkJobPresentationContext( enter_call, exit_call )
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
@ -371,6 +379,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._pending_jobs.insert( index - 1, job )
self._SerialisableChangeMade()
@ -399,6 +409,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._pending_jobs.insert( index + 1, job )
self._SerialisableChangeMade()
@ -411,6 +423,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._pending_jobs.remove( job )
self._SerialisableChangeMade()
@ -496,12 +510,22 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
return self._last_serialisable_change_timestamp > since_timestamp
def NotifyFileSeedsUpdated( self, file_seed_cache_key, file_seeds ):
if file_seed_cache_key == self._file_seed_cache.GetFileSeedCacheKey():
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PausePlayFiles( self ):
@ -512,6 +536,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PausePlayQueue( self ):
@ -522,6 +548,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._queue_repeating_job )
self._SerialisableChangeMade()
def PendJob( self, job ):
@ -534,14 +562,21 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._queue_repeating_job )
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options ):
def SetFileImportOptions( self, file_import_options: FileImportOptions.FileImportOptions ):
with self._lock:
self._file_import_options = file_import_options
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
@ -549,7 +584,12 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._formula_name = formula_name
if formula_name != self._formula_name:
self._formula_name = formula_name
self._SerialisableChangeMade()
@ -633,6 +673,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
@ -681,6 +723,8 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
@ -718,6 +762,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job = None
self._gallery_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
HG.client_controller.sub( self, 'NotifyGallerySeedsUpdated', 'gallery_seed_log_gallery_seeds_updated' )
@ -791,6 +837,11 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
return network_job
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
@ -968,6 +1019,14 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
return self._last_serialisable_change_timestamp > since_timestamp
def IsPaused( self ):
with self._lock:
@ -982,6 +1041,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def NotifyGallerySeedsUpdated( self, gallery_seed_log_key, gallery_seeds ):
@ -990,6 +1051,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()
def PausePlay( self ):
@ -1001,6 +1064,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()
def PendURLs( self, urls, filterable_tags = None, additional_service_keys_to_tags = None ):
@ -1062,6 +1127,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()
if len( file_seeds ) > 0:
@ -1069,22 +1136,34 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options ):
def SetFileImportOptions( self, file_import_options: FileImportOptions.FileImportOptions ):
with self._lock:
self._file_import_options = file_import_options
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
def SetTagImportOptions( self, tag_import_options ):
def SetTagImportOptions( self, tag_import_options: TagImportOptions.TagImportOptions ):
with self._lock:
self._tag_import_options = tag_import_options
if tag_import_options.DumpToString() != self._tag_import_options.DumpToString():
self._tag_import_options = tag_import_options
self._SerialisableChangeMade()
@ -1167,6 +1246,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
self._WorkOnFiles( page_key )
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
@ -1214,6 +1294,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )

View File

@ -66,6 +66,8 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
self._last_time_watchers_changed = HydrusData.GetNowPrecise()
self._last_serialisable_change_timestamp = 0
self._last_pubbed_value_range = ( 0, 0 )
self._next_pub_value_check_time = 0
@ -160,6 +162,11 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
del self._watcher_keys_to_watchers[ watcher_key ]
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _SetDirty( self ):
self._status_dirty = True
@ -262,6 +269,19 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
def ClearHighlightedWatcher( self ):
with self._lock:
if self._highlighted_watcher_url is not None:
self._highlighted_watcher_url = None
self._SerialisableChangeMade()
def GetAPIInfoDict( self, simple ):
highlighted_watcher = self.GetHighlightedWatcher()
@ -427,6 +447,27 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
return watcher.GetSimpleStatus()
def HasSerialisableChangesSince( self, since_timestamp ):
with self._lock:
if self._last_serialisable_change_timestamp > since_timestamp:
return True
for watcher in self._watchers:
if watcher.HasSerialisableChangesSince( since_timestamp ):
return True
return False
def RemoveWatcher( self, watcher_key ):
with self._lock:
@ -435,19 +476,21 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
self._SetDirty()
self._SerialisableChangeMade()
def SetHighlightedWatcher( self, highlighted_watcher ):
with self._lock:
if highlighted_watcher is None:
self._highlighted_watcher_url = None
highlighted_watcher_url = None
else:
self._highlighted_watcher_url = highlighted_watcher.GetURL()
highlighted_watcher_url = highlighted_watcher.GetURL()
if highlighted_watcher_url != self._highlighted_watcher_url:
self._highlighted_watcher_url = highlighted_watcher_url
self._SerialisableChangeMade()
@ -456,9 +499,26 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._checker_options = checker_options
self._file_import_options = file_import_options
self._tag_import_options = tag_import_options
if checker_options.DumpToString() != self._checker_options.DumpToString():
self._checker_options = checker_options
self._SerialisableChangeMade()
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
if tag_import_options.DumpToString() != self._tag_import_options.DumpToString():
self._tag_import_options = tag_import_options
self._SerialisableChangeMade()
@ -595,6 +655,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job = None
self._checker_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
@ -843,6 +905,11 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
self._tag_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_import_options )
def _SerialisableChangeMade( self ):
self._last_serialisable_change_timestamp = HydrusData.GetNow()
def _UpdateFileVelocityStatus( self ):
self._file_velocity_status = self._checker_options.GetPrettyCurrentVelocity( self._file_seed_cache, self._last_check_time )
@ -1089,6 +1156,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._checker_repeating_job )
self._SerialisableChangeMade()
def CurrentlyAlive( self ):
@ -1368,6 +1437,11 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
def HasSerialisableChangesSince( self, since_timestamp ):
return self._last_serialisable_change_timestamp > since_timestamp
def HasURL( self ):
with self._lock:
@ -1395,6 +1469,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PausePlayChecking( self ):
@ -1411,6 +1487,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._checker_repeating_job )
self._SerialisableChangeMade()
@ -1422,6 +1500,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._files_repeating_job )
self._SerialisableChangeMade()
def PublishToPage( self, publish_to_page ):
@ -1446,6 +1526,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
self._file_seed_cache.RetryFailed()
self._SerialisableChangeMade()
def RetryIgnored( self, ignored_regex = None ):
@ -1454,27 +1536,39 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
self._file_seed_cache.RetryIgnored( ignored_regex = ignored_regex )
def SetCheckerOptions( self, checker_options ):
with self._lock:
self._checker_options = checker_options
self._UpdateNextCheckTime()
self._UpdateFileVelocityStatus()
ClientImporting.WakeRepeatingJob( self._checker_repeating_job )
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options ):
def SetCheckerOptions( self, checker_options: ClientImportOptions.CheckerOptions ):
with self._lock:
self._file_import_options = file_import_options
if checker_options.DumpToString() != self._checker_options.DumpToString():
self._checker_options = checker_options
self._UpdateNextCheckTime()
self._UpdateFileVelocityStatus()
ClientImporting.WakeRepeatingJob( self._checker_repeating_job )
self._SerialisableChangeMade()
def SetFileImportOptions( self, file_import_options: FileImportOptions.FileImportOptions ):
with self._lock:
if file_import_options.DumpToString() != self._file_import_options.DumpToString():
self._file_import_options = file_import_options
self._SerialisableChangeMade()
@ -1482,7 +1576,14 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
if external_additional_service_keys_to_tags.DumpToString() != self._external_additional_service_keys_to_tags.DumpToString():
self._external_additional_service_keys_to_tags = external_additional_service_keys_to_tags
self._SerialisableChangeMade()
@ -1490,15 +1591,27 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._external_filterable_tags = set( tags )
tags_set = set( tags )
if tags_set != self._external_filterable_tags:
self._external_filterable_tags = tags_set
self._SerialisableChangeMade()
def SetTagImportOptions( self, tag_import_options ):
def SetTagImportOptions( self, tag_import_options: TagImportOptions.TagImportOptions ):
with self._lock:
self._tag_import_options = tag_import_options
if tag_import_options.DumpToString() != self._tag_import_options.DumpToString():
self._tag_import_options = tag_import_options
self._SerialisableChangeMade()
@ -1527,6 +1640,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
ClientImporting.WakeRepeatingJob( self._checker_repeating_job )
self._SerialisableChangeMade()
def Start( self, page_key, publish_to_page ):
@ -1623,6 +1738,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
@ -1684,6 +1801,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
self._CheckWatchableURL()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )

View File

@ -180,7 +180,7 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
else:
operator = '\u2248'
operator = CC.UNICODE_ALMOST_EQUAL_TO
score = 0

View File

@ -273,77 +273,6 @@ class NetworkJob( object ):
return ( connect_timeout, read_timeout )
def _SendRequestAndGetResponse( self ) -> requests.Response:
with self._lock:
ncs = list( self._network_contexts )
headers = self.engine.domain_manager.GetHeaders( ncs )
with self._lock:
method = self._method
url = self._url
data = self._body
files = self._files
if self.IS_HYDRUS_SERVICE or self.IS_IPFS_SERVICE:
headers[ 'User-Agent' ] = 'hydrus client/' + str( HC.NETWORK_VERSION )
referral_url = self.engine.domain_manager.GetReferralURL( self._url, self._referral_url )
url_headers = self.engine.domain_manager.GetURLClassHeaders( self._url )
headers.update( url_headers )
if HG.network_report_mode:
HydrusData.ShowText( 'Network Jobs Referral URLs for {}:{}Given: {}{}Used: {}'.format( self._url, os.linesep, self._referral_url, os.linesep, referral_url ) )
if referral_url is not None:
try:
referral_url.encode( 'latin-1' )
except UnicodeEncodeError:
# quick and dirty way to quote this url when it comes here with full unicode chars. not perfect, but does the job
referral_url = urllib.parse.quote( referral_url, "!#$%&'()*+,/:;=?@[]~" )
if HG.network_report_mode:
HydrusData.ShowText( 'Network Jobs Quoted Referral URL for {}:{}{}'.format( self._url, os.linesep, referral_url ) )
headers[ 'referer' ] = referral_url
for ( key, value ) in self._additional_headers.items():
headers[ key ] = value
self._status_text = 'sending request\u2026'
snc = self._session_network_context
session = self.engine.session_manager.GetSession( snc )
( connect_timeout, read_timeout ) = self._GetTimeouts()
response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
return response
def _IsCancelled( self ):
if self._is_cancelled:
@ -516,6 +445,10 @@ class NetworkJob( object ):
# ok the issue with some larger files failing here is they are actually 206, or at least would be (rather than 200), if we sent "Range: bytes=0-" or similar header
# if 206 and/or "Accept-Ranges: bytes" response header exists, then we may well be in this situation
# essentially we'll have to build infrastructure to recognise this situation and try with an actual fresh request with a new range and resume from where we left off, which means preserving bytes_read and so on
if self._num_bytes_to_read is not None and num_bytes_read_is_accurate and self._num_bytes_read < self._num_bytes_to_read:
raise HydrusExceptions.ShouldReattemptNetworkException( 'Incomplete response: Was expecting {} but actually got {} !'.format( HydrusData.ToHumanBytes( self._num_bytes_to_read ), HydrusData.ToHumanBytes( self._num_bytes_read ) ) )
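The comment above outlines the missing infrastructure: on a short read from a range-capable server, reissue the request with a Range header starting at the current byte count instead of starting over. A rough sketch of that idea with plain requests (this is not the NetworkJob implementation, and expected_size stands in for the job's num_bytes_to_read; it is just an illustration of the resume logic):

import requests

def fetch_with_resume( url, expected_size, max_attempts = 3 ):
    
    data = b''
    
    for _ in range( max_attempts ):
        
        # ask to resume from where we left off; a compliant server answers 206
        headers = { 'Range': 'bytes={}-'.format( len( data ) ) } if len( data ) > 0 else {}
        
        response = requests.get( url, headers = headers, stream = True, timeout = ( 10, 30 ) )
        
        if len( data ) > 0 and response.status_code != 206:
            
            data = b'' # the server ignored the range, so start over
            
        for chunk in response.iter_content( chunk_size = 65536 ):
            
            data += chunk
            
        if len( data ) >= expected_size:
            
            return data
        
    raise Exception( 'Incomplete response: was expecting {} bytes but got {}!'.format( expected_size, len( data ) ) )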
@ -529,6 +462,77 @@ class NetworkJob( object ):
self.engine.bandwidth_manager.ReportDataUsed( self._network_contexts, num_bytes )
def _SendRequestAndGetResponse( self ) -> requests.Response:
with self._lock:
ncs = list( self._network_contexts )
headers = self.engine.domain_manager.GetHeaders( ncs )
with self._lock:
method = self._method
url = self._url
data = self._body
files = self._files
if self.IS_HYDRUS_SERVICE or self.IS_IPFS_SERVICE:
headers[ 'User-Agent' ] = 'hydrus client/' + str( HC.NETWORK_VERSION )
referral_url = self.engine.domain_manager.GetReferralURL( self._url, self._referral_url )
url_headers = self.engine.domain_manager.GetURLClassHeaders( self._url )
headers.update( url_headers )
if HG.network_report_mode:
HydrusData.ShowText( 'Network Jobs Referral URLs for {}:{}Given: {}{}Used: {}'.format( self._url, os.linesep, self._referral_url, os.linesep, referral_url ) )
if referral_url is not None:
try:
referral_url.encode( 'latin-1' )
except UnicodeEncodeError:
# quick and dirty way to quote this url when it comes here with full unicode chars. not perfect, but does the job
referral_url = urllib.parse.quote( referral_url, "!#$%&'()*+,/:;=?@[]~" )
if HG.network_report_mode:
HydrusData.ShowText( 'Network Jobs Quoted Referral URL for {}:{}{}'.format( self._url, os.linesep, referral_url ) )
headers[ 'referer' ] = referral_url
for ( key, value ) in self._additional_headers.items():
headers[ key ] = value
self._status_text = 'sending request\u2026'
snc = self._session_network_context
session = self.engine.session_manager.GetSession( snc )
( connect_timeout, read_timeout ) = self._GetTimeouts()
response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
return response
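The latin-1 round-trip in the referral handling above is a compact way to detect a URL that cannot legally go into an HTTP header as-is. Isolated, the trick looks like this:

import urllib.parse

def make_referer_header_safe( referral_url ):
    
    # header values must be latin-1 encodable; percent-encode anything that is not,
    # preserving the usual URL structural characters
    try:
        
        referral_url.encode( 'latin-1' )
        
    except UnicodeEncodeError:
        
        referral_url = urllib.parse.quote( referral_url, "!#$%&'()*+,/:;=?@[]~" )
        
    return referral_url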
def _SetCancelled( self ):
self._is_cancelled = True
@ -1181,7 +1185,6 @@ class NetworkJob( object ):
# but this will do as a patch for now
self._actual_fetched_url = response.url
if self._actual_fetched_url != self._url and HG.network_report_mode:
HydrusData.ShowText( 'Network Jobs Redirect: {} -> {}'.format( self._url, self._actual_fetched_url ) )

View File

@ -81,7 +81,7 @@ options = {}
# Misc
NETWORK_VERSION = 20
SOFTWARE_VERSION = 454
SOFTWARE_VERSION = 455
CLIENT_API_VERSION = 20
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -158,119 +158,6 @@ def VacuumDB( db_path ):
c.execute( 'PRAGMA journal_mode = {};'.format( HG.db_journal_mode ) )
class DBCursorTransactionWrapper( HydrusDBBase.DBBase ):
def __init__( self, c: sqlite3.Cursor, transaction_commit_period: int ):
HydrusDBBase.DBBase.__init__( self )
self._SetCursor( c )
self._transaction_commit_period = transaction_commit_period
self._transaction_start_time = 0
self._in_transaction = False
self._transaction_contains_writes = False
self._last_mem_refresh_time = HydrusData.GetNow()
self._last_wal_checkpoint_time = HydrusData.GetNow()
def BeginImmediate( self ):
if not self._in_transaction:
self._Execute( 'BEGIN IMMEDIATE;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
self._transaction_start_time = HydrusData.GetNow()
self._in_transaction = True
self._transaction_contains_writes = False
def Commit( self ):
if self._in_transaction:
self._Execute( 'COMMIT;' )
self._in_transaction = False
self._transaction_contains_writes = False
if HG.db_journal_mode == 'WAL' and HydrusData.TimeHasPassed( self._last_wal_checkpoint_time + 1800 ):
self._Execute( 'PRAGMA wal_checkpoint(PASSIVE);' )
self._last_wal_checkpoint_time = HydrusData.GetNow()
if HydrusData.TimeHasPassed( self._last_mem_refresh_time + 600 ):
self._Execute( 'DETACH mem;' )
self._Execute( 'ATTACH ":memory:" AS mem;' )
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
self._last_mem_refresh_time = HydrusData.GetNow()
else:
HydrusData.Print( 'Received a call to commit, but was not in a transaction!' )
def CommitAndBegin( self ):
if self._in_transaction:
self.Commit()
self.BeginImmediate()
def InTransaction( self ):
return self._in_transaction
def NotifyWriteOccuring( self ):
self._transaction_contains_writes = True
def Rollback( self ):
if self._in_transaction:
self._Execute( 'ROLLBACK TO hydrus_savepoint;' )
# any temp int tables created in this lad will be rolled back, so 'initialised' can't be trusted. just reset, no big deal
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
# still in transaction
# transaction may no longer contain writes, but it isn't important to figure out that it doesn't
else:
HydrusData.Print( 'Received a call to rollback, but was not in a transaction!' )
def Save( self ):
self._Execute( 'RELEASE hydrus_savepoint;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
def TimeToCommit( self ):
return self._in_transaction and self._transaction_contains_writes and HydrusData.TimeHasPassed( self._transaction_start_time + self._transaction_commit_period )
class HydrusDB( HydrusDBBase.DBBase ):
READ_WRITE_ACTIONS = []
@ -361,7 +248,7 @@ class HydrusDB( HydrusDBBase.DBBase ):
raise Exception( 'Your current database version of hydrus ' + str( version ) + ' is too old for this software version ' + str( HC.SOFTWARE_VERSION ) + ' to update. Please try updating with version ' + str( version + 45 ) + ' or earlier first.' )
self._RepairDB()
self._RepairDB( version )
while version < HC.SOFTWARE_VERSION:
@ -575,7 +462,7 @@ class HydrusDB( HydrusDBBase.DBBase ):
self._is_connected = True
self._cursor_transaction_wrapper = DBCursorTransactionWrapper( self._c, HG.db_transaction_commit_period )
self._cursor_transaction_wrapper = HydrusDBBase.DBCursorTransactionWrapper( self._c, HG.db_transaction_commit_period )
self._LoadModules()
@ -743,9 +630,12 @@ class HydrusDB( HydrusDBBase.DBBase ):
raise NotImplementedError()
def _RepairDB( self ):
def _RepairDB( self, version ):
pass
for module in self._modules:
module.Repair( version, self._cursor_transaction_wrapper )
def _ReportOverupdatedDB( self, version ):

View File

@ -1,6 +1,7 @@
import collections
import sqlite3
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
class TemporaryIntegerTableNameCache( object ):
@ -179,11 +180,6 @@ class DBBase( object ):
self._c.executemany( query, args_iterator )
def _GenerateIndexName( self, table_name, columns ):
return '{}_{}_index'.format( table_name, '_'.join( columns ) )
def _ExecuteManySelectSingleParam( self, query, single_param_iterator ):
select_args_iterator = ( ( param, ) for param in single_param_iterator )
@ -208,6 +204,27 @@ class DBBase( object ):
def _GenerateIndexName( self, table_name, columns ):
return '{}_{}_index'.format( table_name, '_'.join( columns ) )
def _GetAttachedDatabaseNames( self, include_temp = False ):
if include_temp:
f = lambda schema_name, path: True
else:
f = lambda schema_name, path: schema_name != 'temp' and path != ''
names = [ schema_name for ( number, schema_name, path ) in self._Execute( 'PRAGMA database_list;' ) if f( schema_name, path ) ]
return names
def _GetLastRowId( self ) -> int:
return self._c.lastrowid
@ -227,6 +244,13 @@ class DBBase( object ):
def _IndexExists( self, table_name, columns ):
index_name = self._GenerateIndexName( table_name, columns )
return self._TableOrIndexExists( index_name, 'index' )
def _MakeTemporaryIntegerTable( self, integer_iterable, column_name ):
return TemporaryIntegerTable( self._c, integer_iterable, column_name )
@ -257,4 +281,149 @@ class DBBase( object ):
return { item for ( item, ) in iterable_cursor }
def _TableExists( self, table_name ):
return self._TableOrIndexExists( table_name, 'table' )
def _TableOrIndexExists( self, name, item_type ):
if '.' in name:
( schema, name ) = name.split( '.', 1 )
search_schemas = [ schema ]
else:
search_schemas = self._GetAttachedDatabaseNames()
for schema in search_schemas:
result = self._Execute( 'SELECT 1 FROM {}.sqlite_master WHERE name = ? AND type = ?;'.format( schema ), ( name, item_type ) ).fetchone()
if result is not None:
return True
return False
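The same existence probe works standalone: honour an explicit 'schema.name', otherwise walk every attached database's sqlite_master. A minimal sketch with sqlite3:

import sqlite3

def table_or_index_exists( conn, name, item_type = 'table' ):
    
    if '.' in name:
        
        ( schema, name ) = name.split( '.', 1 )
        
        schemas = [ schema ]
        
    else:
        
        schemas = [ row[1] for row in conn.execute( 'PRAGMA database_list;' ) if row[1] != 'temp' ]
        
    for schema in schemas:
        
        query = 'SELECT 1 FROM {}.sqlite_master WHERE name = ? AND type = ?;'.format( schema )
        
        if conn.execute( query, ( name, item_type ) ).fetchone() is not None:
            
            return True
        
    return False

conn = sqlite3.connect( ':memory:' )
conn.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY );' )

print( table_or_index_exists( conn, 'files' ) ) # True
print( table_or_index_exists( conn, 'main.missing' ) ) # False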
class DBCursorTransactionWrapper( DBBase ):
def __init__( self, c: sqlite3.Cursor, transaction_commit_period: int ):
DBBase.__init__( self )
self._SetCursor( c )
self._transaction_commit_period = transaction_commit_period
self._transaction_start_time = 0
self._in_transaction = False
self._transaction_contains_writes = False
self._last_mem_refresh_time = HydrusData.GetNow()
self._last_wal_checkpoint_time = HydrusData.GetNow()
def BeginImmediate( self ):
if not self._in_transaction:
self._Execute( 'BEGIN IMMEDIATE;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
self._transaction_start_time = HydrusData.GetNow()
self._in_transaction = True
self._transaction_contains_writes = False
def Commit( self ):
if self._in_transaction:
self._Execute( 'COMMIT;' )
self._in_transaction = False
self._transaction_contains_writes = False
if HG.db_journal_mode == 'WAL' and HydrusData.TimeHasPassed( self._last_wal_checkpoint_time + 1800 ):
self._Execute( 'PRAGMA wal_checkpoint(PASSIVE);' )
self._last_wal_checkpoint_time = HydrusData.GetNow()
if HydrusData.TimeHasPassed( self._last_mem_refresh_time + 600 ):
self._Execute( 'DETACH mem;' )
self._Execute( 'ATTACH ":memory:" AS mem;' )
TemporaryIntegerTableNameCache.instance().Clear()
self._last_mem_refresh_time = HydrusData.GetNow()
else:
HydrusData.Print( 'Received a call to commit, but was not in a transaction!' )
def CommitAndBegin( self ):
if self._in_transaction:
self.Commit()
self.BeginImmediate()
def InTransaction( self ):
return self._in_transaction
def NotifyWriteOccuring( self ):
self._transaction_contains_writes = True
def Rollback( self ):
if self._in_transaction:
self._Execute( 'ROLLBACK TO hydrus_savepoint;' )
# any temp int tables created in this lad will be rolled back, so 'initialised' can't be trusted. just reset, no big deal
TemporaryIntegerTableNameCache.instance().Clear()
# still in transaction
# transaction may no longer contain writes, but it isn't important to figure out that it doesn't
else:
HydrusData.Print( 'Received a call to rollback, but was not in a transaction!' )
def Save( self ):
self._Execute( 'RELEASE hydrus_savepoint;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
def TimeToCommit( self ):
return self._in_transaction and self._transaction_contains_writes and HydrusData.TimeHasPassed( self._transaction_start_time + self._transaction_commit_period )
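The savepoint discipline above is easiest to see in miniature: one long BEGIN IMMEDIATE transaction, with a rolling savepoint so that Save() checkpoints good work and Rollback() only discards work since the last checkpoint. A sketch in raw sqlite3:

import sqlite3

conn = sqlite3.connect( ':memory:', isolation_level = None ) # manual transaction control
c = conn.cursor()

c.execute( 'CREATE TABLE jobs ( id INTEGER PRIMARY KEY, payload TEXT );' )

c.execute( 'BEGIN IMMEDIATE;' )
c.execute( 'SAVEPOINT hydrus_savepoint;' )

c.execute( 'INSERT INTO jobs ( payload ) VALUES ( ? );', ( 'good work', ) )

# Save(): release and immediately re-establish the savepoint
c.execute( 'RELEASE hydrus_savepoint;' )
c.execute( 'SAVEPOINT hydrus_savepoint;' )

c.execute( 'INSERT INTO jobs ( payload ) VALUES ( ? );', ( 'bad work', ) )

# Rollback(): undo only the work since the last Save(), staying in the transaction
c.execute( 'ROLLBACK TO hydrus_savepoint;' )

c.execute( 'COMMIT;' )

print( c.execute( 'SELECT payload FROM jobs;' ).fetchall() ) # [('good work',)]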

View File

@ -2,6 +2,7 @@ import sqlite3
import typing
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
class HydrusDBModule( HydrusDBBase.DBBase ):
@ -14,16 +15,92 @@ class HydrusDBModule( HydrusDBBase.DBBase ):
self._SetCursor( cursor )
def _GetInitialIndexGenerationTuples( self ):
def _FlattenIndexGenerationDict( self, index_generation_dict: dict ):
tuples = []
for ( table_name, index_rows ) in index_generation_dict.items():
tuples.extend( ( ( table_name, columns, unique, version_added ) for ( columns, unique, version_added ) in index_rows ) )
return tuples
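The dict shape this expects maps a table name to a list of ( columns, unique, version_added ) rows, flattened into one tuple per index. For instance (a hypothetical module's declaration):

index_generation_dict = {
    'current_files': [
        ( [ 'timestamp' ], False, 447 ),
        ( [ 'service_id', 'hash_id' ], True, 400 )
    ]
}

# flattens to:
# [ ( 'current_files', [ 'timestamp' ], False, 447 ),
#   ( 'current_files', [ 'service_id', 'hash_id' ], True, 400 ) ]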
def _GetCriticalTableNames( self ) -> typing.Collection[ str ]:
return set()
def _GetServiceIndexGenerationDict( self, service_id ) -> dict:
return {}
def _GetServiceTableGenerationDict( self, service_id ) -> dict:
return {}
def _GetServicesIndexGenerationDict( self ) -> dict:
index_generation_dict = {}
for service_id in self._GetServiceIdsWeGenerateDynamicTablesFor():
index_generation_dict.update( self._GetServiceIndexGenerationDict( service_id ) )
return index_generation_dict
def _GetServicesTableGenerationDict( self ) -> dict:
table_generation_dict = {}
for service_id in self._GetServiceIdsWeGenerateDynamicTablesFor():
table_generation_dict.update( self._GetServiceTableGenerationDict( service_id ) )
return table_generation_dict
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
return []
def _GetInitialIndexGenerationDict( self ) -> dict:
return {}
def _GetInitialTableGenerationDict( self ) -> dict:
return {}
def _PresentMissingIndicesWarningToUser( self, index_names ):
raise NotImplementedError()
def _PresentMissingTablesWarningToUser( self, table_names ):
raise NotImplementedError()
def _RepairRepopulateTables( self, table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
pass
def CreateInitialIndices( self ):
index_generation_tuples = self._GetInitialIndexGenerationTuples()
index_generation_dict = self._GetInitialIndexGenerationDict()
for ( table_name, columns, unique ) in index_generation_tuples:
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
self._CreateIndex( table_name, columns, unique = unique )
@ -31,21 +108,54 @@ class HydrusDBModule( HydrusDBBase.DBBase ):
def CreateInitialTables( self ):
raise NotImplementedError()
table_generation_dict = self._GetInitialTableGenerationDict()
for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items():
self._Execute( create_query_without_name.format( table_name ) )
def GetExpectedIndexNames( self ) -> typing.Collection[ str ]:
def GetExpectedServiceIndexNames( self ) -> typing.Collection[ str ]:
index_generation_tuples = self._GetInitialIndexGenerationTuples()
index_generation_dict = self._GetServicesIndexGenerationDict()
expected_index_names = [ self._GenerateIndexName( table_name, columns ) for ( table_name, columns, unique ) in index_generation_tuples ]
expected_index_names = []
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
expected_index_names.append( self._GenerateIndexName( table_name, columns ) )
return expected_index_names
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
def GetExpectedInitialIndexNames( self ) -> typing.Collection[ str ]:
raise NotImplementedError()
index_generation_dict = self._GetInitialIndexGenerationDict()
expected_index_names = []
for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ):
expected_index_names.append( self._GenerateIndexName( table_name, columns ) )
return expected_index_names
def GetExpectedServiceTableNames( self ) -> typing.Collection[ str ]:
table_generation_dict = self._GetServicesTableGenerationDict()
return list( table_generation_dict.keys() )
def GetExpectedInitialTableNames( self ) -> typing.Collection[ str ]:
table_generation_dict = self._GetInitialTableGenerationDict()
return list( table_generation_dict.keys() )
def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
@ -55,3 +165,100 @@ class HydrusDBModule( HydrusDBBase.DBBase ):
raise NotImplementedError()
def Repair( self, current_db_version, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
# core, initial tables first
table_generation_dict = self._GetInitialTableGenerationDict()
missing_table_rows = [ ( table_name, create_query_without_name ) for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items() if version_added <= current_db_version and not self._TableExists( table_name ) ]
if len( missing_table_rows ) > 0:
missing_table_names = sorted( [ missing_table_row[0] for missing_table_row in missing_table_rows ] )
critical_table_names = self._GetCriticalTableNames()
missing_critical_table_names = set( missing_table_names ).intersection( critical_table_names )
if len( missing_critical_table_names ) > 0:
message = 'Unfortunately, this database is missing one or more critical tables! This database is non functional and cannot be repaired. Please check out "install_dir/db/help my db is broke.txt" for the next steps.'
raise HydrusExceptions.DBAccessException( message )
self._PresentMissingTablesWarningToUser( missing_table_names )
for ( table_name, create_query_without_name ) in missing_table_rows:
self._Execute( create_query_without_name.format( table_name ) )
cursor_transaction_wrapper.CommitAndBegin()
self._RepairRepopulateTables( missing_table_names, cursor_transaction_wrapper )
cursor_transaction_wrapper.CommitAndBegin()
# now indices for those tables
index_generation_dict = self._GetInitialIndexGenerationDict()
missing_index_rows = [ ( self._GenerateIndexName( table_name, columns ), table_name, columns, unique ) for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ) if version_added <= current_db_version and not self._IndexExists( table_name, columns ) ]
if len( missing_index_rows ):
self._PresentMissingIndicesWarningToUser( sorted( [ index_name for ( index_name, table_name, columns, unique ) in missing_index_rows ] ) )
for ( index_name, table_name, columns, unique ) in missing_index_rows:
self._CreateIndex( table_name, columns, unique = unique )
cursor_transaction_wrapper.CommitAndBegin()
# now do service tables, same thing over again
table_generation_dict = self._GetServicesTableGenerationDict()
missing_table_rows = [ ( table_name, create_query_without_name ) for ( table_name, ( create_query_without_name, version_added ) ) in table_generation_dict.items() if version_added <= current_db_version and not self._TableExists( table_name ) ]
if len( missing_table_rows ) > 0:
missing_table_names = sorted( [ missing_table_row[0] for missing_table_row in missing_table_rows ] )
self._PresentMissingTablesWarningToUser( missing_table_names )
for ( table_name, create_query_without_name ) in missing_table_rows:
self._Execute( create_query_without_name.format( table_name ) )
cursor_transaction_wrapper.CommitAndBegin()
self._RepairRepopulateTables( missing_table_names, cursor_transaction_wrapper )
cursor_transaction_wrapper.CommitAndBegin()
# now indices for those tables
index_generation_dict = self._GetServicesIndexGenerationDict()
missing_index_rows = [ ( self._GenerateIndexName( table_name, columns ), table_name, columns, unique ) for ( table_name, columns, unique, version_added ) in self._FlattenIndexGenerationDict( index_generation_dict ) if version_added <= current_db_version and not self._IndexExists( table_name, columns ) ]
if len( missing_index_rows ):
self._PresentMissingIndicesWarningToUser( sorted( [ index_name for ( index_name, table_name, columns, unique ) in missing_index_rows ] ) )
for ( index_name, table_name, columns, unique ) in missing_index_rows:
self._CreateIndex( table_name, columns, unique = unique )
cursor_transaction_wrapper.CommitAndBegin()
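Taken together, a module describes its schema declaratively and Repair recreates whatever is missing for the user's version. A hedged sketch of what a concrete module's declarations could look like (the table name, query, and version numbers here are invented for illustration; the dict shapes follow the loops above):

class ClientDBExampleModule( HydrusDBModule ):
    
    def _GetInitialTableGenerationDict( self ) -> dict:
        
        # table name -> ( CREATE query with a {} placeholder for the name, version added )
        return {
            'main.example_hashes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, hash BLOB );', 400 )
        }
        
    def _GetInitialIndexGenerationDict( self ) -> dict:
        
        # table name -> [ ( columns, unique, version added ) ]
        return {
            'main.example_hashes' : [ ( [ 'hash' ], True, 400 ) ]
        }
        
    def _GetCriticalTableNames( self ) -> set:
        
        # a missing critical table aborts the boot instead of attempting repair
        return { 'main.example_hashes' }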

View File

@ -1,5 +1,6 @@
import collections
import cProfile
import decimal
import fractions
import io
import itertools
@ -435,7 +436,7 @@ def ConvertTimestampToPrettyTime( timestamp, in_utc = False, include_24h_time =
return 'unparseable time {}'.format( timestamp )
def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
def BaseTimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
if timestamp is None:
@ -479,6 +480,8 @@ def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_thr
return 'unparseable time {}'.format( timestamp )
TimestampToPrettyTimeDelta = BaseTimestampToPrettyTimeDelta
def ConvertUglyNamespaceToPrettyString( namespace ):
if namespace is None or namespace == '':
@ -1553,13 +1556,62 @@ def TimeUntil( timestamp ):
return timestamp - GetNow()
def ToHumanBytes( size ):
def BaseToHumanBytes( size, sig_figs = 3 ):
#
# ░█▓▓▓▓▓▒ ░▒░ ▒ ▒ ░ ░ ░▒ ░░ ░▒ ░ ░▒░ ▒░▒▒▒░▓▓▒▒▓
# ▒▓▒▒▓▒ ░ ░ ▒ ░░ ░░ ░▒▒▒▓▒▒▓▓
# ▓█▓▒░ ▒▒░ ▒░ ▒▓░ ░ ░░░ ░ ░░░▒▒ ░ ░░░ ▒▓▓▓▒▓▓▒
# ▒▒░▒▒░░ ▒░░▒▓░▒░▒░░░░░░░░ ░▒▒▒▓ ░ ░▒▒░ ▒▓▒▓█▒
# ░█▓ ░░░ ▒▒░▒▒ ▒▒▒░░░░▒▒░░▒▒▒░░▒▒ ▒░ ░░░░░░▒█▒
# ░░▒▒ ▒░░▒░▒▒▒░ ░░░▒▒▒░░▒░ ░ ▓▒▒░ ░ ▒█▓░▒░
# ░░░ ░▒ ▓▒░▒░▒ ░▒▒▒▒ ▒▓░ ░ ▒ ▒█▓
# ░░░ ▓░▒▒░░▒ ▒▒▒▒░░ ░░░ ░░░ ▒ ░▒░ ░
# ░▒░ ▓░▒▒░░▒░▒▓▓████▓ ░▒▒▒▒▒ ░▒░░ ▓▒ ▒▒ ░▒░
# ░░░░ ░░ ░░▒░▒░░░ ░░░ ░ ░▒▒▒▒▓█▓ ▒░░░▒▓░▒▒░ ░░░░
# ▒ ░ ░░░░░░ ░▓░▓▒░░░ ░ ░░ ░▒▒░▒▒▒▒░▓░ ░ ░▒░
# ▒░░ ░░░░░░ ▒▒░▒░▒▒░░░░░░ ░░░░░░▒▒░▓▒░▒▒▓░ ░ ░░
# ░░░░▒▒▒░░░ ░░ ▓░▒░░▒▒░ ░░░░░░ ░▒▒▒▒▓▓ ░ ░ ░░ ▒░░
# ▒▒░ ▒▒░ ░░ ▒▒▒▓░░▒ ░ ░░░░░░░ ░░▒░░▒▓ ░░░░ ░░ ░▒░░▒
# ▒░░ ░ ▒▒ ░░ ▒▓▒▓▒ ▒░░▒▓ ▓▒░▒▒░░░▓▒ ░░ ░░ ▒▒░ ░░
# ░ ░▒▒░░▒▒░░ ░▒▒▒▒▒ ▒░░▒▓▒░ ░▒▓█▒░▒ ░░▒▓▒ ▒▒▒ ▒░▒░
# ▒░▒▓▓░░░░░▒▒▒▒░░░▓▒ ▒░ ▒▓░░▒▒░ ░░▒▒░▒▓░ ▒░ ▒░▒▒▒ ░░░░▓ ▒░▒░
# ▒ ▒▒ ▒░░░▒░░ ▒▒ ░░░▒▒░░░░▒▒▒░ ░░▒▒░▒▒░░▒▓▒░▒ ░ ▒░▒▒░░░░░▒░░▒▒▒▒
# ░ ▒▒▒░ ░░▒ ▒░░▒░░░░░▒▒▒▒▒▒▒▒▒▒▓▒░░▒░▒▒░▒▒░ ░ ▒░░░▒░░░░░░░▒▒▒░
# ▒ ▒▓ ░▒▒ ░▒░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▓▒░▒▒▒▒▒░░▒░ ▒░ ░▒▒░▒░ ▒
# ▒ ▒▒░ ░▒░ ░ ▒░░▒▒░ ░▒▒▒▒ ░▒▒▒░ ▒░▒▒ ▒▒ ▒▒ ▒ ▓░░▒░
# ░░ ░▒ ░▒▒░ ░ ░▒░░░░ ░░ ░ ▒▒▓▓█▒ ░░▒▓▒░ ▒░ ░▒▒░░░
# ▒░ ░░▒ ▒▒▒ ░░▒▒░░░ ░ ▒░ ░░▒▒▒▒▒░██▓▒▓██ ▒▓▒▒░ ░ ▒▒ ▒▓▒ ░░
# ░░▒▒▒▒▒▒░ ░░░ ░ ▒▒░▒ ░▒▒░▓█░ ██▓▓▓▓ ▒ ░▓ ░░▒░ ░▒░ ▒░░░░▒
# ▒▒░ ░▒▒ ▒▓▒ ▓▓▓▓█▒▓▓▓▒▒▓▓▓▓ ▒▓▒ ▒▒ ░░░ ░▒▒░░░▒░
# ░ ░░▒▒▓▒▒ ░░▓▓▓███▒▒█▓██▒░░ ▒▒▒▒░ ▒▓ ░▒▒░ ░ ░▒▒▒░
# ░ ░░ ░▒▒▒▒▒▓▓▒▒░ ▒▒▒█▓▓▓░ ▓▓▒▓▓▒░░░▒▓▒░░ ▒▒ ░▒█▓░ ░░ ░░░
# ▒░░▒▓▒░▒░ ▒▒ ▒▒░ ░░░░▒▓▓▓▒▓▓▓▓ ▒▓▒▒▓▓░▒▒▒▒▒▒▒▓▒ ▒▒ ░▒▒▒▓▓▒░░▒ ░▒▒▒
# ▒▒▒░ ▒░ ░░ ▒ ░▓▓▒▒░░░░░░░ ▓░▒░ ░▓▓▓░ ░▒▓░▒▓▒▓▒▓▓▓
# ▓░ ▒░ ▒▓ ▓░ ░░░░░░░░░░░▒▒▒▒▒▒▓▓▒ ▒▒▓▓▓▓▓▓▓▓▓░ ▒▓▒▓▓▓▒▓▓▓▓
# ░░ ▓▒ ░▒▓▓▒▒▒▓▓▓▒▓▓▓▓▓▓█▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ ▓▒▒▒▒▒▓▓▓▓▒▓▓▒▒▒▒▒▓▓▓▓▒
# ░░░ ▒▒▒▒▓▓░▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░▓▒▒▒▒▒▒▒▓▓▓▓▓▓▓▒▒▒▒▒░ ░
# ░░ ▒▓ ██░▓▓▓▓▓▓▓▓▓▓█▓▓▒▒▒ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░▒▓▒▒▒▒▒▒▒▒▒░▒▒░░▒▒░ ░░░░░
# ░▒░░░▒▓ ▓░▒▓▓▓▓▓▓▓▓▓▒ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▒▓ ▓░ ▒▒▒▒▒▒▒▒░ ░▒▒▒▒░░░░
# ▒▒░ ░▒▓ ▒▓░▓▓▓▓█▓▒ ░▓▓▓▓▓▓▓█▒▓▒ ▒▓▓▓▓▓▒▓░ ▓ ░▒▒░▒░░░░▒▒▒▒▒░░░░░
# ▓▒▒▒▒▒▒▓▓ ▒▓▒▓▓▓▒░ ░▓▓██▓▓▓█▓▒ ░ ▒▓▓▓▓▒▓▒ ▒▓ ▒ ░▓▓▓▓▒ ░░░░▒
# ░░▓▓▓▓▒▓▓ ░░▓▓ ░▓████▓▓▓▒▒▒ ▒██▓▒▓█▒ ▓▒ ▒ ░░▒▓▓▒▓▒▒▒
# ░░▒▒▒▒ ▒▒ ▒▓█▓▒░░ ░▓██▓▒░░░█░ ▓ ▒ ░ ░▓▒▒▓▓▒▓▒
# ░░░░░ ▒▒ ▓▒ ░░ ▒░░▒▒▓▓▓▒░░ ░█▓█▒ ▒▓ ▒░░ ░▓▒▒
# ▒░░░░▒▓▒ ░██ ▒▒▒▒░░▒▓ ▒▒ ░▒░▒███░ ▓▒ ▒░░ ░▓▒░░░░▒
# ░ ░░░▓▒░ ▒█▒▓█░ ░ ▒░ ░░ ▒▓▒ ▓ ░▒ ░▓▓▒░▒░░░
# ░░▒▒▒▓▓▓▓░▓▓ ██▒ ░▒░▒▒▒▒░░▒ ▒▓ ▓ ▒▓ ▒░ ░░▓▓▒░▒▒▒░
# ░░▒▒▓█▓░ █ ░█▒ ░ ░ ░▒▒░░▓░ ▓░ ░▓░ █░ ░░ ▒░ ░ ▓▓▒░ ░▒░
# ░░▒▓░ █ ██ ░ ░▒ ▒ █▓▒▒▒░░▒▒░ ▓▒ ░ ▒▓▒░ ░
#
if size is None:
return 'unknown size'
# my definition of sig figs is nonsense here. basically I mean 'can we show decimal places and not look stupid long?'
if size < 1024:
return ToHumanInt( size ) + 'B'
@ -1578,19 +1630,44 @@ def ToHumanBytes( size ):
suffix = suffixes[ suffix_index ]
if size < 10.0:
# 3.1MB
return '{:.1f}{}B'.format( size, suffix )
else:
# 23MB
return '{:.0f}{}B'.format( size, suffix )
d = decimal.Decimal( size )
ctx = decimal.getcontext()
# ok, if we have 237KB, we still want all 237, even if user said 2 sf
while d.log10() >= sig_figs:
sig_figs += 1
ctx.prec = sig_figs
ctx.rounding = decimal.ROUND_HALF_EVEN
d = d.normalize( ctx )
try:
# if we have 30, this will be normalised to 3E+1, so we want to quantize it back
( sign, digits, exp ) = d.as_tuple()
if exp > 0:
ctx.prec = 10 # careful to make precision bigger again though, or we get an error
d = d.quantize( 0 )
except:
# blarg
pass
return '{}{}B'.format( d, suffix )
ToHumanBytes = BaseToHumanBytes
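A worked check of the new rendering, matching the updated bandwidth tests at the end of this diff: 1024 bytes is exactly 1KB and now prints '1KB' (the trailing '.0' is dropped), while 1088 bytes is 1.0625KB and rounds to '1.06KB' at the default three significant figures. A minimal re-derivation for the KB case only, assuming size >= 1024:

import decimal

def to_human_kb( size, sig_figs = 3 ):
    
    d = decimal.Decimal( size ) / 1024
    
    ctx = decimal.getcontext()
    
    # widen the precision so a value like 237 keeps all its integer digits
    while d.log10() >= sig_figs:
        
        sig_figs += 1
        
    ctx.prec = sig_figs
    ctx.rounding = decimal.ROUND_HALF_EVEN
    
    d = d.normalize( ctx )
    
    ( sign, digits, exp ) = d.as_tuple()
    
    if exp > 0:
        
        # 30 normalises to 3E+1; quantize it back to a plain integer
        ctx.prec = 10
        
        d = d.quantize( 0 )
        
    return '{}KB'.format( d )

print( to_human_kb( 1024 ) ) # 1KB
print( to_human_kb( 1088 ) ) # 1.06KB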
def ToHumanInt( num ):
num = int( num )

View File

@ -30,8 +30,8 @@ export_folders_running = False
profile_mode = False
db_profile_min_job_time_ms = 16
callto_profile_min_job_time_ms = 5
server_profile_min_job_time_ms = 5
callto_profile_min_job_time_ms = 10
server_profile_min_job_time_ms = 10
menu_profile_min_job_time_ms = 16
pubsub_profile_min_job_time_ms = 5
ui_timer_profile_min_job_time_ms = 5

View File

@ -333,8 +333,11 @@ def parse_operator(string, spec):
if spec is None:
return string, None
elif spec == Operators.RELATIONAL:
ops = ['\u2248', '=', '<', '>']
ops = ['\u2248', '=', '<', '>', '\u2260']
if string.startswith('=='): return string[2:], '='
if string.startswith('!='): return string[2:], '\u2260'
if string.startswith('is not'): return string[6:], '\u2260'
if string.startswith('isn\'t'): return string[5:], '\u2260'
if string.startswith( '~=' ): return string[2:], '\u2248'
for op in ops:
if string.startswith(op): return string[len(op):], op
@ -344,6 +347,7 @@ def parse_operator(string, spec):
if string.startswith('=='): return string[2:], '='
if string.startswith('='): return string[1:], '='
if string.startswith('is'): return string[2:], '='
if string.startswith( '\u2260' ): return string[1:], '!='
if string.startswith('!='): return string[2:], '!='
if string.startswith('is not'): return string[6:], '!='
if string.startswith('isn\'t'): return string[5:], '!='
@ -359,7 +363,7 @@ def parse_operator(string, spec):
if match: return string[len(match[0]):], 'is not pending to'
raise ValueError("Invalid operator, expected a file service relationship")
elif spec == Operators.TAG_RELATIONAL:
match = re.match('(?P<tag>.*)\s+(?P<op>(<|>|=|==|~=|\u2248|is))', string)
match = re.match('(?P<tag>.*)\s+(?P<op>(<|>|=|==|~=|\u2248|\u2260|is|is not))', string)
if match:
tag = match['tag']
op = match['op']
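With these rules, every spelling of not-equal ('≠', '!=', 'is not', "isn't") collapses to the same operator, as the changelog promises. A condensed standalone restatement of the RELATIONAL branch (an illustration, not the library function itself):

def parse_relational_operator( string ):
    
    # normalise the textual spellings first, then fall back to the literal symbols
    for ( prefix, op ) in ( ( '==', '=' ), ( '!=', '\u2260' ), ( 'is not', '\u2260' ), ( "isn't", '\u2260' ), ( '~=', '\u2248' ) ):
        
        if string.startswith( prefix ):
            
            return ( string[ len( prefix ) : ], op )
        
    for op in ( '\u2248', '=', '<', '>', '\u2260' ):
        
        if string.startswith( op ):
            
            return ( string[ len( op ) : ], op )
        
    raise ValueError( 'Invalid operator, expected a relational operator' )

print( parse_relational_operator( '!=640' ) ) # ('640', '≠')
print( parse_relational_operator( "isn't 1080" ) ) # (' 1080', '≠')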

View File

@ -386,8 +386,8 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '<', 'delta', ( 1, 1, 1, 1, ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '<', 'delta', ( 0, 0, 0, 0, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '\u2248', 'delta', ( 1, 1, 1, 1, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '\u2248', 'delta', ( 0, 0, 0, 0, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( CC.UNICODE_ALMOST_EQUAL_TO, 'delta', ( 1, 1, 1, 1, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( CC.UNICODE_ALMOST_EQUAL_TO, 'delta', ( 0, 0, 0, 0, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '>', 'delta', ( 1, 1, 1, 1, ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '>', 'delta', ( 0, 0, 0, 0, ) ), 1 ) )
@ -395,8 +395,8 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '<', 100, ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '<', 0, ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '\u2248', 100, ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '\u2248', 0, ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( CC.UNICODE_ALMOST_EQUAL_TO, 100, ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( CC.UNICODE_ALMOST_EQUAL_TO, 0, ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '=', 100, ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '=', 0, ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '>', 100, ), 0 ) )
@ -418,9 +418,9 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '<', 201 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '<', 200 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '<', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '\u2248', 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '\u2248', 60 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '\u2248', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( CC.UNICODE_ALMOST_EQUAL_TO, 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( CC.UNICODE_ALMOST_EQUAL_TO, 60 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( CC.UNICODE_ALMOST_EQUAL_TO, 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '=', 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '=', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HEIGHT, ( '>', 200 ), 0 ) )
@ -460,8 +460,8 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '<', 1 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '<', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '\u2248', 0 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '\u2248', 1 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( CC.UNICODE_ALMOST_EQUAL_TO, 0 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( CC.UNICODE_ALMOST_EQUAL_TO, 1 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '=', 0 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '=', 1 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ( '>', 0 ), 0 ) )
@ -469,9 +469,9 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( '=', 1, 1 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( '=', 4, 3 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( '\u2248', 1, 1 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( '\u2248', 200, 201 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( '\u2248', 4, 1 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( CC.UNICODE_ALMOST_EQUAL_TO, 1, 1 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( CC.UNICODE_ALMOST_EQUAL_TO, 200, 201 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( CC.UNICODE_ALMOST_EQUAL_TO, 4, 1 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIMILAR_TO, ( ( hash, ), 5 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIMILAR_TO, ( ( bytes.fromhex( '0123456789abcdef' * 4 ), ), 5 ), 0 ) )
@ -481,8 +481,8 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '<', 5271, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '=', 5270, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '=', 0, HydrusData.ConvertUnitToInt( 'B' ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '\u2248', 5270, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '\u2248', 0, HydrusData.ConvertUnitToInt( 'B' ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( CC.UNICODE_ALMOST_EQUAL_TO, 5270, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( CC.UNICODE_ALMOST_EQUAL_TO, 0, HydrusData.ConvertUnitToInt( 'B' ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '>', 5270, HydrusData.ConvertUnitToInt( 'B' ) ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '>', 5269, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ( '>', 0, HydrusData.ConvertUnitToInt( 'B' ) ), 1 ) )
@ -493,9 +493,9 @@ class TestClientDB( unittest.TestCase ):
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '<', 201 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '<', 200 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '<', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '\u2248', 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '\u2248', 60 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '\u2248', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( CC.UNICODE_ALMOST_EQUAL_TO, 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( CC.UNICODE_ALMOST_EQUAL_TO, 60 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( CC.UNICODE_ALMOST_EQUAL_TO, 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '=', 200 ), 1 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '=', 0 ), 0 ) )
tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_WIDTH, ( '>', 200 ), 0 ) )

View File

@ -1608,7 +1608,7 @@ class TestTagObjects( unittest.TestCase ):
self.assertEqual( p.GetNamespace(), 'system' )
self.assertEqual( p.GetTextsAndNamespaces( render_for_user ), [ ( p.ToString(), p.GetNamespace() ) ] )
p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( '\u2248', 'delta', ( 1, 2, 3, 4 ) ) )
p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ( CC.UNICODE_ALMOST_EQUAL_TO, 'delta', ( 1, 2, 3, 4 ) ) )
self.assertEqual( p.ToString(), 'system:time imported: around 1 year 2 months ago' )
self.assertEqual( p.GetNamespace(), 'system' )

View File

@ -444,7 +444,7 @@ class TestBandwidthTracker( unittest.TestCase ):
bandwidth_tracker.ReportDataUsed( 1024 )
bandwidth_tracker.ReportRequestUsed()
self.assertEqual( bandwidth_tracker.GetCurrentMonthSummary(), 'used 1.0KB in 1 requests this month' )
self.assertEqual( bandwidth_tracker.GetCurrentMonthSummary(), 'used 1KB in 1 requests this month' )
self.assertEqual( bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_DATA, 0 ), 0 )
self.assertEqual( bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_REQUESTS, 0 ), 0 )
@ -497,7 +497,7 @@ class TestBandwidthTracker( unittest.TestCase ):
bandwidth_tracker.ReportDataUsed( 32 )
bandwidth_tracker.ReportRequestUsed()
self.assertEqual( bandwidth_tracker.GetCurrentMonthSummary(), 'used 1.1KB in 3 requests this month' )
self.assertEqual( bandwidth_tracker.GetCurrentMonthSummary(), 'used 1.06KB in 3 requests this month' )
self.assertEqual( bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_DATA, 0 ), 0 )
self.assertEqual( bandwidth_tracker.GetUsage( HC.BANDWIDTH_TYPE_REQUESTS, 0 ), 0 )