parent
147efa5a84
commit
58ac41357b
@@ -8,6 +8,51 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_465"><a href="#version_465">version 465</a></h3></li>
<ul>
<li>misc:</li>
<li>fixed a recent bug in wildcard search where 't*g' unnamespaced wildcards were not returning namespace results</li>
<li>sped up multi-predicate wildcard searches in some unusual file domains</li>
<li>the file maintenance manager is now more polite about how it works. no matter its speed settings, it now pauses regularly to check on and wait until the main UI is feeling good to start new work. this should relieve heavier clients on older machines who will get a bunch of new work to do this week</li>
<li>'all local files' in _review services_ now states how many files are awaiting physical deletion in the new deferred delete queue. this live updates when the values change but should be 0 pretty much all the time</li>
<li>'all known files' in _review services_ also gets a second safety yes/no dialog on its clear deleted files record button</li>
<li>updated the gelbooru 0.2.x gallery page parser, which at some point had been pulling 'delete from favourites' links when running the login-required 'gelbooru favorites by user id' downloader!!! it was deleting favourites, which I presume and hope was a recent change in gelbooru's html. in any case, the parser now skips over any deletion url (issue #1023)</li>
<li>fixed a bad index to label conversion in a common database progress method. it was commonly saying 22/21 progress instead of 21/21</li>
<li>fixed an error when manage tags dialog posts tags from the autocomplete during dialog shutdown</li>
<li>fixed a layout issue with the new presentation import options where the dropdowns could grow a little tall and make a sub-panel scrollbar</li>
<li>added handling for an error raised on loading an image with what seems to be a borked ICC profile</li>
<li>increased the default per-db-file cache size from 200MB up to 256MB</li>
<li>.</li>
<li>some new options:</li>
<li>the default tag service in the manage tags dialog (and some similar 'tag services in a row of tabs' dialogs) is reset this week. from now on, the last used service is remembered for the next time the dialog is opened. let's see how that works out. if you don't like it, you can go back to the old fixed default setting under the 'tags' options panel</li>
<li>added a checkbox to the 'search' options panel that controls whether new search pages are in 'searching immediately' or 'searching paused' state (issue #761)</li>
<li>moved default tag sort from 'tags' options panel to 'sort/collect'</li>
<li>.</li>
<li>deleted files and ipfs searchability:</li>
<li>wrote a new virtual file service to hold all previously deleted files of all real file services. this provides a mapping cache and tag lookup cache allowing for fast search of any deleted file domain in the future</li>
<li>ipfs services also now have mapping caches and tag search caches</li>
<li>ipfs services are now searchable in the main file search view! just select them from the autocomplete dropdown file domain button. they have tag counts and everything</li>
<li>it will take some time to populate the new ipfs and deleted files caches. if you don't have much deleted files history and no ipfs, it will be a few seconds. if you have a massive client with many deleted/ipfs files and many tags, it could be twenty minutes or more</li>
<li>.</li>
<li>'has icc profile' now cached in database:</li>
<li>the client database now keeps track of which image files have an icc profile. this data is added on file import</li>
<li>a new file maintenance task can generate it retroactively, and if a file is discovered to have an icc profile, it will be scheduled for a thumbnail regeneration too</li>
<li>a new system predicate, 'system:has icc profile', can now search this data. this system pred is weird, so I expect in future it will get tucked into an umbrella system pred for advanced/rare stuff</li>
<li>on update, all your existing image files are scheduled for the maintenance task. your 'has icc profile' will populate over time, and thumbnails will correct themselves</li>
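For context on what the new cache records: an image 'has an icc profile' when an ICC colour profile is embedded in the file. The client detects this by opening the image with PIL (see `_HasICCProfile` in the diff), not by parsing containers, but as a rough stdlib-only illustration of what the flag means, here is a sketch for PNG specifically, where the profile lives in an `iCCP` chunk. All names here are illustrative, not hydrus code:

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def png_has_icc_profile(data: bytes) -> bool:
    # Walk the PNG chunk list looking for an iCCP chunk, which is where
    # PNG stores an embedded ICC colour profile.
    if not data.startswith(PNG_SIGNATURE):
        return False
    offset = len(PNG_SIGNATURE)
    while offset + 8 <= len(data):
        (length,) = struct.unpack('>I', data[offset:offset + 4])
        chunk_type = data[offset + 4:offset + 8]
        if chunk_type == b'iCCP':
            return True
        if chunk_type == b'IEND':
            break
        offset += 12 + length  # 4 length + 4 type + data + 4 crc
    return False

def _chunk(chunk_type: bytes, payload: bytes) -> bytes:
    # Build a chunk with a dummy crc, enough for this sketch.
    return struct.pack('>I', len(payload)) + chunk_type + payload + b'\x00' * 4

with_icc = PNG_SIGNATURE + _chunk(b'IHDR', b'\x00' * 13) + _chunk(b'iCCP', b'p') + _chunk(b'IEND', b'')
without_icc = PNG_SIGNATURE + _chunk(b'IHDR', b'\x00' * 13) + _chunk(b'IEND', b'')
```

The real maintenance job returns True/False (or None on load failure) as the job's `additional_data`, and the database module stores it.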
<li>.</li>
<li>pixel hash now cached in database:</li>
<li>the client database now keeps track of image 'pixel hashes', which are fast unique identifiers that aggregate all that image's pixels. if two images have the same pixel hash, they are pixel duplicates. this data is added on file import</li>
<li>a new file maintenance task can generate it retroactively</li>
<li>on update, all your existing image files are scheduled for the maintenance task. it'll work lightly in the background in prep for future duplicate file system improvements</li>
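The pixel hash idea above amounts to hashing the decoded pixel buffer rather than the file bytes, so two files that decode to identical pixels collide regardless of container or compression. A minimal sketch, assuming already-decoded RGBA bytes (hydrus's actual implementation is `HydrusImageHandling.GetImagePixelHash`, which decodes the file itself):

```python
import hashlib

def pixel_hash(width: int, height: int, rgba_bytes: bytes) -> bytes:
    # Hash dimensions plus the raw pixel data so two files with identical
    # pixels produce the same digest deliberately.
    h = hashlib.sha256()
    h.update(width.to_bytes(4, 'big'))
    h.update(height.to_bytes(4, 'big'))
    h.update(rgba_bytes)
    return h.digest()

a = pixel_hash(2, 2, bytes(16))           # 2x2 fully transparent black
b = pixel_hash(2, 2, bytes(16))           # same pixels -> same hash
c = pixel_hash(2, 2, bytes([255] * 16))   # different pixels -> different hash
```

Note the duration check in `_RegenPixelHash` below: animated files are skipped, so this only applies to still images.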
<li>.</li>
<li>boring search code cleanup:</li>
<li>fixed a bug where the tag search cache could lose sibling/parent-chained values when their count was zeroed, even though they should always exist in a domain's lookup</li>
<li>fixed up some repository reset code that was regenerating surplus tag search data</li>
<li>with the new deleted files domain, simplified the new complex domain search pattern</li>
<li>converted basic tag search code to support multiple location domains</li>
<li>cleaned up some search cache and general table creation code to handle legacy orphan tables without error</li>
<li>misc tag and mapping cache code and combined local files code refactoring and cleanup</li>
</ul>
<li><h3 id="version_464"><a href="#version_464">version 464</a></h3></li>
<ul>
<li>image icc:</li>

@@ -1180,6 +1180,8 @@
<li>system:is not the best quality file of its duplicate group</li>
<li>system:has audio</li>
<li>system:no audio</li>
<li>system:has icc profile</li>
<li>system:no icc profile</li>
<li>system:has tags</li>
<li>system:no tags</li>
<li>system:untagged</li>

@@ -49,7 +49,7 @@
</li>
<li>
<b>--db_cache_size DB_CACHE_SIZE</b>
<p>Change the size of the cache SQLite will use for each db file, in MB. By default this is 200, for 200MB, which for the four main client db files could mean 800MB peak use if you run a very heavy client and perform a long period of PTR sync. This does not matter so much (nor should it be fully used) if you have a smaller client.</p>
<p>Change the size of the cache SQLite will use for each db file, in MB. By default this is 256, for 256MB, which for the four main client db files could mean an absolute 1GB peak use if you run a very heavy client and perform a long period of PTR sync. This does not matter so much (nor should it be fully used) if you have a smaller client.</p>
</li>
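For reference, the 256MB default corresponds to SQLite's `PRAGMA cache_size` with a negative value, which SQLite interprets as a size in KiB rather than a page count. A quick sketch of what the client would issue per database file (the exact mechanism in hydrus may differ):

```python
import sqlite3

db = sqlite3.connect(':memory:')

# A negative cache_size means "this many KiB", so 256MB is
# 256 * 1024 = 262144 KiB.
db.execute('PRAGMA cache_size = -262144;')

(cache_size,) = db.execute('PRAGMA cache_size;').fetchone()

db.close()
```

This is a per-connection, per-database-file setting, which is why four main db files can multiply the peak usage.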
<li>
<b>--db_synchronous_override {0,1,2,3}</b>

@@ -558,6 +558,8 @@ COMBINED_LOCAL_FILE_SERVICE_KEY = b'all local files'

COMBINED_FILE_SERVICE_KEY = b'all known files'

COMBINED_DELETED_FILE_SERVICE_KEY = b'all deleted files'

COMBINED_TAG_SERVICE_KEY = b'all known tags'

TEST_SERVICE_KEY = b'test service'

@@ -65,8 +65,6 @@ def GetClientDefaultOptions():

options[ 'confirm_client_exit' ] = False

options[ 'default_tag_repository' ] = CC.DEFAULT_LOCAL_TAG_SERVICE_KEY

options[ 'pause_export_folders_sync' ] = False
options[ 'pause_import_folders_sync' ] = False
options[ 'pause_repo_sync' ] = False

@@ -39,6 +39,8 @@ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL = 12
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE = 13
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_URL_THEN_RECORD = 14
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL_THEN_RECORD = 15
REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE = 16
REGENERATE_FILE_DATA_JOB_PIXEL_HASH = 17

regen_file_enum_to_str_lookup = {}

@@ -58,6 +60,8 @@ regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS ] = 'fix
regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP ] = 'check for membership in the similar files search system'
regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA ] = 'regenerate similar files metadata'
regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP ] = 'regenerate file modified date'
regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE ] = 'determine if the file has an icc profile'
regen_file_enum_to_str_lookup[ REGENERATE_FILE_DATA_JOB_PIXEL_HASH ] = 'calculate file pixel hash'

regen_file_enum_to_description_lookup = {}

@@ -77,6 +81,8 @@ regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS
regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP ] = 'This checks to see if files should be in the similar files system, and if they are falsely in or falsely out, it will remove their record or queue them up for a search as appropriate. It is useful to repair database damage.'
regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA ] = 'This forces a regeneration of the file\'s similar-files \'phashes\'. It is not useful unless you know there is missing data to repair.'
regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP ] = 'This rechecks the file\'s modified timestamp and saves it to the database.'
regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE ] = 'This loads the file to see if it has an ICC profile, which is used in "system:has icc profile" search.'
regen_file_enum_to_description_lookup[ REGENERATE_FILE_DATA_JOB_PIXEL_HASH ] = 'This generates a fast unique identifier for the pixels in a still image, which is used in duplicate pixel searches.'

NORMALISED_BIG_JOB_WEIGHT = 100

@@ -98,6 +104,8 @@ regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS ]
regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP ] = 50
regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA ] = 100
regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP ] = 10
regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE ] = 100
regen_file_enum_to_job_weight_lookup[ REGENERATE_FILE_DATA_JOB_PIXEL_HASH ] = 100

regen_file_enum_to_overruled_jobs = {}

@@ -117,8 +125,10 @@ regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS ] =
regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP ] = []
regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA ] = [ REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP ]
regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP ] = []
regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE ] = []
regen_file_enum_to_overruled_jobs[ REGENERATE_FILE_DATA_JOB_PIXEL_HASH ] = []

ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_URL_THEN_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL_THEN_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE, REGENERATE_FILE_DATA_JOB_FILE_METADATA, REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL, REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL, REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP, REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS, REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP, REGENERATE_FILE_DATA_JOB_OTHER_HASHES, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES ]
ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_URL_THEN_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL_THEN_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE, REGENERATE_FILE_DATA_JOB_FILE_METADATA, REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL, REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL, REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP, REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS, REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP, REGENERATE_FILE_DATA_JOB_OTHER_HASHES, REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE, REGENERATE_FILE_DATA_JOB_PIXEL_HASH, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES ]

def GetAllFilePaths( raw_paths, do_human_sort = True ):

@@ -1050,12 +1060,19 @@ class ClientFilesManager( object ):

self._controller.WriteSynchronous( 'clear_deferred_physical_delete', file_hash = file_hash, thumbnail_hash = thumbnail_hash )

if num_files_deleted % 10 == 0 or num_thumbnails_deleted % 10 == 0:

self._controller.pub( 'notify_new_physical_file_delete_numbers' )

pauser.Pause()

if num_files_deleted > 0 or num_thumbnails_deleted > 0:

self._controller.pub( 'notify_new_physical_file_delete_numbers' )
HydrusData.Print( 'Physically deleted {} files and {} thumbnails from file storage.'.format( HydrusData.ToHumanInt( num_files_deleted ), HydrusData.ToHumanInt( num_thumbnails_deleted ) ) )
@@ -1357,6 +1374,8 @@ class FilesMaintenanceManager( object ):

def _AbleToDoBackgroundMaintenance( self ):

HG.client_controller.WaitUntilViewFree()

if HG.client_controller.CurrentlyIdle():

if not self._controller.new_options.GetBoolean( 'file_maintenance_during_idle' ):

@@ -1598,6 +1617,41 @@ class FilesMaintenanceManager( object ):

def _HasICCProfile( self, media_result ):

hash = media_result.GetHash()
mime = media_result.GetMime()

if mime not in HC.FILES_THAT_CAN_HAVE_ICC_PROFILE:

return False

try:

path = self._controller.client_files_manager.GetFilePath( hash, mime )

try:

pil_image = HydrusImageHandling.RawOpenPILImage( path )

except:

return None

has_icc_profile = HydrusImageHandling.HasICCProfile( pil_image )

additional_data = has_icc_profile

return additional_data

except HydrusExceptions.FileMissingException:

return None

def _RegenFileMetadata( self, media_result ):

hash = media_result.GetHash()
@@ -1687,32 +1741,6 @@ class FilesMaintenanceManager( object ):

def _RegenSimilarFilesMetadata( self, media_result ):

hash = media_result.GetHash()
mime = media_result.GetMime()

if mime not in HC.MIMES_WE_CAN_PHASH:

self._controller.WriteSynchronous( 'file_maintenance_add_jobs_hashes', { hash }, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP )

return None

try:

path = self._controller.client_files_manager.GetFilePath( hash, mime )

except HydrusExceptions.FileMissingException:

return None

phashes = ClientImageHandling.GenerateShapePerceptualHashes( path, mime )

return phashes

def _RegenFileThumbnailForce( self, media_result ):

mime = media_result.GetMime()
@@ -1753,6 +1781,72 @@ class FilesMaintenanceManager( object ):

def _RegenPixelHash( self, media_result ):

hash = media_result.GetHash()
mime = media_result.GetMime()

if mime not in HC.FILES_THAT_CAN_HAVE_PIXEL_HASH:

return None

duration = media_result.GetDuration()

if duration is not None:

return None

try:

path = self._controller.client_files_manager.GetFilePath( hash, mime )

try:

pixel_hash = HydrusImageHandling.GetImagePixelHash( path, mime )

except:

return None

additional_data = pixel_hash

return additional_data

except HydrusExceptions.FileMissingException:

return None

def _RegenSimilarFilesMetadata( self, media_result ):

hash = media_result.GetHash()
mime = media_result.GetMime()

if mime not in HC.MIMES_WE_CAN_PHASH:

self._controller.WriteSynchronous( 'file_maintenance_add_jobs_hashes', { hash }, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP )

return None

try:

path = self._controller.client_files_manager.GetFilePath( hash, mime )

except HydrusExceptions.FileMissingException:

return None

phashes = ClientImageHandling.GenerateShapePerceptualHashes( path, mime )

return phashes

def _ReInitialiseWorkRules( self ):

file_maintenance_idle_throttle_files = self._controller.new_options.GetInteger( 'file_maintenance_idle_throttle_files' )

@@ -1819,6 +1913,14 @@ class FilesMaintenanceManager( object ):

additional_data = self._RegenFileOtherHashes( media_result )

elif job_type == REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE:

additional_data = self._HasICCProfile( media_result )

elif job_type == REGENERATE_FILE_DATA_JOB_PIXEL_HASH:

additional_data = self._RegenPixelHash( media_result )

elif job_type == REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL:

self._RegenFileThumbnailForce( media_result )
@@ -211,6 +211,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
self._dictionary[ 'booleans' ][ 'watch_clipboard_for_watcher_urls' ] = False
self._dictionary[ 'booleans' ][ 'watch_clipboard_for_other_recognised_urls' ] = False

self._dictionary[ 'booleans' ][ 'default_search_synchronised' ] = True
self._dictionary[ 'booleans' ][ 'autocomplete_float_main_gui' ] = True
self._dictionary[ 'booleans' ][ 'autocomplete_float_frames' ] = False

@@ -243,6 +244,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

self._dictionary[ 'booleans' ][ 'do_macos_debug_dialog_menus' ] = True

self._dictionary[ 'booleans' ][ 'save_default_tag_service_tab_on_change' ] = True

#

self._dictionary[ 'colours' ] = HydrusSerialisable.SerialisableDictionary()

@@ -405,6 +408,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

self._dictionary[ 'keys' ] = {}

self._dictionary[ 'keys' ][ 'default_tag_service_tab' ] = CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex()
self._dictionary[ 'keys' ][ 'default_tag_service_search_page' ] = CC.COMBINED_TAG_SERVICE_KEY.hex()
self._dictionary[ 'keys' ][ 'default_gug_key' ] = HydrusData.GenerateKey().hex()

@@ -61,6 +61,7 @@ PREDICATE_TYPE_SYSTEM_NUM_FRAMES = 37
PREDICATE_TYPE_SYSTEM_NUM_NOTES = 38
PREDICATE_TYPE_SYSTEM_NOTES = 39
PREDICATE_TYPE_SYSTEM_HAS_NOTE_NAME = 40
PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE = 41

SYSTEM_PREDICATE_TYPES = {
PREDICATE_TYPE_SYSTEM_EVERYTHING,

@@ -80,6 +81,7 @@ SYSTEM_PREDICATE_TYPES = {
PREDICATE_TYPE_SYSTEM_FRAMERATE,
PREDICATE_TYPE_SYSTEM_NUM_FRAMES,
PREDICATE_TYPE_SYSTEM_HAS_AUDIO,
PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE,
PREDICATE_TYPE_SYSTEM_MIME,
PREDICATE_TYPE_SYSTEM_RATING,
PREDICATE_TYPE_SYSTEM_SIMILAR_TO,
@@ -342,6 +344,13 @@ class FileSystemPredicates( object ):
self._common_info[ 'has_audio' ] = has_audio

if predicate_type == PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE:

has_icc_profile = value

self._common_info[ 'has_icc_profile' ] = has_icc_profile

if predicate_type == PREDICATE_TYPE_SYSTEM_HASH:

( hashes, hash_type ) = value

@@ -1788,7 +1797,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):

return Predicate( self._predicate_type, self._value, not self._inclusive )

elif self._predicate_type == PREDICATE_TYPE_SYSTEM_HAS_AUDIO:
elif self._predicate_type in ( PREDICATE_TYPE_SYSTEM_HAS_AUDIO, PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE ):

return Predicate( self._predicate_type, not self._value )

@@ -2323,6 +2332,20 @@ class Predicate( HydrusSerialisable.SerialisableBase ):

elif self._predicate_type == PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE:

base = 'has icc profile'

if self._value is not None:

has_icc_profile = self._value

if not has_icc_profile:

base = 'no icc profile'

elif self._predicate_type == PREDICATE_TYPE_SYSTEM_HASH:

base = 'hash'

@@ -114,6 +114,8 @@ pred_generators = {
SystemPredicateParser.Predicate.NOT_BEST_QUALITY_OF_GROUP : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_RELATIONSHIPS_KING, False ),
SystemPredicateParser.Predicate.HAS_AUDIO : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, True ),
SystemPredicateParser.Predicate.NO_AUDIO : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, False ),
SystemPredicateParser.Predicate.HAS_ICC_PROFILE : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, True ),
SystemPredicateParser.Predicate.NO_ICC_PROFILE : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, False ),
SystemPredicateParser.Predicate.LIMIT : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_LIMIT, v ),
SystemPredicateParser.Predicate.FILETYPE : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_MIME, tuple( v ) ),
SystemPredicateParser.Predicate.HAS_DURATION : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ( '>', 0 ) ),
File diff suppressed because it is too large
@@ -96,6 +96,30 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

self.modules_hashes.SetExtraHashes( hash_id, md5, sha1, sha512 )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE:

previous_has_icc_profile = self.modules_files_metadata_basic.GetHasICCProfile( hash_id )

has_icc_profile = additional_data

if previous_has_icc_profile != has_icc_profile:

self.modules_files_metadata_basic.SetHasICCProfile( hash_id, has_icc_profile )

if has_icc_profile: # we have switched from off to on

self.modules_files_maintenance_queue.AddJobs( { hash_id }, ClientFiles.REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_PIXEL_HASH:

pixel_hash = additional_data

pixel_hash_id = self.modules_hashes.GetHashId( pixel_hash )

self.modules_files_metadata_basic.SetPixelHash( hash_id, pixel_hash_id )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP:

file_modified_timestamp = additional_data

@@ -32,14 +32,20 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
( [ 'num_frames' ], False, 400 )
]

index_generation_dict[ 'main.pixel_hash_map' ] = [
( [ 'pixel_hash_id' ], False, 465 )
]

return index_generation_dict

def _GetInitialTableGenerationDict( self ) -> dict:

return {
'main.file_inbox' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY );', 400 ),
'main.files_info' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );', 400 )
'main.file_inbox' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 400 ),
'main.files_info' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );', 400 ),
'main.has_icc_profile' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 465 ),
'main.pixel_hash_map' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, pixel_hash_id INTEGER, PRIMARY KEY ( hash_id, pixel_hash_id ) );', 465 )
}

@@ -85,6 +91,11 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
return archiveable_hash_ids

def ClearPixelHash( self, hash_id: int ):

self._Execute( 'DELETE FROM pixel_hash_map WHERE hash_id = ?;', ( hash_id, ) )

def GetMime( self, hash_id: int ) -> int:

result = self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

@@ -134,7 +145,13 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
if content_type == HC.CONTENT_TYPE_HASH:

return [ ( 'files_info', 'hash_id' ) ]
return [
( 'file_inbox', 'hash_id' ),
( 'files_info', 'hash_id' ),
( 'has_icc_profile', 'hash_id' ),
( 'pixel_hash_map', 'hash_id' ),
( 'pixel_hash_map', 'pixel_hash_id' )
]

return []

@@ -166,6 +183,23 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
return total_size

def GetHasICCProfile( self, hash_id: int ):

result = self._Execute( 'SELECT hash_id FROM has_icc_profile WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
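`GetHasICCProfile` and the `SetHasICCProfile` added further down use a one-column "presence" table: a row's existence is the boolean, so setting True is an idempotent `INSERT OR IGNORE` and setting False is a `DELETE`. A standalone sketch of the same pattern (illustrative names, not hydrus's module plumbing):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE has_icc_profile ( hash_id INTEGER PRIMARY KEY );')

def set_has_icc_profile(hash_id: int, has_icc_profile: bool) -> None:
    # Row present == True; INSERT OR IGNORE keeps repeated calls idempotent.
    if has_icc_profile:
        db.execute('INSERT OR IGNORE INTO has_icc_profile ( hash_id ) VALUES ( ? );', (hash_id,))
    else:
        db.execute('DELETE FROM has_icc_profile WHERE hash_id = ?;', (hash_id,))

def get_has_icc_profile(hash_id: int) -> bool:
    result = db.execute('SELECT hash_id FROM has_icc_profile WHERE hash_id = ?;', (hash_id,)).fetchone()
    return result is not None

set_has_icc_profile(5, True)
set_has_icc_profile(5, True)   # no-op, still one row
set_has_icc_profile(7, False)  # no-op, row never existed
```

This shape also makes the negative search ('system:no icc profile') a simple anti-join against the same table.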

def GetHasICCProfileHashIds( self, hash_ids: typing.Collection[ int ] ) -> typing.Set[ int ]:

with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:

has_icc_profile_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_icc_profile USING ( hash_id );'.format( temp_hash_ids_table_name ) ) )
return has_icc_profile_hash_ids
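`GetHasICCProfileHashIds` filters a candidate set by loading it into a temporary integer table and `CROSS JOIN`ing, so SQLite scans the small temp table and probes the indexed main table rather than the reverse. A self-contained sketch of that pattern (hydrus wraps the temp table in `_MakeTemporaryIntegerTable`; the names below are illustrative):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE has_icc_profile ( hash_id INTEGER PRIMARY KEY );')
db.executemany('INSERT INTO has_icc_profile ( hash_id ) VALUES ( ? );', [(2,), (3,), (5,)])

def filter_has_icc_profile(hash_ids):
    # Load candidate ids into a temp table, then CROSS JOIN: SQLite keeps
    # the join order as written, scanning the small temp table and probing
    # the indexed main table for each id.
    db.execute('CREATE TEMPORARY TABLE temp_hash_ids ( hash_id INTEGER PRIMARY KEY );')
    try:
        db.executemany('INSERT INTO temp_hash_ids ( hash_id ) VALUES ( ? );', ((h,) for h in hash_ids))
        rows = db.execute('SELECT hash_id FROM temp_hash_ids CROSS JOIN has_icc_profile USING ( hash_id );').fetchall()
        return { hash_id for (hash_id,) in rows }
    finally:
        db.execute('DROP TABLE temp_hash_ids;')

result = filter_has_icc_profile([1, 2, 3, 4])
```

In SQLite, `CROSS JOIN` is a hint that pins the join order, which is the point of the idiom here.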
|
||||
|
||||
|
||||
def InboxFiles( self, hash_ids: typing.Collection[ int ] ) -> typing.Set[ int ]:
|
||||
|
||||
if not isinstance( hash_ids, set ):
|
||||
|
@ -185,3 +219,22 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):
|
|||
return inboxable_hash_ids
|
||||
|
||||
|
||||
def SetHasICCProfile( self, hash_id: int, has_icc_profile: bool ):
|
||||
|
||||
if has_icc_profile:
|
||||
|
||||
self._Execute( 'INSERT OR IGNORE INTO has_icc_profile ( hash_id ) VALUES ( ? );', ( hash_id, ) )
|
||||
|
||||
else:
|
||||
|
||||
self._Execute( 'DELETE FROM has_icc_profile WHERE hash_id = ?;', ( hash_id, ) )
|
||||
|
||||
|
||||
|
||||
def SetPixelHash( self, hash_id: int, pixel_hash_id: int ):
|
||||
|
||||
self.ClearPixelHash( hash_id )
|
||||
|
||||
self._Execute( 'INSERT INTO pixel_hash_map ( hash_id, pixel_hash_id ) VALUES ( ?, ? );', ( hash_id, pixel_hash_id ) )
|
||||
|
||||
|
||||
|
|
|
@ -104,8 +104,8 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
|
|||
|
||||
return {
|
||||
'main.local_file_deletion_reasons' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );', 400 ),
|
||||
'deferred_physical_file_deletes' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY );', 464 ),
|
||||
'deferred_physical_thumbnail_deletes' : ( 'CREATE TABLE {} ( hash_id INTEGER PRIMARY KEY );', 464 )
|
||||
'deferred_physical_file_deletes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 464 ),
|
||||
'deferred_physical_thumbnail_deletes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 464 )
|
||||
|
||||
}
|
||||
|
||||
|
@ -146,7 +146,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
|
|||
|
||||
def _GetServiceIdsWeGenerateDynamicTablesFor( self ):
|
||||
|
||||
return self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
|
||||
return self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES )
|
||||
|
||||
|
||||
def AddFiles( self, service_id, insert_rows ):
|
||||
|
@ -273,6 +273,20 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
|
|||
return service_ids_to_nums_cleared
|
||||
|
||||
|
||||
def ConvertLocationToCoveringFileServiceKeys( self, location_search_context: ClientSearch.LocationSearchContext ):
|
||||
|
||||
file_location_is_cross_referenced = not ( location_search_context.IsAllKnownFiles() or location_search_context.SearchesDeleted() )
|
||||
|
||||
file_service_keys = list( location_search_context.current_service_keys )
|
||||
|
||||
if location_search_context.SearchesDeleted():
|
||||
|
||||
file_service_keys.append( CC.COMBINED_DELETED_FILE_SERVICE_KEY )
|
||||
|
||||
|
||||
return ( file_service_keys, file_location_is_cross_referenced )
|
||||
|
||||
|
||||
def DeferFilesDeleteIfNowOrphan( self, hash_ids, definitely_no_thumbnails = False, ignore_service_id = None ):
|
||||
|
||||
orphan_hash_ids = self.FilterOrphanFileHashIds( hash_ids, ignore_service_id = ignore_service_id )
|
||||
|
@@ -282,6 +296,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 self._ExecuteMany( 'INSERT OR IGNORE INTO deferred_physical_file_deletes ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in orphan_hash_ids ) )
 
 self._cursor_transaction_wrapper.pub_after_job( 'notify_new_physical_file_deletes' )
+self._cursor_transaction_wrapper.pub_after_job( 'notify_new_physical_file_delete_numbers' )
 
 if not definitely_no_thumbnails:

@@ -293,6 +308,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 self._ExecuteMany( 'INSERT OR IGNORE INTO deferred_physical_thumbnail_deletes ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in orphan_hash_ids ) )
 
 self._cursor_transaction_wrapper.pub_after_job( 'notify_new_physical_file_deletes' )
+self._cursor_transaction_wrapper.pub_after_job( 'notify_new_physical_file_delete_numbers' )

@@ -335,7 +351,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 if just_these_service_ids is None:
     
-    service_ids = self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
+    service_ids = self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES )
     
 else:

@@ -363,7 +379,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 if just_these_service_ids is None:
     
-    service_ids = self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
+    service_ids = self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES )
     
 else:

@@ -710,6 +726,14 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 return ( file_result, thumbnail_result )
 
+def GetDeferredPhysicalDeleteCounts( self ):
+    
+    ( num_files, ) = self._Execute( 'SELECT COUNT( * ) FROM deferred_physical_file_deletes;' ).fetchone()
+    ( num_thumbnails, ) = self._Execute( 'SELECT COUNT( * ) FROM deferred_physical_thumbnail_deletes;' ).fetchone()
+    
+    return ( num_files, num_thumbnails )
+    
+
 def GetDeletedFilesCount( self, service_id: int ) -> int:
     
     deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )

@@ -825,7 +849,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 hash_ids_to_current_file_service_ids = collections.defaultdict( list )
 
-for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
+for service_id in self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ):
     
     current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )

@@ -845,7 +869,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 hash_ids_to_pending_file_service_ids = collections.defaultdict( list )
 hash_ids_to_petitioned_file_service_ids = collections.defaultdict( list )
 
-for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
+for service_id in self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ):
     
     ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )

@@ -878,6 +902,15 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 )
 
+def GetLocationSearchContextForAllServicesDeletedFiles( self ) -> ClientSearch.LocationSearchContext:
+    
+    deleted_service_keys = { service.GetServiceKey() for service in self.modules_services.GetServices( limited_types = HC.FILE_SERVICES_COVERED_BY_COMBINED_DELETED_FILE ) }
+    
+    location_search_context = ClientSearch.LocationSearchContext( current_service_keys = [], deleted_service_keys = deleted_service_keys )
+    
+    return location_search_context
+    
+
 def GetNumLocal( self, service_id: int ) -> int:
     
     current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )

@@ -916,7 +949,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 service_ids_to_counts = {}
 
-for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
+for service_id in self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ):
     
     current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )

@@ -962,7 +995,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
 
 if HC.CONTENT_TYPE_HASH:
     
-    for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
+    for service_id in self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ):
         
         ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )

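The deferred-delete hunks above follow one pattern: queue orphan hash ids into a deduplicating table with `INSERT OR IGNORE`, then report queue size with `COUNT( * )` so _review services_ can show how many files await physical deletion. A minimal standalone sketch of that pattern — the table names mirror the diff, but the in-memory database, the helper function names, and the sample hash ids are hypothetical:

```python
import sqlite3

# in-memory stand-in for the client db; table names follow the diff above
db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE IF NOT EXISTS deferred_physical_file_deletes ( hash_id INTEGER PRIMARY KEY );' )
db.execute( 'CREATE TABLE IF NOT EXISTS deferred_physical_thumbnail_deletes ( hash_id INTEGER PRIMARY KEY );' )

def defer_deletes( orphan_hash_ids ):
    
    # INSERT OR IGNORE makes re-queueing an already-queued hash a no-op
    db.executemany( 'INSERT OR IGNORE INTO deferred_physical_file_deletes ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in orphan_hash_ids ) )
    

def get_deferred_physical_delete_counts():
    
    ( num_files, ) = db.execute( 'SELECT COUNT( * ) FROM deferred_physical_file_deletes;' ).fetchone()
    ( num_thumbnails, ) = db.execute( 'SELECT COUNT( * ) FROM deferred_physical_thumbnail_deletes;' ).fetchone()
    
    return ( num_files, num_thumbnails )
    

defer_deletes( [ 1, 2, 3 ] )
defer_deletes( [ 2, 3, 4 ] ) # overlap with the first batch is ignored
```

The generator argument to `executemany` matches the diff's `( ( hash_id, ) for hash_id in orphan_hash_ids )` shape: rows are streamed rather than materialised as a list.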
@@ -24,9 +24,9 @@ class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
 def _GetInitialTableGenerationDict( self ) -> dict:
     
     return {
-        'main.last_shutdown_work_time' : ( 'CREATE TABLE {} ( last_shutdown_work_time INTEGER );', 400 ),
-        'main.analyze_timestamps' : ( 'CREATE TABLE {} ( name TEXT, num_rows INTEGER, timestamp INTEGER );', 400 ),
-        'main.vacuum_timestamps' : ( 'CREATE TABLE {} ( name TEXT, timestamp INTEGER );', 400 )
+        'main.last_shutdown_work_time' : ( 'CREATE TABLE IF NOT EXISTS {} ( last_shutdown_work_time INTEGER );', 400 ),
+        'main.analyze_timestamps' : ( 'CREATE TABLE IF NOT EXISTS {} ( name TEXT, num_rows INTEGER, timestamp INTEGER );', 400 ),
+        'main.vacuum_timestamps' : ( 'CREATE TABLE IF NOT EXISTS {} ( name TEXT, timestamp INTEGER );', 400 )
     }

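The change repeated across every `_GetInitialTableGenerationDict` in this commit is the same: `CREATE TABLE` becomes `CREATE TABLE IF NOT EXISTS`, so running table generation against a database that already has the tables is a no-op rather than an error. A standalone sketch of that pattern — the dict shape of `name : ( create template, version )` mirrors the diff, while the runner function and the in-memory database are hypothetical:

```python
import sqlite3

# table name -> ( create statement template, version introduced ), as in the diff
table_generation_dict = {
    'main.vacuum_timestamps' : ( 'CREATE TABLE IF NOT EXISTS {} ( name TEXT, timestamp INTEGER );', 400 )
}

def create_tables( db ):
    
    for ( table_name, ( create_template, version ) ) in table_generation_dict.items():
        
        # the {} placeholder receives the schema-qualified table name
        db.execute( create_template.format( table_name ) )
        

db = sqlite3.connect( ':memory:' ) # 'main' is sqlite's default schema name

create_tables( db )
create_tables( db ) # idempotent: IF NOT EXISTS makes the second run a no-op
```

Without `IF NOT EXISTS`, the second `create_tables` call would raise `sqlite3.OperationalError: table ... already exists`, which is exactly the repair-path failure mode this commit avoids.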
@@ -64,9 +64,10 @@ class ClientDBMappingsCounts( ClientDBModule.ClientDBModule ):
 
 table_name = self.GetCountsCacheTableName( tag_display_type, file_service_id, tag_service_id )
 
-version = 400 if tag_display_type == ClientTags.TAG_DISPLAY_STORAGE else 408
+# the version was earlier here but we updated when adding combined delete files and ipfs to these tables
+version = 465
 
-table_dict[ table_name ] = ( 'CREATE TABLE {} ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );', version )
+table_dict[ table_name ] = ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );', version )
 
 return table_dict

@@ -77,7 +78,7 @@ class ClientDBMappingsCounts( ClientDBModule.ClientDBModule ):
 
 table_dict = {}
 
-file_service_ids = list( self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES ) )
+file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ) )
 file_service_ids.append( self.modules_services.combined_file_service_id )
 
 for file_service_id in file_service_ids:

@@ -100,7 +101,7 @@ class ClientDBMappingsCounts( ClientDBModule.ClientDBModule ):
 
 def _RepairRepopulateTables( self, table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
     
-    file_service_ids = list( self.modules_services.GetServiceIds( HC.TAG_CACHE_SPECIFIC_FILE_SERVICES ) )
+    file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES ) )
     file_service_ids.append( self.modules_services.combined_file_service_id )
     
     tag_service_ids = list( self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) )

@@ -448,7 +449,7 @@ class ClientDBMappingsCounts( ClientDBModule.ClientDBModule ):
 counts_cache_table_name = self.GetCountsCacheTableName( tag_display_type, file_service_id, tag_service_id )
 
 return 'SELECT tag_id FROM {} WHERE current_count > 0'.format( counts_cache_table_name )
 
 def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:

@@ -165,11 +165,11 @@ class ClientDBSerialisable( ClientDBModule.ClientDBModule ):
 def _GetInitialTableGenerationDict( self ) -> dict:
     
     return {
-        'main.json_dict' : ( 'CREATE TABLE {} ( name TEXT PRIMARY KEY, dump BLOB_BYTES );', 400 ),
-        'main.json_dumps' : ( 'CREATE TABLE {} ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );', 400 ),
-        'main.json_dumps_named' : ( 'CREATE TABLE {} ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );', 400 ),
-        'main.json_dumps_hashed' : ( 'CREATE TABLE {} ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );', 442 ),
-        'main.yaml_dumps' : ( 'CREATE TABLE {} ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );', 400 )
+        'main.json_dict' : ( 'CREATE TABLE IF NOT EXISTS {} ( name TEXT PRIMARY KEY, dump BLOB_BYTES );', 400 ),
+        'main.json_dumps' : ( 'CREATE TABLE IF NOT EXISTS {} ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );', 400 ),
+        'main.json_dumps_named' : ( 'CREATE TABLE IF NOT EXISTS {} ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );', 400 ),
+        'main.json_dumps_hashed' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );', 442 ),
+        'main.yaml_dumps' : ( 'CREATE TABLE IF NOT EXISTS {} ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );', 400 )
     }

@@ -24,6 +24,7 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 self.trash_service_id = None
 self.combined_local_file_service_id = None
 self.combined_file_service_id = None
+self.combined_deleted_file_service_id = None
 self.combined_tag_service_id = None
 
 self._InitCaches()

@@ -39,7 +40,7 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 def _GetInitialTableGenerationDict( self ) -> dict:
     
     return {
-        'main.services' : ( 'CREATE TABLE {} ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );', 400 )
+        'main.services' : ( 'CREATE TABLE IF NOT EXISTS {} ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );', 400 )
     }

@@ -64,6 +65,18 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 self.trash_service_id = self.GetServiceId( CC.TRASH_SERVICE_KEY )
 self.combined_local_file_service_id = self.GetServiceId( CC.COMBINED_LOCAL_FILE_SERVICE_KEY )
 self.combined_file_service_id = self.GetServiceId( CC.COMBINED_FILE_SERVICE_KEY )
+
+try:
+    
+    self.combined_deleted_file_service_id = self.GetServiceId( CC.COMBINED_DELETED_FILE_SERVICE_KEY )
+    
+except HydrusExceptions.DataMissing:
+    
+    # version 465 it might not be in yet
+    
+    pass
+    
+
 self.combined_tag_service_id = self.GetServiceId( CC.COMBINED_TAG_SERVICE_KEY )

@@ -97,6 +110,10 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 
 self.combined_file_service_id = service_id
 
+elif service_key == CC.COMBINED_DELETED_FILE_SERVICE_KEY:
+    
+    self.combined_deleted_file_service_id = service_id
+    
 elif service_key == CC.COMBINED_TAG_SERVICE_KEY:
     
     self.combined_tag_service_id = service_id

@@ -126,7 +143,7 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 
 service_type = self.GetService( service_id ).GetServiceType()
 
-return service_type in ( HC.LOCAL_FILE_DOMAIN, HC.LOCAL_FILE_TRASH_DOMAIN )
+return service_type in HC.FILE_SERVICES_COVERED_BY_COMBINED_LOCAL_FILE
 
 def GetNonDupeName( self, name ) -> str:

@@ -188,18 +205,17 @@ class ClientDBMasterServices( ClientDBModule.ClientDBModule ):
 return False
 
-file_location_is_all_local = False
-
 service_ids = { self.GetServiceId( service_key ) for service_key in location_search_context.current_service_keys }
 
-service_types = { self.GetService( service_id ).GetServiceType() for service_id in service_ids }
-
-if False not in ( service_type in HC.LOCAL_FILE_SERVICES for service_type in service_types ):
+for service_id in service_ids:
     
-    file_location_is_all_local = True
+    if not self.FileServiceIsCoveredByAllLocalFiles( service_id ):
+        
+        return False
+        
     
 
-return file_location_is_all_local
+return True
 
 def UpdateService( self, service: ClientServices.Service ):

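The `@@ -188,18 +205,17 @@` hunk above replaces a flag-and-set check over collected service types with an early-return loop over a per-service predicate. The two shapes are logically equivalent; the new one simply stops at the first non-covered service. A standalone sketch under hypothetical data (the predicate, the service-id set, and both function names are stand-ins, not hydrus API):

```python
LOCAL_SERVICE_IDS = { 1, 2, 3 } # hypothetical stand-in for 'covered by all local files'

def service_is_covered( service_id ):
    
    return service_id in LOCAL_SERVICE_IDS
    

def all_covered_old_style( service_ids ):
    
    # old shape: build the full type set, then test the whole generator
    all_local = False
    
    if False not in ( service_is_covered( s ) for s in service_ids ):
        
        all_local = True
        
    
    return all_local
    

def all_covered_new_style( service_ids ):
    
    # new shape: bail out on the first miss
    for service_id in service_ids:
        
        if not service_is_covered( service_id ):
            
            return False
            
        
    
    return True
    
```

Both styles return True for an empty set, so the refactor does not change the edge case of a location context with no current services.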
@@ -81,15 +81,26 @@ class ClientDBTagDisplay( ClientDBModule.ClientDBModule ):
 # all parent definitions are sibling collapsed, so are terminus of their sibling chains
 # so get all of the parent chain, then get all chains that point to those
 
+ideal_tag_ids = self.modules_tag_siblings.GetIdeals( display_type, tag_service_id, tag_ids )
+
+parent_chain_members = self.modules_tag_parents.GetChainsMembers( display_type, tag_service_id, ideal_tag_ids )
+
+sibling_chain_members = self.modules_tag_siblings.GetChainsMembersFromIdeals( display_type, tag_service_id, parent_chain_members )
+
+return sibling_chain_members.union( parent_chain_members )
+
+# ok revisit this sometime I guess but it needs work
+'''
 with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
     
     with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name:
         
         self.modules_tag_siblings.GetIdealsIntoTable( display_type, tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name )
         
-        with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_parent_chain_members_table_name:
+        with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_parent_chain_members_table_name:
             
-            self.modules_tag_parents.GetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name )
+            # this is actually not implemented LMAO
+            self.modules_tag_parents.GetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name, 'ideal_tag_id' )
             
             with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_chain_members_table_name:

@@ -100,7 +111,7 @@ class ClientDBTagDisplay( ClientDBModule.ClientDBModule ):
 
+'''
 
 def GetImpliedBy( self, display_type, tag_service_id, tag_id ) -> typing.Set[ int ]:

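The new chain-members body above composes three lookups: collapse the input tags to their sibling ideals, expand those through the parent chains, then expand the parent-chain members back through the sibling chains and union the two sets. A pure-Python sketch of that composition over toy data — the mappings, the chain-walking strategy, and all function names here are hypothetical illustrations, not the hydrus modules:

```python
# toy sibling map: tag -> ideal tag (identity when a tag has no sibling)
SIBLING_IDEALS = { 'kitty' : 'cat', 'cat' : 'cat', 'dog' : 'dog', 'puppy' : 'dog', 'animal' : 'animal' }

# toy parent relation over ideal tags, child -> parents
PARENTS = { 'cat' : { 'animal' }, 'dog' : { 'animal' } }

def get_ideals( tags ):
    
    return { SIBLING_IDEALS.get( tag, tag ) for tag in tags }
    

def get_parent_chain_members( ideal_tags ):
    
    # walk the parent graph in both directions until the member set stops growing
    members = set( ideal_tags )
    
    while True:
        
        new_members = set( members )
        
        for ( child, parents ) in PARENTS.items():
            
            if child in new_members or parents & new_members:
                
                new_members.add( child )
                new_members.update( parents )
                
            
        
        if new_members == members:
            
            return members
            
        
        members = new_members
        

def get_sibling_chain_members_from_ideals( ideal_tags ):
    
    return { tag for ( tag, ideal ) in SIBLING_IDEALS.items() if ideal in ideal_tags }
    

def get_chains_members( tags ):
    
    # same three-step composition as the diff: ideals -> parent chain -> sibling chain, then union
    ideal_tag_ids = get_ideals( tags )
    parent_chain_members = get_parent_chain_members( ideal_tag_ids )
    sibling_chain_members = get_sibling_chain_members_from_ideals( parent_chain_members )
    
    return sibling_chain_members.union( parent_chain_members )
    
```

Starting from `'kitty'`, the ideal is `'cat'`, the parent chain pulls in `'animal'` and then `'dog'` (a fellow child of `'animal'`), and the sibling expansion adds `'puppy'` back in.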
@@ -76,9 +76,9 @@ class ClientDBTagParents( ClientDBModule.ClientDBModule ):
 def _GetInitialTableGenerationDict( self ) -> dict:
     
     return {
-        'main.tag_parents' : ( 'CREATE TABLE {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
-        'main.tag_parent_petitions' : ( 'CREATE TABLE {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
-        'main.tag_parent_application' : ( 'CREATE TABLE {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
+        'main.tag_parents' : ( 'CREATE TABLE IF NOT EXISTS {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
+        'main.tag_parent_petitions' : ( 'CREATE TABLE IF NOT EXISTS {} ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );', 414 ),
+        'main.tag_parent_application' : ( 'CREATE TABLE IF NOT EXISTS {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
     }

@@ -383,6 +383,8 @@ class ClientDBTagParents( ClientDBModule.ClientDBModule ):
 
 def GetChainsMembersTables( self, display_type: int, tag_service_id: int, ideal_tag_ids_table_name: str, results_table_name: str ):
     
+    raise NotImplementedError()
+    
     # if it isn't crazy, I should write this whole lad to be one or two recursive queries
     
     cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )

@@ -13,6 +13,8 @@ from hydrus.client import ClientSearch
 from hydrus.client.db import ClientDBMaster
 from hydrus.client.db import ClientDBModule
 from hydrus.client.db import ClientDBServices
+from hydrus.client.db import ClientDBTagDisplay
+from hydrus.client.metadata import ClientTags
 
 # Sqlite can handle -( 2 ** 63 ) -> ( 2 ** 63 ) - 1
 MIN_CACHED_INTEGER = - ( 2 ** 63 )

@@ -126,10 +128,11 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
 CAN_REPOPULATE_ALL_MISSING_DATA = True
 
-def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_tags: ClientDBMaster.ClientDBMasterTags ):
+def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_tags: ClientDBMaster.ClientDBMasterTags, modules_tag_display: ClientDBTagDisplay.ClientDBTagDisplay ):
     
     self.modules_services = modules_services
     self.modules_tags = modules_tags
+    self.modules_tag_display = modules_tag_display
     
     ClientDBModule.ClientDBModule.__init__( self, 'client tag search', cursor )

@@ -146,16 +149,16 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 index_generation_dict = {}
 
 index_generation_dict[ tags_table_name ] = [
-    ( [ 'namespace_id', 'subtag_id' ], True, 424 ),
-    ( [ 'subtag_id' ], False, 424 )
+    ( [ 'namespace_id', 'subtag_id' ], True, 465 ),
+    ( [ 'subtag_id' ], False, 465 )
 ]
 
 index_generation_dict[ subtags_searchable_map_table_name ] = [
-    ( [ 'searchable_subtag_id' ], False, 430 )
+    ( [ 'searchable_subtag_id' ], False, 465 )
 ]
 
 index_generation_dict[ integer_subtags_table_name ] = [
-    ( [ 'integer_subtag' ], False, 424 )
+    ( [ 'integer_subtag' ], False, 465 )
 ]
 
 return index_generation_dict

@@ -167,7 +170,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
 index_generation_dict = {}
 
-file_service_ids = list( self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES ) )
+file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ) )
 file_service_ids.append( self.modules_services.combined_file_service_id )
 
 for file_service_id in file_service_ids:

@@ -188,10 +191,10 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 integer_subtags_table_name = self.GetIntegerSubtagsTableName( file_service_id, tag_service_id )
 
 table_dict = {
-    tags_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );', 424 ),
-    subtags_fts4_table_name : ( 'CREATE VIRTUAL TABLE IF NOT EXISTS {} USING fts4( subtag );', 424 ),
-    subtags_searchable_map_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );', 430 ),
-    integer_subtags_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, integer_subtag INTEGER );', 424 )
+    tags_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );', 465 ),
+    subtags_fts4_table_name : ( 'CREATE VIRTUAL TABLE IF NOT EXISTS {} USING fts4( subtag );', 465 ),
+    subtags_searchable_map_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );', 465 ),
+    integer_subtags_table_name : ( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, integer_subtag INTEGER );', 465 )
 }
 
 return table_dict

@@ -203,7 +206,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
 table_dict = {}
 
-file_service_ids = list( self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES ) )
+file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ) )
 file_service_ids.append( self.modules_services.combined_file_service_id )
 
 for file_service_id in file_service_ids:

@@ -223,7 +226,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
 def _RepairRepopulateTables( self, table_names, cursor_transaction_wrapper: HydrusDBBase.DBCursorTransactionWrapper ):
     
-    file_service_ids = list( self.modules_services.GetServiceIds( HC.TAG_CACHE_SPECIFIC_FILE_SERVICES ) )
+    file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES ) )
     file_service_ids.append( self.modules_services.combined_file_service_id )
     
     tag_service_ids = list( self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) )

@@ -317,50 +320,6 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
-def ConvertLocationAndTagContextToCoveringCachePairs( self, location_search_context: ClientSearch.LocationSearchContext, tag_search_context: ClientSearch.TagSearchContext ):
-    
-    # a GREAT way to support fast deleted file search is to have a domain that covers all files ever deleted in all domains mate
-    # adding on trash or full delete and only removing files from it on clear delete record for all services
-    # then it just needs the cross-reference after search
-    # if I do this, then I think we'll be covering all specific file services again, even if not perfectly
-    # which means I can ditch the COMBINED_TAG_SERVICE_KEY gubbins here
-    # and therefore ditch this being 'TagContext'-CoveringPairs. tag context will always be the same!
-    
-    # furthermore, if we are moving to imperfect file caches, potentially we could go to just 'all local files' cache for current files, too
-    # the only real drawback would be when searching a tiny local file service, like trash
-    
-    tag_search_contexts = [ tag_search_context ]
-    
-    if location_search_context.SearchesCurrent() and not location_search_context.SearchesDeleted():
-        
-        file_location_is_cross_referenced = not location_search_context.IsAllKnownFiles()
-        
-        file_service_keys = list( location_search_context.current_service_keys )
-        
-    else:
-        
-        file_location_is_cross_referenced = False
-        
-        file_service_keys = [ CC.COMBINED_FILE_SERVICE_KEY ]
-        
-    
-    if tag_search_context.IsAllKnownTags() == CC.COMBINED_TAG_SERVICE_KEY:
-        
-        tag_search_contexts = []
-        
-        for tag_service in self.modules_services.GetServices( HC.REAL_TAG_SERVICES ):
-            
-            tsc = tag_search_context.Duplicate()
-            
-            tsc.service_key = tag_service.GetServiceKey()
-            
-            tag_search_contexts.append( tsc )
-            
-        
-    
-    return ( list( itertools.product( file_service_keys, tag_search_contexts ) ), file_location_is_cross_referenced )
-    
 
 def DeleteTags( self, file_service_id, tag_service_id, tag_ids ):
     
     if len( tag_ids ) == 0:

@@ -368,6 +327,20 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 return
 
+if not isinstance( tag_ids, set ):
+    
+    tag_ids = set( tag_ids )
+    
+
+#
+
+# we always include all chained guys regardless of count
+chained_tag_ids = self.modules_tag_display.GetChainsMembers( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, tag_ids )
+
+tag_ids = tag_ids.difference( chained_tag_ids )
+
+#
+
 tags_table_name = self.GetTagsTableName( file_service_id, tag_service_id )
 subtags_fts4_table_name = self.GetSubtagsFTS4TableName( file_service_id, tag_service_id )
 subtags_searchable_map_table_name = self.GetSubtagsSearchableMapTableName( file_service_id, tag_service_id )

@@ -788,7 +761,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
 table_dict = {}
 
-file_service_ids = list( self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES ) )
+file_service_ids = list( self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES ) )
 file_service_ids.append( self.modules_services.combined_file_service_id )
 
 for file_service_id in file_service_ids:

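The version bumps to 465 above force a regeneration of the tag-search caches, including the `fts4( subtag )` virtual tables that back subtag autocomplete. For readers unfamiliar with that piece, a minimal sketch of an fts4 subtag lookup — the table and column names echo the diff's naming, the data is hypothetical, and this assumes your sqlite build ships the fts4 module (most CPython builds do):

```python
import sqlite3

db = sqlite3.connect( ':memory:' )

# mirrors the 'CREATE VIRTUAL TABLE ... USING fts4( subtag )' entry in the table dict above
db.execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS subtags_fts4 USING fts4( subtag );' )

# docid stands in for the subtag_id that the real caches key on
db.executemany( 'INSERT INTO subtags_fts4 ( docid, subtag ) VALUES ( ?, ? );', [ ( 1, 'blue eyes' ), ( 2, 'blonde hair' ), ( 3, 'red scarf' ) ] )

# prefix search: MATCH with a trailing * finds every subtag with a token starting 'bl'
rows = db.execute( 'SELECT docid FROM subtags_fts4 WHERE subtag MATCH ?;', ( 'bl*', ) ).fetchall()

matching_subtag_ids = sorted( docid for ( docid, ) in rows )
```

The `MATCH` operator searches tokenised terms, which is why a prefix query like `bl*` hits both `blue eyes` and `blonde hair` without any LIKE-style full scans.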
@@ -90,9 +90,9 @@ class ClientDBTagSiblings( ClientDBModule.ClientDBModule ):
 def _GetInitialTableGenerationDict( self ) -> dict:
     
     return {
-        'main.tag_siblings' : ( 'CREATE TABLE {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
-        'main.tag_sibling_petitions' : ( 'CREATE TABLE {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
-        'main.tag_sibling_application' : ( 'CREATE TABLE {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
+        'main.tag_siblings' : ( 'CREATE TABLE IF NOT EXISTS {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
+        'main.tag_sibling_petitions' : ( 'CREATE TABLE IF NOT EXISTS {} ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );', 414 ),
+        'main.tag_sibling_application' : ( 'CREATE TABLE IF NOT EXISTS {} ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );', 414 )
     }

@@ -4633,11 +4633,11 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
 
 def _MigrateTags( self ):
     
-    default_tag_repository_key = HC.options[ 'default_tag_repository' ]
+    default_tag_service_key = self._controller.new_options.GetKey( 'default_tag_service_tab' )
     
     frame = ClientGUITopLevelWindowsPanels.FrameThatTakesScrollablePanel( self, 'migrate tags' )
     
-    panel = ClientGUIScrolledPanelsReview.MigrateTagsPanel( frame, default_tag_repository_key )
+    panel = ClientGUIScrolledPanelsReview.MigrateTagsPanel( frame, default_tag_service_key )
     
     frame.SetPanel( panel )

@@ -1430,7 +1430,7 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
 
 services = HG.client_controller.services_manager.GetServices( HC.REAL_TAG_SERVICES )
 
-default_tag_repository_key = HC.options[ 'default_tag_repository' ]
+default_tag_service_key = HG.client_controller.new_options.GetKey( 'default_tag_service_tab' )
 
 for service in services:

@@ -1439,7 +1439,7 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
 
 page = self._Panel( self._tag_repositories, service_key, paths )
 
-select = service_key == default_tag_repository_key
+select = service_key == default_tag_service_key
 
 tab_index = self._tag_repositories.addTab( page, name )
 if select: self._tag_repositories.setCurrentIndex( tab_index )

@@ -1453,6 +1453,23 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
 
 self.widget().setLayout( vbox )
 
+self._tag_repositories.currentChanged.connect( self._SaveDefaultTagServiceKey )
+
+
+def _SaveDefaultTagServiceKey( self ):
+    
+    if self.sender() != self._tag_repositories:
+        
+        return
+        
+    
+    if HG.client_controller.new_options.GetBoolean( 'save_default_tag_service_tab_on_change' ):
+        
+        current_page = self._tag_repositories.currentWidget()
+        
+        HG.client_controller.new_options.SetKey( 'default_tag_service_tab', current_page.GetServiceKey() )
+        
+    
 def GetValue( self ):

@@ -1547,7 +1564,7 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
 paths = [ path for ( index, path ) in self._paths_list.GetData( only_selected = True ) ]
 
 self._filename_tagging_panel.SetSelectedPaths( paths )
 
 def GetInfo( self ):

@@ -1556,6 +1573,11 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
 return ( self._service_key, paths_to_tags )
 
+def GetServiceKey( self ):
+    
+    return self._service_key
+    
+
 def RefreshFileList( self ):
     
     self._paths_list.UpdateDatas()

@@ -2126,6 +2126,8 @@ class EditPresentationImportOptions( ClientGUIScrolledPanels.EditPanel ):
 QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )
 QP.AddToLayout( vbox, hbox, CC.FLAGS_EXPAND_PERPENDICULAR )
 
+vbox.addStretch( 1 )
+
 self.widget().setLayout( vbox )
 
 #

@@ -69,7 +69,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 self._listbook.AddPage( 'colours', 'colours', self._ColoursPanel( self._listbook ) )
 self._listbook.AddPage( 'popups', 'popups', self._PopupPanel( self._listbook, self._new_options ) )
 self._listbook.AddPage( 'regex favourites', 'regex favourites', self._RegexPanel( self._listbook ) )
-self._listbook.AddPage( 'sort/collect', 'sort/collect', self._SortCollectPanel( self._listbook ) )
+self._listbook.AddPage( 'sort/collect', 'sort/collect', self._SortCollectPanel( self._listbook, self._new_options ) )
 self._listbook.AddPage( 'downloading', 'downloading', self._DownloadingPanel( self._listbook, self._new_options ) )
 self._listbook.AddPage( 'duplicates', 'duplicates', self._DuplicatesPanel( self._listbook, self._new_options ) )
 self._listbook.AddPage( 'importing', 'importing', self._ImportingPanel( self._listbook, self._new_options ) )

@@ -2399,6 +2399,10 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 
 self._autocomplete_panel = ClientGUICommon.StaticBox( self, 'autocomplete' )
 
+self._default_search_synchronised = QW.QCheckBox( self._autocomplete_panel )
+tt = 'This refers to the button on the autocomplete dropdown that enables new searches to start. If this is on, then new search pages will search as soon as you enter the first search predicate. If off, no search will happen until you switch it back on.'
+self._default_search_synchronised.setToolTip( tt )
+
 self._autocomplete_float_main_gui = QW.QCheckBox( self._autocomplete_panel )
 tt = 'The autocomplete dropdown can either \'float\' on top of the main window, or if that does not work well for you, it can embed into the parent panel.'
 self._autocomplete_float_main_gui.setToolTip( tt )

@@ -2425,6 +2429,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 
 #
 
+self._default_search_synchronised.setChecked( self._new_options.GetBoolean( 'default_search_synchronised' ) )
+
 self._autocomplete_float_main_gui.setChecked( self._new_options.GetBoolean( 'autocomplete_float_main_gui' ) )
 self._autocomplete_float_frames.setChecked( self._new_options.GetBoolean( 'autocomplete_float_frames' ) )

@@ -2449,6 +2455,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 
 #
 
+rows.append( ( 'Start new search pages in \'searching immediately\': ', self._default_search_synchronised ) )
 rows.append( ( 'Autocomplete results float in main gui: ', self._autocomplete_float_main_gui ) )
 rows.append( ( 'Autocomplete results float in other windows: ', self._autocomplete_float_frames ) )
 rows.append( ( '\'Read\' autocomplete list height: ', self._ac_read_list_height_num_chars ) )

@@ -2470,6 +2477,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 
 def UpdateOptions( self ):
     
+    self._new_options.SetBoolean( 'default_search_synchronised', self._default_search_synchronised.isChecked() )
+    
     self._new_options.SetBoolean( 'autocomplete_float_main_gui', self._autocomplete_float_main_gui.isChecked() )
     self._new_options.SetBoolean( 'autocomplete_float_frames', self._autocomplete_float_frames.isChecked() )

@@ -2483,26 +2492,32 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 
 class _SortCollectPanel( QW.QWidget ):
     
-    def __init__( self, parent ):
+    def __init__( self, parent, new_options ):
         
         QW.QWidget.__init__( self, parent )
         
-        self._default_media_sort = ClientGUIResultsSortCollect.MediaSortControl( self )
+        self._new_options = new_options
         
-        self._fallback_media_sort = ClientGUIResultsSortCollect.MediaSortControl( self )
+        self._tag_sort_panel = ClientGUICommon.StaticBox( self, 'tag sort' )
         
-        self._save_page_sort_on_change = QW.QCheckBox( self )
+        self._default_tag_sort = ClientGUITagSorting.TagSortControl( self._tag_sort_panel, self._new_options.GetDefaultTagSort(), show_siblings = True )
         
-        self._default_media_collect = ClientGUIResultsSortCollect.MediaCollectControl( self, silent = True )
+        self._file_sort_panel = ClientGUICommon.StaticBox( self, 'file sort' )
|
||||
|
||||
namespace_sorting_box = ClientGUICommon.StaticBox( self, 'namespace sorting' )
|
||||
self._default_media_sort = ClientGUIResultsSortCollect.MediaSortControl( self._file_sort_panel )
|
||||
|
||||
self._fallback_media_sort = ClientGUIResultsSortCollect.MediaSortControl( self._file_sort_panel )
|
||||
|
||||
self._save_page_sort_on_change = QW.QCheckBox( self._file_sort_panel )
|
||||
|
||||
self._default_media_collect = ClientGUIResultsSortCollect.MediaCollectControl( self._file_sort_panel, silent = True )
|
||||
|
||||
namespace_sorting_box = ClientGUICommon.StaticBox( self._file_sort_panel, 'namespace file sorting' )
|
||||
|
||||
self._namespace_sort_by = ClientGUIListBoxes.QueueListBox( namespace_sorting_box, 8, self._ConvertNamespaceTupleToSortString, self._AddNamespaceSort, self._EditNamespaceSort )
|
||||
|
||||
#
|
||||
|
||||
self._new_options = HG.client_controller.new_options
|
||||
|
||||
try:
|
||||
|
||||
self._default_media_sort.SetSort( self._new_options.GetDefaultSort() )
|
||||
|
@ -2540,19 +2555,34 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
namespace_sorting_box.Add( ClientGUICommon.BetterStaticText( namespace_sorting_box, sort_by_text ), CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
namespace_sorting_box.Add( self._namespace_sort_by, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
|
||||
#
|
||||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Default sort: ', self._default_media_sort ) )
|
||||
rows.append( ( 'Secondary sort (when primary gives two equal values): ', self._fallback_media_sort ) )
|
||||
rows.append( ( 'Update default sort every time a new sort is manually chosen: ', self._save_page_sort_on_change ) )
|
||||
rows.append( ( 'Default tag sort: ', self._default_tag_sort ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self._tag_sort_panel, rows )
|
||||
|
||||
self._tag_sort_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
#
|
||||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Default file sort: ', self._default_media_sort ) )
|
||||
rows.append( ( 'Secondary file sort (when primary gives two equal values): ', self._fallback_media_sort ) )
|
||||
rows.append( ( 'Update default file sort every time a new sort is manually chosen: ', self._save_page_sort_on_change ) )
|
||||
rows.append( ( 'Default collect: ', self._default_media_collect ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self, rows )
|
||||
|
||||
self._file_sort_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
self._file_sort_panel.Add( namespace_sorting_box, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
|
||||
vbox = QP.VBoxLayout()
|
||||
|
||||
QP.AddToLayout( vbox, gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
QP.AddToLayout( vbox, namespace_sorting_box, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
QP.AddToLayout( vbox, self._tag_sort_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
QP.AddToLayout( vbox, self._file_sort_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
|
||||
self.setLayout( vbox )
|
||||
|
||||
|
@ -2578,6 +2608,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
def UpdateOptions( self ):
|
||||
|
||||
self._new_options.SetDefaultTagSort( self._default_tag_sort.GetValue() )
|
||||
|
||||
self._new_options.SetDefaultSort( self._default_media_sort.GetSort() )
|
||||
self._new_options.SetFallbackSort( self._fallback_media_sort.GetSort() )
|
||||
self._new_options.SetBoolean( 'save_page_sort_on_change', self._save_page_sort_on_change.isChecked() )
|
||||
|
@ -3208,9 +3240,9 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
general_panel = ClientGUICommon.StaticBox( self, 'general tag options' )
|
||||
|
||||
self._default_tag_sort = ClientGUITagSorting.TagSortControl( general_panel, self._new_options.GetDefaultTagSort(), show_siblings = True )
|
||||
self._default_tag_service_tab = ClientGUICommon.BetterChoice( general_panel )
|
||||
|
||||
self._default_tag_repository = ClientGUICommon.BetterChoice( general_panel )
|
||||
self._save_default_tag_service_tab_on_change = QW.QCheckBox( general_panel )
|
||||
|
||||
self._default_tag_service_search_page = ClientGUICommon.BetterChoice( general_panel )
|
||||
|
||||
|
@ -3238,16 +3270,16 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
for service in services:
|
||||
|
||||
self._default_tag_repository.addItem( service.GetName(), service.GetServiceKey() )
|
||||
self._default_tag_service_tab.addItem( service.GetName(), service.GetServiceKey() )
|
||||
|
||||
self._default_tag_service_search_page.addItem( service.GetName(), service.GetServiceKey() )
|
||||
|
||||
|
||||
default_tag_repository_key = HC.options[ 'default_tag_repository' ]
|
||||
self._default_tag_service_tab.SetValue( self._new_options.GetKey( 'default_tag_service_tab' ) )
|
||||
|
||||
self._default_tag_repository.SetValue( default_tag_repository_key )
|
||||
self._save_default_tag_service_tab_on_change.setChecked( self._new_options.GetBoolean( 'save_default_tag_service_tab_on_change' ) )
|
||||
|
||||
self._default_tag_service_search_page.SetValue( new_options.GetKey( 'default_tag_service_search_page' ) )
|
||||
self._default_tag_service_search_page.SetValue( self._new_options.GetKey( 'default_tag_service_search_page' ) )
|
||||
|
||||
self._expand_parents_on_storage_taglists.setChecked( self._new_options.GetBoolean( 'expand_parents_on_storage_taglists' ) )
|
||||
|
||||
|
@ -3261,7 +3293,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
#
|
||||
|
||||
self._favourites.SetTags( new_options.GetStringList( 'favourite_tags' ) )
|
||||
self._favourites.SetTags( self._new_options.GetStringList( 'favourite_tags' ) )
|
||||
|
||||
#
|
||||
|
||||
|
@ -3269,9 +3301,9 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Default tag service in manage tag dialogs: ', self._default_tag_repository ) )
|
||||
rows.append( ( 'Default tag service in manage tag dialogs: ', self._default_tag_service_tab ) )
|
||||
rows.append( ( 'Remember last used default tag service in manage tag dialogs: ', self._save_default_tag_service_tab_on_change ) )
|
||||
rows.append( ( 'Default tag service in search pages: ', self._default_tag_service_search_page ) )
|
||||
rows.append( ( 'Default tag sort: ', self._default_tag_sort ) )
|
||||
rows.append( ( 'Show parents expanded by default on edit/write taglists: ', self._expand_parents_on_storage_taglists ) )
|
||||
rows.append( ( 'Show parents expanded by default on edit/write autocomplete taglists: ', self._expand_parents_on_storage_autocomplete_taglists ) )
|
||||
rows.append( ( 'By default, select the first tag result with actual count in write-autocomplete: ', self._ac_select_first_with_count ) )
|
||||
|
@ -3294,18 +3326,31 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
self.setLayout( vbox )
|
||||
|
||||
#
|
||||
|
||||
self._save_default_tag_service_tab_on_change.clicked.connect( self._UpdateDefaultTagServiceControl )
|
||||
|
||||
self._UpdateDefaultTagServiceControl()
|
||||
|
||||
|
||||
def _UpdateDefaultTagServiceControl( self ):
|
||||
|
||||
enabled = not self._save_default_tag_service_tab_on_change.isChecked()
|
||||
|
||||
self._default_tag_service_tab.setEnabled( enabled )
|
||||
|
||||
|
||||
def UpdateOptions( self ):
|
||||
|
||||
HC.options[ 'default_tag_repository' ] = self._default_tag_repository.GetValue()
|
||||
self._new_options.SetDefaultTagSort( self._default_tag_sort.GetValue() )
|
||||
self._new_options.SetKey( 'default_tag_service_tab', self._default_tag_service_tab.GetValue() )
|
||||
self._new_options.SetBoolean( 'save_default_tag_service_tab_on_change', self._save_default_tag_service_tab_on_change.isChecked() )
|
||||
|
||||
self._new_options.SetKey( 'default_tag_service_search_page', self._default_tag_service_search_page.GetValue() )
|
||||
|
||||
self._new_options.SetBoolean( 'expand_parents_on_storage_taglists', self._expand_parents_on_storage_taglists.isChecked() )
|
||||
self._new_options.SetBoolean( 'expand_parents_on_storage_autocomplete_taglists', self._expand_parents_on_storage_autocomplete_taglists.isChecked() )
|
||||
self._new_options.SetBoolean( 'ac_select_first_with_count', self._ac_select_first_with_count.isChecked() )
|
||||
|
||||
self._new_options.SetKey( 'default_tag_service_search_page', self._default_tag_service_search_page.GetValue() )
|
||||
|
||||
#
|
||||
|
||||
self._new_options.SetStringList( 'favourite_tags', list( self._favourites.GetTags() ) )
|
||||
|
|
|
@@ -1824,37 +1824,37 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 self._hashes.update( m.GetHashes() )
-self._tag_repositories = ClientGUICommon.BetterNotebook( self )
+self._tag_services = ClientGUICommon.BetterNotebook( self )
 #
 services = HG.client_controller.services_manager.GetServices( HC.REAL_TAG_SERVICES )
-default_tag_repository_key = HC.options[ 'default_tag_repository' ]
+default_tag_service_key = HG.client_controller.new_options.GetKey( 'default_tag_service_tab' )
 for service in services:
 service_key = service.GetServiceKey()
 name = service.GetName()
-page = self._Panel( self._tag_repositories, self._file_service_key, service.GetServiceKey(), self._current_media, self._immediate_commit, canvas_key = self._canvas_key )
+page = self._Panel( self._tag_services, self._file_service_key, service.GetServiceKey(), self._current_media, self._immediate_commit, canvas_key = self._canvas_key )
 page._add_tag_box.selectUp.connect( self.EventSelectUp )
 page._add_tag_box.selectDown.connect( self.EventSelectDown )
 page._add_tag_box.showPrevious.connect( self.EventShowPrevious )
 page._add_tag_box.showNext.connect( self.EventShowNext )
 page.okSignal.connect( self.okSignal )
-select = service_key == default_tag_repository_key
+select = service_key == default_tag_service_key
-self._tag_repositories.addTab( page, name )
-if select: self._tag_repositories.setCurrentIndex( self._tag_repositories.count() - 1 )
+self._tag_services.addTab( page, name )
+if select: self._tag_services.setCurrentIndex( self._tag_services.count() - 1 )
 #
 vbox = QP.VBoxLayout()
-QP.AddToLayout( vbox, self._tag_repositories, CC.FLAGS_EXPAND_BOTH_WAYS )
+QP.AddToLayout( vbox, self._tag_services, CC.FLAGS_EXPAND_BOTH_WAYS )
 self.widget().setLayout( vbox )
@@ -1865,7 +1865,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 self._my_shortcut_handler = ClientGUIShortcuts.ShortcutsHandler( self, [ 'global', 'media', 'main_gui' ] )
-self._tag_repositories.currentChanged.connect( self.EventServiceChanged )
+self._tag_services.currentChanged.connect( self.EventServiceChanged )
 self._SetSearchFocus()
@@ -1874,7 +1874,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 groups_of_service_keys_to_content_updates = []
-for page in self._tag_repositories.GetPages():
+for page in self._tag_services.GetPages():
 ( service_key, groups_of_content_updates ) = page.GetGroupsOfContentUpdates()
@@ -1894,7 +1894,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 def _SetSearchFocus( self ):
-page = self._tag_repositories.currentWidget()
+page = self._tag_services.currentWidget()
 if page is not None:
@@ -1910,7 +1910,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 self._current_media = ( new_media_singleton.Duplicate(), )
-for page in self._tag_repositories.GetPages():
+for page in self._tag_services.GetPages():
 page.SetMedia( self._current_media )
@@ -1922,7 +1922,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 ClientGUIScrolledPanels.ManagePanel.CleanBeforeDestroy( self )
-for page in self._tag_repositories.GetPages():
+for page in self._tag_services.GetPages():
 page.CleanBeforeDestroy()
@@ -1940,14 +1940,14 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 def EventSelectDown( self ):
-self._tag_repositories.SelectRight()
+self._tag_services.SelectRight()
 self._SetSearchFocus()
 def EventSelectUp( self ):
-self._tag_repositories.SelectLeft()
+self._tag_services.SelectLeft()
 self._SetSearchFocus()
@@ -1975,18 +1975,25 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 return
-if self.sender() != self._tag_repositories:
+if self.sender() != self._tag_services:
 return
-page = self._tag_repositories.currentWidget()
+page = self._tag_services.currentWidget()
 if page is not None:
 HG.client_controller.CallAfterQtSafe( page, 'setting page focus', page.SetTagBoxFocus )
+if HG.client_controller.new_options.GetBoolean( 'save_default_tag_service_tab_on_change' ):
+current_page = self._tag_services.currentWidget()
+HG.client_controller.new_options.SetKey( 'default_tag_service_tab', current_page.GetServiceKey() )
 def ProcessApplicationCommand( self, command: CAC.ApplicationCommand ):
@@ -2676,6 +2683,11 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
 return ( self._tag_service_key, self._groups_of_content_updates )
+def GetServiceKey( self ):
+return self._tag_service_key
 def HasChanges( self ):
 return len( self._groups_of_content_updates ) > 0
@@ -2795,11 +2807,11 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
 ClientGUIScrolledPanels.ManagePanel.__init__( self, parent )
-self._tag_repositories = ClientGUICommon.BetterNotebook( self )
+self._tag_services = ClientGUICommon.BetterNotebook( self )
 #
-default_tag_repository_key = HC.options[ 'default_tag_repository' ]
+default_tag_service_key = HG.client_controller.new_options.GetKey( 'default_tag_service_tab' )
 services = list( HG.client_controller.services_manager.GetServices( ( HC.LOCAL_TAG, ) ) )
@@ -2810,26 +2822,38 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
 name = service.GetName()
 service_key = service.GetServiceKey()
-page = self._Panel( self._tag_repositories, service_key, tags )
+page = self._Panel( self._tag_services, service_key, tags )
-select = service_key == default_tag_repository_key
+select = service_key == default_tag_service_key
-self._tag_repositories.addTab( page, name )
-if select: self._tag_repositories.setCurrentWidget( page )
+self._tag_services.addTab( page, name )
+if select: self._tag_services.setCurrentWidget( page )
 #
 vbox = QP.VBoxLayout()
-QP.AddToLayout( vbox, self._tag_repositories, CC.FLAGS_EXPAND_BOTH_WAYS )
+QP.AddToLayout( vbox, self._tag_services, CC.FLAGS_EXPAND_BOTH_WAYS )
 self.widget().setLayout( vbox )
+self._tag_services.currentChanged.connect( self._SaveDefaultTagServiceKey )
+def _SaveDefaultTagServiceKey( self ):
+if HG.client_controller.new_options.GetBoolean( 'save_default_tag_service_tab_on_change' ):
+current_page = self._tag_services.currentWidget()
+HG.client_controller.new_options.SetKey( 'default_tag_service_tab', current_page.GetServiceKey() )
 def _SetSearchFocus( self ):
-page = self._tag_repositories.currentWidget()
+page = self._tag_services.currentWidget()
 if page is not None:
@@ -2841,7 +2865,7 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
 service_keys_to_content_updates = {}
-for page in self._tag_repositories.GetPages():
+for page in self._tag_services.GetPages():
 ( service_key, content_updates ) = page.GetContentUpdates()
@@ -2859,7 +2883,7 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
 def UserIsOKToOK( self ):
-if self._tag_repositories.currentWidget().HasUncommittedPair():
+if self._tag_services.currentWidget().HasUncommittedPair():
 message = 'Are you sure you want to OK? You have an uncommitted pair.'
@@ -3586,6 +3610,11 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
 return ( self._service_key, content_updates )
+def GetServiceKey( self ):
+return self._service_key
 def HasUncommittedPair( self ):
 return len( self._children.GetTags() ) > 0 and len( self._parents.GetTags() ) > 0
@@ -3764,11 +3793,11 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 ClientGUIScrolledPanels.ManagePanel.__init__( self, parent )
-self._tag_repositories = ClientGUICommon.BetterNotebook( self )
+self._tag_services = ClientGUICommon.BetterNotebook( self )
 #
-default_tag_repository_key = HC.options[ 'default_tag_repository' ]
+default_tag_service_key = HG.client_controller.new_options.GetKey( 'default_tag_service_tab' )
 services = list( HG.client_controller.services_manager.GetServices( ( HC.LOCAL_TAG, ) ) )
@@ -3779,26 +3808,38 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 name = service.GetName()
 service_key = service.GetServiceKey()
-page = self._Panel( self._tag_repositories, service_key, tags )
+page = self._Panel( self._tag_services, service_key, tags )
-select = service_key == default_tag_repository_key
+select = service_key == default_tag_service_key
-self._tag_repositories.addTab( page, name )
-if select: self._tag_repositories.setCurrentIndex( self._tag_repositories.indexOf( page ) )
+self._tag_services.addTab( page, name )
+if select: self._tag_services.setCurrentIndex( self._tag_services.indexOf( page ) )
 #
 vbox = QP.VBoxLayout()
-QP.AddToLayout( vbox, self._tag_repositories, CC.FLAGS_EXPAND_BOTH_WAYS )
+QP.AddToLayout( vbox, self._tag_services, CC.FLAGS_EXPAND_BOTH_WAYS )
 self.widget().setLayout( vbox )
+self._tag_services.currentChanged.connect( self._SaveDefaultTagServiceKey )
+def _SaveDefaultTagServiceKey( self ):
+if HG.client_controller.new_options.GetBoolean( 'save_default_tag_service_tab_on_change' ):
+current_page = self._tag_services.currentWidget()
+HG.client_controller.new_options.SetKey( 'default_tag_service_tab', current_page.GetServiceKey() )
 def _SetSearchFocus( self ):
-page = self._tag_repositories.currentWidget()
+page = self._tag_services.currentWidget()
 if page is not None:
@@ -3810,7 +3851,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 service_keys_to_content_updates = {}
-for page in self._tag_repositories.GetPages():
+for page in self._tag_services.GetPages():
 ( service_key, content_updates ) = page.GetContentUpdates()
@@ -3828,7 +3869,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 def UserIsOKToOK( self ):
-if self._tag_repositories.currentWidget().HasUncommittedPair():
+if self._tag_services.currentWidget().HasUncommittedPair():
 message = 'Are you sure you want to OK? You have an uncommitted pair.'
@@ -3845,7 +3886,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 def EventServiceChanged( self, event ):
-page = self._tag_repositories.currentWidget()
+page = self._tag_services.currentWidget()
 if page is not None:
@@ -4602,6 +4643,11 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
 return ( self._service_key, content_updates )
+def GetServiceKey( self ):
+return self._service_key
 def HasUncommittedPair( self ):
 return len( self._old_siblings.GetTags() ) > 0 and self._current_new is not None
@@ -262,9 +262,11 @@ def CreateManagementControllerQuery( page_name, file_search_context: ClientSearc
 management_controller = CreateManagementController( page_name, MANAGEMENT_TYPE_QUERY, file_service_key = file_service_key )
+synchronised = HG.client_controller.new_options.GetBoolean( 'default_search_synchronised' )
 management_controller.SetVariable( 'file_search_context', file_search_context )
 management_controller.SetVariable( 'search_enabled', search_enabled )
-management_controller.SetVariable( 'synchronised', True )
+management_controller.SetVariable( 'synchronised', synchronised )
 return management_controller
@@ -526,6 +526,13 @@ class ListBoxTagsPredicatesAC( ClientGUIListBoxes.ListBoxTagsPredicates ):
 widget = self.window().parentWidget()
+if not QP.isValid( widget ):
+# seems to be a dialog posting late or similar
+return False
 else:
 widget = self
@@ -1515,6 +1522,7 @@ class AutoCompleteDropdownTags( AutoCompleteDropdown ):
 service_types_in_order.append( HC.FILE_REPOSITORY )
 service_types_in_order.append( HC.IPFS )
+self._AddAllKnownFilesServiceTypeIfAllowed( service_types_in_order )
@@ -60,6 +60,7 @@ FLESH_OUT_SYSTEM_PRED_TYPES = {
 ClientSearch.PREDICATE_TYPE_SYSTEM_HASH,
 ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION,
 ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO,
+ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE,
 ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS,
 ClientSearch.PREDICATE_TYPE_SYSTEM_MIME,
 ClientSearch.PREDICATE_TYPE_SYSTEM_RATING,
@@ -136,7 +137,7 @@ def FilterAndConvertLabelPredicates( predicates: typing.Collection[ ClientSearch
 def FleshOutPredicates( widget: QW.QWidget, predicates: typing.Collection[ ClientSearch.Predicate ] ) -> typing.List[ ClientSearch.Predicate ]:
-window = widget.window()
+window = None
 predicates = FilterAndConvertLabelPredicates( predicates )
@@ -149,6 +150,14 @@ def FilterAndConvertLabelPredicates( predicates: typing.Collection[ ClientSearch
 if value is None and predicate_type in FLESH_OUT_SYSTEM_PRED_TYPES:
+if window is None:
+if QP.isValid( widget ):
+window = widget.window()
 from hydrus.client.gui import ClientGUITopLevelWindowsPanels
 with ClientGUITopLevelWindowsPanels.DialogEdit( window, 'input predicate', hide_buttons = True ) as dlg:
@@ -492,6 +501,13 @@ class FleshOutPredicatePanel( ClientGUIScrolledPanels.EditPanel ):
 static_pred_buttons.append( ClientGUIPredicatesSingle.StaticSystemPredicateButton( self, self, ( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, True ), ) ) )
 static_pred_buttons.append( ClientGUIPredicatesSingle.StaticSystemPredicateButton( self, self, ( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, False ), ) ) )
+elif predicate_type == ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE:
+recent_predicate_types = []
+static_pred_buttons.append( ClientGUIPredicatesSingle.StaticSystemPredicateButton( self, self, ( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, True ), ) ) )
+static_pred_buttons.append( ClientGUIPredicatesSingle.StaticSystemPredicateButton( self, self, ( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, False ), ) ) )
 elif predicate_type == ClientSearch.PREDICATE_TYPE_SYSTEM_HASH:
 editable_pred_panels.append( self._PredOKPanel( self, ClientGUIPredicatesSingle.PanelPredicateSystemHash, predicate ) )
@@ -2058,32 +2058,97 @@ class ReviewServiceCombinedLocalFilesSubPanel( ClientGUICommon.StaticBox ):
 self._service = service
+self._my_updater = ClientGUIAsync.FastThreadToGUIUpdater( self, self._Refresh )
+self._deferred_delete_status = ClientGUICommon.BetterStaticText( self, label = 'loading\u2026' )
 self._clear_deleted_files_record = ClientGUICommon.BetterButton( self, 'clear deleted files record', self._ClearDeletedFilesRecord )
 #
+self._Refresh()
+#
+self.Add( self._deferred_delete_status, CC.FLAGS_EXPAND_PERPENDICULAR )
 self.Add( self._clear_deleted_files_record, CC.FLAGS_ON_RIGHT )
+HG.client_controller.sub( self, 'ServiceUpdated', 'service_updated' )
+HG.client_controller.sub( self, '_Refresh', 'notify_new_physical_file_delete_numbers' )
 def _ClearDeletedFilesRecord( self ):
-message = 'This will instruct your database to forget its entire record of locally deleted files, meaning that if it ever encounters any of those files again, it will assume they are new and reimport them. This operation cannot be undone.'
+message = 'This will instruct your database to forget its _entire_ record of locally deleted files, meaning that if it ever encounters any of those files again, it will assume they are new and reimport them. This operation cannot be undone.'
 result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'do it', no_label = 'forget it' )
 if result == QW.QDialog.Accepted:
-hashes = None
-content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADVANCED, ( 'delete_deleted', hashes ) )
-service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ content_update ] }
-HG.client_controller.Write( 'content_updates', service_keys_to_content_updates )
-HG.client_controller.pub( 'service_updated', self._service )
+message = 'Hey, I am just going to ask again--are you _absolutely_ sure? This is an advanced action that may mess up your downloads/imports in future.'
+result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'yes, I am', no_label = 'no, I am not sure' )
+if result == QW.QDialog.Accepted:
+hashes = None
+content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADVANCED, ( 'delete_deleted', hashes ) )
+service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ content_update ] }
+HG.client_controller.Write( 'content_updates', service_keys_to_content_updates )
+HG.client_controller.pub( 'service_updated', self._service )
+def _Refresh( self ):
+if not self or not QP.isValid( self ):
+return
+HG.client_controller.CallToThread( self.THREADFetchInfo, self._service )
+def ServiceUpdated( self, service ):
+if service.GetServiceKey() == self._service.GetServiceKey():
+self._service = service
+self._my_updater.Update()
+def THREADFetchInfo( self, service ):
+def qt_code( text ):
+if not self or not QP.isValid( self ):
+return
+self._deferred_delete_status.setText( text )
+( num_files, num_thumbnails ) = HG.client_controller.Read( 'num_deferred_file_deletes' )
+if num_files == 0 and num_thumbnails == 0:
+text = 'No files are awaiting physical deletion from file storage.'
+else:
+text = '{} files and {} thumbnails are awaiting physical deletion from file storage.'.format( HydrusData.ToHumanInt( num_files ), HydrusData.ToHumanInt( num_thumbnails ) )
+QP.CallAfter( qt_code, text )
 class ReviewServiceFileSubPanel( ClientGUICommon.StaticBox ):
@@ -2157,7 +2222,6 @@ class ReviewServiceFileSubPanel( ClientGUICommon.StaticBox ):
 QP.CallAfter( qt_code, text )
 class ReviewServiceRemoteSubPanel( ClientGUICommon.StaticBox ):
 def __init__( self, parent, service ):
@ -117,6 +117,8 @@ class FileImportJob( object ):
|
|||
self._thumbnail_bytes = None
|
||||
self._phashes = None
|
||||
self._extra_hashes = None
|
||||
self._has_icc_profile = None
|
||||
self._pixel_hash = None
|
||||
self._file_modified_timestamp = None
|
||||
|
||||
|
||||
|
@ -351,6 +353,36 @@ class FileImportJob( object ):

        self._extra_hashes = HydrusFileHandling.GetExtraHashesFromPath( self._temp_path )

        has_icc_profile = False

        if mime in HC.FILES_THAT_CAN_HAVE_ICC_PROFILE:

            try:

                pil_image = HydrusImageHandling.RawOpenPILImage( self._temp_path )

                has_icc_profile = HydrusImageHandling.HasICCProfile( pil_image )

            except:

                pass

        self._has_icc_profile = has_icc_profile

        if mime in HC.FILES_THAT_CAN_HAVE_PIXEL_HASH and duration is None:

            try:

                self._pixel_hash = HydrusImageHandling.GetImagePixelHash( self._temp_path, mime )

            except:

                pass

        self._file_modified_timestamp = HydrusFileHandling.GetFileModifiedTimestamp( self._temp_path )
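HasICCProfile here is hydrus's own helper; Pillow surfaces an embedded profile as the `icc_profile` entry of an image's info dict. To illustrate what that flag corresponds to at the byte level, here is a hypothetical, stdlib-only check that scans a JPEG stream for the APP2 `ICC_PROFILE` segment where such a profile lives (function name and the fake stream are illustrative, not hydrus code):

```python
import struct

def jpeg_has_icc_profile( data: bytes ) -> bool:
    # walk the JPEG segment list looking for an APP2 'ICC_PROFILE' chunk
    if not data.startswith( b'\xff\xd8' ):  # SOI
        return False
    i = 2
    while i + 4 <= len( data ):
        if data[ i ] != 0xFF:
            break
        marker = data[ i + 1 ]
        if marker in ( 0xD9, 0xDA ):  # EOI, or SOS (entropy-coded data follows)
            break
        ( length, ) = struct.unpack( '>H', data[ i + 2 : i + 4 ] )  # includes the two length bytes
        payload = data[ i + 4 : i + 2 + length ]
        if marker == 0xE2 and payload.startswith( b'ICC_PROFILE\x00' ):
            return True
        i += 2 + length
    return False

# build a minimal fake stream: SOI + one APP2 ICC segment + EOI
icc_payload = b'ICC_PROFILE\x00' + b'\x01\x01' + b'\x00' * 16
segment = b'\xff\xe2' + struct.pack( '>H', len( icc_payload ) + 2 ) + icc_payload
fake_jpeg = b'\xff\xd8' + segment + b'\xff\xd9'
print( jpeg_has_icc_profile( fake_jpeg ) )  # True
```

The bare `except: pass` in the import job matches this spirit: a file with a mangled profile should still import, just with the flag unset.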
@ -389,6 +421,16 @@ class FileImportJob( object ):

        return self._phashes

    def GetPixelHash( self ):

        return self._pixel_hash

    def HasICCProfile( self ) -> bool:

        return self._has_icc_profile

    def PubsubContentUpdates( self ):

        if self._post_import_file_status.AlreadyInDB() and self._file_import_options.AutomaticallyArchives():
@ -81,7 +81,7 @@ options = {}

# Misc

NETWORK_VERSION = 20
-SOFTWARE_VERSION = 464
+SOFTWARE_VERSION = 465
CLIENT_API_VERSION = 22

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@ -388,6 +388,7 @@ COMBINED_LOCAL_FILE = 15

TEST_SERVICE = 16
LOCAL_NOTES = 17
CLIENT_API_SERVICE = 18
COMBINED_DELETED_FILE = 19
SERVER_ADMIN = 99
NULL_SERVICE = 100

@ -412,6 +413,7 @@ service_string_lookup[ IPFS ] = 'ipfs daemon'

service_string_lookup[ TEST_SERVICE ] = 'test service'
service_string_lookup[ LOCAL_NOTES ] = 'local file notes service'
service_string_lookup[ SERVER_ADMIN ] = 'hydrus server administration service'
service_string_lookup[ COMBINED_DELETED_FILE ] = 'virtual deleted file service'
service_string_lookup[ NULL_SERVICE ] = 'null service'

LOCAL_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE )

@ -429,11 +431,16 @@ REAL_TAG_SERVICES = ( LOCAL_TAG, TAG_REPOSITORY )

ADDREMOVABLE_SERVICES = ( LOCAL_TAG, LOCAL_RATING_LIKE, LOCAL_RATING_NUMERICAL, FILE_REPOSITORY, TAG_REPOSITORY, SERVER_ADMIN, IPFS )
MUST_HAVE_AT_LEAST_ONE_SERVICES = ( LOCAL_TAG, )
NONEDITABLE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_FILE, COMBINED_TAG, COMBINED_LOCAL_FILE )
SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY, IPFS )
AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY )
TAG_CACHE_SPECIFIC_FILE_SERVICES = ( COMBINED_LOCAL_FILE, FILE_REPOSITORY )

-ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG )
FILE_SERVICES_WITH_NO_DELETE_RECORD = ( LOCAL_FILE_TRASH_DOMAIN, COMBINED_DELETED_FILE )

FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, COMBINED_DELETED_FILE, FILE_REPOSITORY, IPFS )
FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES = ( COMBINED_LOCAL_FILE, COMBINED_DELETED_FILE, FILE_REPOSITORY, IPFS )

FILE_SERVICES_COVERED_BY_COMBINED_LOCAL_FILE = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN )
FILE_SERVICES_COVERED_BY_COMBINED_DELETED_FILE = ( LOCAL_FILE_DOMAIN, FILE_REPOSITORY, IPFS )

+ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG, COMBINED_DELETED_FILE )
ALL_TAG_SERVICES = REAL_TAG_SERVICES + ( COMBINED_TAG, )
ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, )

@ -602,6 +609,11 @@ ARCHIVES = { APPLICATION_ZIP, APPLICATION_HYDRUS_ENCRYPTED_ZIP, APPLICATION_RAR,

MIMES_WITH_THUMBNAILS = set( IMAGES ).union( ANIMATIONS ).union( VIDEO ).union( { APPLICATION_FLASH, APPLICATION_CLIP, APPLICATION_PSD } )

# I think this is correct, but not certain. we've seen something called icc_profile from PIL for all of these I think!
FILES_THAT_CAN_HAVE_ICC_PROFILE = { IMAGE_JPEG, IMAGE_PNG, IMAGE_GIF, IMAGE_TIFF }

FILES_THAT_CAN_HAVE_PIXEL_HASH = set( IMAGES ).union( { IMAGE_GIF } )

HYDRUS_UPDATE_FILES = ( APPLICATION_HYDRUS_UPDATE_DEFINITIONS, APPLICATION_HYDRUS_UPDATE_CONTENT )

MIMES_WE_CAN_PHASH = ( IMAGE_JPEG, IMAGE_PNG, IMAGE_WEBP, IMAGE_TIFF, IMAGE_ICON )
@ -115,9 +115,7 @@ def ReadLargeIdQueryInSeparateChunks( cursor, select_statement, chunk_size ):

        i += len( chunk )

-        num_done = i + 1
-
-        yield ( chunk, num_done, num_to_do )
+        yield ( chunk, i, num_to_do )

    cursor.execute( 'DROP TABLE ' + table_name + ';' )
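The 22/21 labels came from reporting `i + 1` after `i` had already been advanced by the chunk length. A minimal sketch of the corrected progress arithmetic, list-based here rather than over SQLite cursors:

```python
def read_in_chunks( ids, chunk_size ):
    # mirrors the fix: yield the running count of items actually done,
    # not index + 1, which overshot by one on the final chunk
    num_to_do = len( ids )
    i = 0
    for start in range( 0, num_to_do, chunk_size ):
        chunk = ids[ start : start + chunk_size ]
        i += len( chunk )
        yield ( chunk, i, num_to_do )

progress = [ ( done, total ) for ( chunk, done, total ) in read_in_chunks( list( range( 21 ) ), 10 ) ]
print( progress )  # [(10, 21), (20, 21), (21, 21)] - ends at 21/21, not 22/21
```

With the old `num_done = i + 1` the last yield for 21 ids in chunks of 10 would have reported 22/21.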
@ -16,7 +16,7 @@ no_db_temp_files = False

boot_debug = False

-db_cache_size = 200
+db_cache_size = 256
db_transaction_commit_period = 30

# if this is set to 1, transactions are not immediately synced to the journal so multiple can be undone following a power-loss
@ -764,10 +764,15 @@ def NormaliseICCProfilePILImageToSRGB( pil_image: PILImage.Image ):

        pil_image = PILImageCms.profileToProfile( pil_image, src_profile, PIL_SRGB_PROFILE, outputMode = outputMode )

-    except PILImageCms.PyCMSError:
+    except ( PILImageCms.PyCMSError, OSError ):

        # 'cannot build transform' and presumably some other fun errors
-        # way more advanced that we can deal with, so we'll just no-op
+        # way more advanced than we can deal with, so we'll just no-op

        # OSError is due to a "OSError: cannot open profile from string" a user got
        # no idea, but that seems to be an ImageCms issue doing byte handling and ending up with an odd OSError?
        # or maybe somehow my PIL reader or bytesIO sending string for some reason?
        # in any case, nuke it for now

        pass
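The broadened `except` is pure defensive fallback: if the colour-managed conversion blows up on a broken embedded profile, keep the image unconverted rather than fail the load. Sketched here without Pillow, so `PyCMSError` below is a local stand-in for `PILImageCms.PyCMSError` and `convert` stands in for the profileToProfile call:

```python
class PyCMSError( Exception ):
    # stand-in for PILImageCms.PyCMSError
    pass

def normalise_to_srgb( image, convert ):
    # try the colour-managed conversion; a malformed embedded profile can raise
    # PyCMSError ('cannot build transform') or, as one user hit, a bare OSError
    # ('cannot open profile from string') - treat both as 'profile too broken
    # to use' and hand back the image unconverted
    try:
        return convert( image )
    except ( PyCMSError, OSError ):
        return image

def broken_profile_convert( image ):
    # simulates ImageCms choking on a borked profile
    raise OSError( 'cannot open profile from string' )

print( normalise_to_srgb( 'pixels', broken_profile_convert ) )  # 'pixels'
```

The trade-off is that a genuinely transient OSError is also swallowed, which is why the original comments hedge about the root cause.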
@ -63,6 +63,8 @@ class Predicate(Enum):

    NOT_BEST_QUALITY_OF_GROUP = auto()
    HAS_AUDIO = auto()
    NO_AUDIO = auto()
    HAS_ICC_PROFILE = auto()
    NO_ICC_PROFILE = auto()
    HAS_TAGS = auto()
    UNTAGGED = auto()
    NUM_OF_TAGS = auto()

@ -146,6 +148,8 @@ SYSTEM_PREDICATES = {

    '(((is )?not)|(isn\'t))( the)? best quality( file)? of( its)?( duplicate)? group': (Predicate.NOT_BEST_QUALITY_OF_GROUP, None, None, None),
    'has audio': (Predicate.HAS_AUDIO, None, None, None),
    'no audio': (Predicate.NO_AUDIO, None, None, None),
    'has icc profile': (Predicate.HAS_ICC_PROFILE, None, None, None),
    'no icc profile': (Predicate.NO_ICC_PROFILE, None, None, None),
    'has tags': (Predicate.HAS_TAGS, None, None, None),
    'untagged|no tags': (Predicate.UNTAGGED, None, None, None),
    'number of tags': (Predicate.NUM_OF_TAGS, Operators.RELATIONAL, Value.NATURAL, None),

@ -401,6 +405,8 @@ examples = [

    "system:isn't the best quality file of its duplicate group",
    "system:has_audio",
    "system:no audio",
    "system:has icc profile",
    "system:no icc profile",
    "system:has tags",
    "system:no tags",
    "system:untagged",
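Each key in SYSTEM_PREDICATES is a regex tried against the query text after its `system:` namespace is stripped, which is how `'untagged|no tags'` covers both spellings in the examples list. A trimmed, hypothetical sketch of that lookup (the dict slice and function are illustrative, not the real parser):

```python
import re

# a trimmed, hypothetical slice of the SYSTEM_PREDICATES table: each key is a
# regex pattern, each value the predicate it parses to
SYSTEM_PREDICATES = {
    'has audio': 'HAS_AUDIO',
    'no audio': 'NO_AUDIO',
    'has icc profile': 'HAS_ICC_PROFILE',
    'no icc profile': 'NO_ICC_PROFILE',
    'untagged|no tags': 'UNTAGGED',
}

def parse_system_predicate( text ):
    # strip the 'system:' namespace, then try each pattern as a full match
    text = text.lower()
    if text.startswith( 'system:' ):
        text = text[ len( 'system:' ) : ].strip()
    for ( pattern, predicate ) in SYSTEM_PREDICATES.items():
        if re.fullmatch( pattern, text ):
            return predicate
    return None

print( parse_system_predicate( 'system:has icc profile' ) )  # HAS_ICC_PROFILE
print( parse_system_predicate( 'system:no tags' ) )  # UNTAGGED
```

Full-match semantics matter here: a plain substring search would let `'no audio'` fire on unrelated longer strings.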
@ -34,7 +34,7 @@ try:

    argparser.add_argument( '-d', '--db_dir', help = 'set an external db location' )
    argparser.add_argument( '--temp_dir', help = 'override the program\'s temporary directory' )
    argparser.add_argument( '--db_journal_mode', default = 'WAL', choices = [ 'WAL', 'TRUNCATE', 'PERSIST', 'MEMORY' ], help = 'change db journal mode (default=WAL)' )
-    argparser.add_argument( '--db_cache_size', type = int, help = 'override SQLite cache_size per db file, in MB (default=200)' )
+    argparser.add_argument( '--db_cache_size', type = int, help = 'override SQLite cache_size per db file, in MB (default=256)' )
    argparser.add_argument( '--db_transaction_commit_period', type = int, help = 'override how often (in seconds) database changes are saved to disk (default=30,min=10)' )
    argparser.add_argument( '--db_synchronous_override', type = int, choices = range(4), help = 'override SQLite Synchronous PRAGMA (default=2)' )
    argparser.add_argument( '--no_db_temp_files', action='store_true', help = 'run db temp operations entirely in memory' )

@ -104,7 +104,7 @@ try:

    else:

-        HG.db_cache_size = 200
+        HG.db_cache_size = 256

    if result.db_transaction_commit_period is not None:
@ -43,7 +43,7 @@ try:

    argparser.add_argument( '-d', '--db_dir', help = 'set an external db location' )
    argparser.add_argument( '--temp_dir', help = 'override the program\'s temporary directory' )
    argparser.add_argument( '--db_journal_mode', default = 'WAL', choices = [ 'WAL', 'TRUNCATE', 'PERSIST', 'MEMORY' ], help = 'change db journal mode (default=WAL)' )
-    argparser.add_argument( '--db_cache_size', type = int, help = 'override SQLite cache_size per db file, in MB (default=200)' )
+    argparser.add_argument( '--db_cache_size', type = int, help = 'override SQLite cache_size per db file, in MB (default=256)' )
    argparser.add_argument( '--db_transaction_commit_period', type = int, help = 'override how often (in seconds) database changes are saved to disk (default=120,min=10)' )
    argparser.add_argument( '--db_synchronous_override', type = int, choices = range(4), help = 'override SQLite Synchronous PRAGMA (default=2)' )
    argparser.add_argument( '--no_db_temp_files', action='store_true', help = 'run db temp operations entirely in memory' )

@ -115,7 +115,7 @@ try:

    else:

-        HG.db_cache_size = 200
+        HG.db_cache_size = 256

    if result.db_transaction_commit_period is not None:
@ -412,6 +412,9 @@ class TestClientDB( unittest.TestCase ):

        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, True, 0 ) )
        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, False, 1 ) )

        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, True, 0 ) )
        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, False, 1 ) )

        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, ( ( hash, ), 'sha256' ), 1 ) )
        tests.append( ( ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, ( ( bytes.fromhex( '0123456789abcdef' * 4 ), ), 'sha256' ), 0 ) )

@ -762,7 +765,7 @@ class TestClientDB( unittest.TestCase ):

        predicates.append( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_EVERYTHING, min_current_count = 1 ) )
        predicates.append( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_INBOX, min_current_count = 1 ) )
        predicates.append( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_ARCHIVE, min_current_count = 0 ) )
-        predicates.extend( [ ClientSearch.Predicate( predicate_type ) for predicate_type in [ ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_TAGS, ClientSearch.PREDICATE_TYPE_SYSTEM_LIMIT, ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ClientSearch.PREDICATE_TYPE_SYSTEM_MODIFIED_TIME, ClientSearch.PREDICATE_TYPE_SYSTEM_KNOWN_URLS, ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, ClientSearch.PREDICATE_TYPE_SYSTEM_DIMENSIONS, ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ClientSearch.PREDICATE_TYPE_SYSTEM_NOTES, ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ClientSearch.PREDICATE_TYPE_SYSTEM_MIME, ClientSearch.PREDICATE_TYPE_SYSTEM_RATING, ClientSearch.PREDICATE_TYPE_SYSTEM_SIMILAR_TO, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_SERVICE, ClientSearch.PREDICATE_TYPE_SYSTEM_TAG_AS_NUMBER, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_RELATIONSHIPS, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_VIEWING_STATS ] ] )
+        predicates.extend( [ ClientSearch.Predicate( predicate_type ) for predicate_type in [ ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_TAGS, ClientSearch.PREDICATE_TYPE_SYSTEM_LIMIT, ClientSearch.PREDICATE_TYPE_SYSTEM_SIZE, ClientSearch.PREDICATE_TYPE_SYSTEM_AGE, ClientSearch.PREDICATE_TYPE_SYSTEM_MODIFIED_TIME, ClientSearch.PREDICATE_TYPE_SYSTEM_KNOWN_URLS, ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, ClientSearch.PREDICATE_TYPE_SYSTEM_DIMENSIONS, ClientSearch.PREDICATE_TYPE_SYSTEM_DURATION, ClientSearch.PREDICATE_TYPE_SYSTEM_NOTES, ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_WORDS, ClientSearch.PREDICATE_TYPE_SYSTEM_MIME, ClientSearch.PREDICATE_TYPE_SYSTEM_RATING, ClientSearch.PREDICATE_TYPE_SYSTEM_SIMILAR_TO, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_SERVICE, ClientSearch.PREDICATE_TYPE_SYSTEM_TAG_AS_NUMBER, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_RELATIONSHIPS, ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_VIEWING_STATS ] ] )

        self.assertEqual( set( result ), set( predicates ) )

@ -1862,7 +1865,7 @@ class TestClientDB( unittest.TestCase ):

        #

-        NUM_DEFAULT_SERVICES = 12
+        NUM_DEFAULT_SERVICES = 13

        services = self._read( 'services' )
@ -1656,6 +1656,24 @@ class TestTagObjects( unittest.TestCase ):

        self.assertEqual( p.GetNamespace(), 'system' )
        self.assertEqual( p.GetTextsAndNamespaces( render_for_user ), [ ( p.ToString(), p.GetNamespace() ) ] )

        p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_AUDIO, False )

        self.assertEqual( p.ToString(), 'system:no audio' )
        self.assertEqual( p.GetNamespace(), 'system' )
        self.assertEqual( p.GetTextsAndNamespaces( render_for_user ), [ ( p.ToString(), p.GetNamespace() ) ] )

        p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, True )

        self.assertEqual( p.ToString(), 'system:has icc profile' )
        self.assertEqual( p.GetNamespace(), 'system' )
        self.assertEqual( p.GetTextsAndNamespaces( render_for_user ), [ ( p.ToString(), p.GetNamespace() ) ] )

        p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HAS_ICC_PROFILE, False )

        self.assertEqual( p.ToString(), 'system:no icc profile' )
        self.assertEqual( p.GetNamespace(), 'system' )
        self.assertEqual( p.GetTextsAndNamespaces( render_for_user ), [ ( p.ToString(), p.GetNamespace() ) ] )

        p = ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, ( ( bytes.fromhex( 'abcd' ), ), 'sha256' ) )

        self.assertEqual( p.ToString(), 'system:sha256 hash is abcd' )
Binary file not shown. Before: 2.4 KiB. After: 2.4 KiB.