parent eb750512f3
commit 74e76bdc06
@@ -7,6 +7,43 @@ title: Changelog
!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 528](https://github.com/hydrusnetwork/hydrus/releases/tag/v528)

### faster file search cancelling

* if you start a large file search and then update or otherwise cancel it, the existing ongoing search should stop a little faster now
* all timestamp-based searches now cancel very quickly. if you do a bare 'all files imported in the last six months' search and then amend it with 'system:inbox', it should now update super fast
* all note-based searches now cancel quickly, either num_notes or note_names
* all rating-based searches now cancel quickly
* all OR searches cancel faster
* and, in all cases, the cancel tech works a little faster by skipping any remaining search more efficiently
* relatedly, I upgraded how I do the query cancel tech here to be a bit easier to integrate, and I switched the 20-odd existing cancels over to it. I'd like to add more in future, so let me know what cancels slowly!
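As a rough illustration of the cancel tech described above — not hydrus's actual code, but loosely modelled on the `ReadFromCancellableCursor` pattern visible in the source diffs later in this commit — a search reads its results in blocks and polls a cancelled hook between blocks, so a big query can be abandoned without fetching every row:

```python
import sqlite3

def read_from_cancellable_cursor(cursor, block_size, cancelled_hook=None):
    # Fetch rows in blocks, checking the hook between blocks so a large
    # query can be abandoned early instead of fetching every row.
    results = []
    while True:
        if cancelled_hook is not None and cancelled_hook():
            break
        block = cursor.fetchmany(block_size)
        if len(block) == 0:
            break
        results.extend(block)
    return results

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE files ( hash_id INTEGER );')
db.executemany('INSERT INTO files VALUES ( ? );', [(i,) for i in range(10)])

# an uncancelled search reads everything
rows = read_from_cancellable_cursor(db.execute('SELECT hash_id FROM files;'), 4)

# a search whose hook reports cancelled before the first block returns nothing
cancelled_rows = read_from_cancellable_cursor(db.execute('SELECT hash_id FROM files;'), 4, cancelled_hook=lambda: True)
```

The `_ExecuteCancellable` wrapper that appears in the source diffs further down bundles the execute and this blockwise read into a single call, which is the "easier to integrate" upgrade mentioned above.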

### system predicate parsing

* the parser is more forgiving of colons after the basename, e.g. 'system:import time: since 2023-01-01' now parses ok
* added 'since', 'before', 'around', and 'day'/'month' variants to system datetime predicate parsing as more human analogues of the '>' etc... operators
* you can now say 'imported', 'modified', 'last viewed', and 'archived' without the 'time' part ('system:modified before 2020-01-01')
* also, 'system:archived' with a 'd' will now parse as 'system:archive'
* you can now stick 'ago' ('system:imported 7 days ago') on the end of a timedelta time system pred and it should parse ok! this should fix the text that is copied to clipboard from timedelta system preds
* the system predicate parser now handles 'file service' system preds when your given name doesn't match due to upper/lowercase, and more broadly when the service has upper case characters. some stages of parsing convert everything to lowercase, making this tricky, but in general it now does a sweep of what you entered and then a sweep that ignores case entirely. related pro-tip: do not give two services the same name but with different case
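For a feel of what the new 'ago' form parses to, here is a toy sketch. This is my own simplification for illustration — hydrus's real grammar handles far more units and phrasings — but it shows the optional 'time' word and the trailing 'ago' both being accepted:

```python
import re
from datetime import timedelta

DAYS_PER_UNIT = {'day': 1, 'week': 7}

def parse_timedelta_pred(text):
    # accepts an optional 'time' word and a trailing 'ago',
    # e.g. 'system:imported 7 days ago' or 'system:modified time 2 weeks ago'
    m = re.match(r'system:(\w+)(?:\s+time)?\s+(\d+)\s+(day|week)s?\s+ago$', text.strip())
    if m is None:
        raise ValueError('could not parse: ' + text)
    pred_type, num, unit = m.groups()
    return pred_type, timedelta(days=int(num) * DAYS_PER_UNIT[unit])

pred = parse_timedelta_pred('system:imported 7 days ago')
```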

### misc

* you can now edit the default slideshow durations that show up in the media viewer right-click menu, under _options->media_. it is a bit hacky, but it works just like the custom zoom steps, with comma-separated floats
* fixed 'system:num notes < x', which was not including noteless files (i.e. num_notes = 0) in the result
* fixed a bug in _manage services_ when adding a local file service and then deleting it in the same dialog open. a test that checks if the thing is empty of files before the delete wasn't recognising it didn't exist yet
* improved type checking when pasting timestamps in the datetime widget, I think it was breaking some (older?) versions of python
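The comma-separated floats format mentioned above (shared with the custom zoom steps) is simple enough to sketch. This is an illustrative parser, not the actual options code:

```python
def parse_comma_separated_floats(text):
    # '1.0, 5.0, 10.0' -> [1.0, 5.0, 10.0]; blank entries are skipped
    values = []
    for part in text.split(','):
        part = part.strip()
        if part != '':
            values.append(float(part))
    return values

durations = parse_comma_separated_floats('1.0, 5.0, 10.0, 30.0, 60.0')
```

The defaults visible in the ClientOptions diff later in this commit are `[ 1.0, 5.0, 10.0, 30.0, 60.0 ]`.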

### some more build stuff

* fixed the macOS App, which was showing a 'no' symbol rather than launching due to one more thing that needed to be redirected from 'client' to 'hydrus_client' last week (issue #1367)
* fixed a second problem with the macOS app (unlike PyInstaller, PyOxidizer needed the 'hydrus' source directory, so that change is reverted)
* I believe I've also fixed the client launching for some versions of Python/PyQt6, which had trouble with the QMediaPlayer imports
* cleaned up the PyInstaller spec files a little more, removing some 'hidden-import' stuff that was no longer used and pushing the server executables to the binaries section
* added a short section to the Windows 'running from source' help regarding pinning a shortcut to a bat to Start--there's a neat way to do it, if Windows won't let you
* updated a couple more little areas in the help for client->hydrus_client

## [Version 527](https://github.com/hydrusnetwork/hydrus/releases/tag/v527)

### important updates
@@ -367,42 +404,3 @@ title: Changelog
* all the commands in the modify accounts dialog now have nicer yes/no dialogs that say the number of accounts being affected and talk more about what is happening
* fixed up some logical jank in the dialog. adding time to expires no longer tells you about 0 accounts having no expiry, and if circumstances mean 0 accounts are selected/valid for an operation, it no longer says 'want to set expiry for 0 accounts?' etc...
* when modifying multiple accounts, the current account focus/selection is now preserved through list refreshes after jobs go through

## [Version 518](https://github.com/hydrusnetwork/hydrus/releases/tag/v518)

### autocomplete improvements

* tl;dr: I went through the whole tag autocomplete search pipeline, cleaned out the cruft, and made the pre-fetch results more sensible. searching for tags on thumbnails isn't horrible any more!
* when you type a tag search, either in search or edit autocomplete contexts, and it needs to spend some time reading from the database, the search now always does the 'exact match' search first on what you typed. if you type in 'cat', it will show 'cat' and 'species:cat' and 'character:cat' and anything else that matches 'cat' exactly, with counts, and easy to select, while you are waiting for the full autocomplete results to come back
* in edit contexts, these exact-matching pre-fetch results now include sibling suggestions, even if the results have no count
* in edit contexts, the full results should more reliably include sibling suggestions, including those with no count. in some situations ('all known tags'), there may be too many siblings, so let me know!
* the main predicate sorting method now sorts by string secondarily, stabilising the sort between same-count preds
* when the results list transitions from pre-fetch results to full results, your current selection is now preserved!!! selecting and then hitting enter right when the full results come in should be safe now!
* when you type on a set of full results and it quickly filters down on the results cache to a smaller result, it now preserves selection. I'm not sure how totally useful this will be, but I did it anyway. hitting backspace and filtering 'up' will reset selection
* when you search for tags on a page of thumbnails, you should now get some early results super fast! these results are lacking sibling data and will be replaced with the better answer soon after, but if you want something simple, they'll work! no more waiting ages for anything on thumbnail tag searches!
* fixed an issue where the edit autocomplete was not caching results properly when you had the 'unnamespaced input gives (any namespace) wildcard results' option on
* the different loading states of autocomplete all now have clear 'loading...' labels, and each label is a little different based on what it is doing, like 'loading sibling data...'
* I generally cleared out jank. as the results move from one type to another, or as they filter down as you type, they _should_ flicker less
* added a new gui debug mode to force a three second delay on all autocomplete database jobs, to help simulate slow searches and play with the above
* NOTE: autocomplete has a heap of weird options under _tags->manage tag display and search_. I'm really happy with the above changes, but I messed around with the result injection rules, so I may have broken one of the combinations of wildcard rules here. let me know how you get on and I'll fix anything that I busted.
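A toy model of the exact-match pre-fetch described above (illustrative only — the real pipeline works against database counts and sibling data): while the full search runs, any tag whose subtag equals what you typed is shown immediately, sorted by count and then by text to keep the ordering stable:

```python
def exact_match_results(typed, tags_to_counts):
    # keep tags whose subtag (the part after any 'namespace:') exactly matches
    results = []
    for tag, count in tags_to_counts.items():
        subtag = tag.split(':', 1)[-1]
        if subtag == typed:
            results.append((tag, count))
    # count descending, then tag text, mirroring the stabilised predicate sort
    results.sort(key=lambda pair: (-pair[1], pair[0]))
    return results

tags = {'cat': 120, 'species:cat': 90, 'character:catwoman': 5, 'caterpillar': 3}
matches = exact_match_results('cat', tags)
```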

### pympler

* hydrus now optionally uses 'pympler', a python memory profiling library. for now, it replaces my old python gc (garbage collection) summarising commands under _help->debug->memory actions_, and gives much nicer formatting and now various estimates of actual memory use. this is a first version that mostly just replicates old behaviour, but I added a 'spam a more accurate total mem size of all the Qt widgets' in there too. I will keep developing this in future, and it should help us track down memory leaks better
* pympler is now in all the requirements.txts, so if you run from source and want to play with it, please reinstall your venv and you'll be sorted. _help->about_ says whether you have it or not

### misc

* the system:time predicates now allow you to specify the hh:mm time on the calendar control. if needed, you can now easily search for files viewed between 10pm-11:30pm yesterday. all existing 'date' system predicates will update to midnight. if you are a time-search nerd, note this changes the precision of existing time predicates--previously they searched _before/after_ the given date, but now they search including the given date, pivoting around the minute (default: 0:00am) rather than the integer calendar day! 'same day as' remains the same, though--midnight to midnight of the given calendar day
* if hydrus has previously initial-booted without mpv available and so set the media view options for video/animations/audio to 'show with native viewer', and you then boot with mpv available, hydrus now sets your view options to use mpv and gives a popup saying so. trying to get mpv to work should be a bit easier to test now, since it'll pop up and fix itself as soon as you get it working, and people who never realised it was missing and fix it accidentally will now get sorted without having to do anything extra
* made some small speed and memory optimisations to content processing for busy clients with large sessions, particularly those with large collect-by'd pages
* also boosted the speed of the content update pipeline as it consults which files are affected by which update object
* the migrate tags dialog now lets you filter the tag source by pending only on tag repositories
* cleaned up some calendar/time code
* updated the Client API help on how Hydrus-Client-API-Access-Key works in GET vs POST arguments
* patched the legacy use of 'service_names_to_tags' in `/add_urls/add_url` in the client api. this parameter is more obsolete than the other legacy names (it got renamed a while ago to 'service_names_to_additional_tags'), but I'm supporting it again, just for a bit, for Hydrus Companion users stuck on an older version. sorry for the trouble here, this missed my legacy checks!
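To make the precision change above concrete, here is a minimal sketch (dates and helper names invented for illustration): time predicates now pivot on a datetime at minute precision, with a bare date meaning midnight:

```python
from datetime import datetime

def pivot_from_pred(year, month, day, hour=0, minute=0):
    # a bare 'date' predicate updates to midnight (00:00) of that day
    return datetime(year, month, day, hour, minute)

# 'viewed between 10pm and 11:30pm yesterday', for a hypothetical yesterday
start = pivot_from_pred(2023, 2, 13, 22, 0)
end = pivot_from_pred(2023, 2, 13, 23, 30)

viewed_at = datetime(2023, 2, 13, 22, 45)
in_window = start <= viewed_at < end
```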

### windows mpv test

* hey, if you are an advanced windows user and want to run a test for me, please rename your mpv-2.dll to .old and then get this https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20230212-git-a40958c.7z/download . extract the libmpv-2.dll and rename it to mpv-2.dll. does it work for you, showing api v2.1 in _help->about_? are you running the built windows release, or from source? it runs great for me from source, but I'd like to get a wider canvass before I update it for everyone. if it doesn't work, then delete the new dll and rename the .old back, and then let me know your windows version etc., thank you!
@@ -34,6 +34,38 @@
<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_528"><a href="#version_528">version 528</a></h2>
<ul>
<li><h3>faster file search cancelling</h3></li>
<li>if you start a large file search and then update or otherwise cancel it, the existing ongoing search should stop a little faster now</li>
<li>all timestamp-based searches now cancel very quickly. if you do a bare 'all files imported in the last six months' search and then amend it with 'system:inbox', it should now update super fast</li>
<li>all note-based searches now cancel quickly, either num_notes or note_names</li>
<li>all rating-based searches now cancel quickly</li>
<li>all OR searches cancel faster</li>
<li>and, in all cases, the cancel tech works a little faster by skipping any remaining search more efficiently</li>
<li>relatedly, I upgraded how I do the query cancel tech here to be a bit easier to integrate, and I switched the 20-odd existing cancels over to it. I'd like to add more in future, so let me know what cancels slowly!</li>
<li><h3>system predicate parsing</h3></li>
<li>the parser is more forgiving of colons after the basename, e.g. 'system:import time: since 2023-01-01' now parses ok</li>
<li>added 'since', 'before', 'around', and 'day'/'month' variants to system datetime predicate parsing as more human analogues of the '>' etc... operators</li>
<li>you can now say 'imported', 'modified', 'last viewed', and 'archived' without the 'time' part ('system:modified before 2020-01-01')</li>
<li>also, 'system:archived' with a 'd' will now parse as 'system:archive'</li>
<li>you can now stick 'ago' ('system:imported 7 days ago') on the end of a timedelta time system pred and it should parse ok! this should fix the text that is copied to clipboard from timedelta system preds</li>
<li>the system predicate parser now handles 'file service' system preds when your given name doesn't match due to upper/lowercase, and more broadly when the service has upper case characters. some stages of parsing convert everything to lowercase, making this tricky, but in general it now does a sweep of what you entered and then a sweep that ignores case entirely. related pro-tip: do not give two services the same name but with different case</li>
<li><h3>misc</h3></li>
<li>you can now edit the default slideshow durations that show up in the media viewer right-click menu, under _options->media_. it is a bit hacky, but it works just like the custom zoom steps, with comma-separated floats</li>
<li>fixed 'system:num notes < x', which was not including noteless files (i.e. num_notes = 0) in the result</li>
<li>fixed a bug in _manage services_ when adding a local file service and then deleting it in the same dialog open. a test that checks if the thing is empty of files before the delete wasn't recognising it didn't exist yet</li>
<li>improved type checking when pasting timestamps in the datetime widget, I think it was breaking some (older?) versions of python</li>
<li><h3>some more build stuff</h3></li>
<li>fixed the macOS App, which was showing a 'no' symbol rather than launching due to one more thing that needed to be redirected from 'client' to 'hydrus_client' last week (issue #1367)</li>
<li>fixed a second problem with the macOS app (unlike PyInstaller, PyOxidizer needed the 'hydrus' source directory, so that change is reverted)</li>
<li>I believe I've also fixed the client launching for some versions of Python/PyQt6, which had trouble with the QMediaPlayer imports</li>
<li>cleaned up the PyInstaller spec files a little more, removing some 'hidden-import' stuff that was no longer used and pushing the server executables to the binaries section</li>
<li>added a short section to the Windows 'running from source' help regarding pinning a shortcut to a bat to Start--there's a neat way to do it, if Windows won't let you</li>
<li>updated a couple more little areas in the help for client->hydrus_client</li>
</ul>
</li>
<li>
<h2 id="version_527"><a href="#version_527">version 527</a></h2>
<ul>
@@ -178,14 +178,20 @@ The first start will take a little longer. It will operate just like a normal bu
    If you want to redirect your database or use any other launch arguments, then copy 'client.bat' to 'client-user.bat' and edit it, inserting your desired db path. Run this instead of 'client.bat'. New `git pull` commands will not affect 'client-user.bat'.

    You probably can't pin your .bat file to your Taskbar or Start (and if you try and pin the running program to your taskbar, its icon may revert to Python), but you can make a shortcut to the .bat file, pin that to Start, and in its properties set a custom icon. There's a nice hydrus one in `install_dir/static`.

    However, some versions of Windows won't let you pin a shortcut to a bat to the start menu. In this case, make a shortcut like this:

    `C:\Windows\System32\cmd.exe /c "C:\hydrus\Hydrus Source\hydrus_client-user.bat"`

    This is a shortcut to tell the terminal to run the bat; it should be pinnable to start. You can give it a nice name and the hydrus icon and you should be good!

=== "Linux"

    If you want to redirect your database or use any other launch arguments, then copy 'client.sh' to 'client-user.sh' and edit it, inserting your desired db path. Run this instead of 'client.sh'. New `git pull` commands will not affect 'client-user.sh'.
    If you want to redirect your database or use any other launch arguments, then copy 'hydrus_client.sh' to 'hydrus_client-user.sh' and edit it, inserting your desired db path. Run this instead of 'hydrus_client.sh'. New `git pull` commands will not affect 'hydrus_client-user.sh'.

=== "macOS"

    If you want to redirect your database or use any other launch arguments, then copy 'client.command' to 'client-user.command' and edit it, inserting your desired db path. Run this instead of 'client.command'. New `git pull` commands will not affect 'client-user.command'.
    If you want to redirect your database or use any other launch arguments, then copy 'hydrus_client.command' to 'hydrus_client-user.command' and edit it, inserting your desired db path. Run this instead of 'hydrus_client.command'. New `git pull` commands will not affect 'hydrus_client-user.command'.

### Simple Updating Guide
@@ -786,6 +786,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

        self._dictionary[ 'media_zooms' ] = [ 0.01, 0.05, 0.1, 0.15, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 3.0, 5.0, 10.0, 20.0 ]

        self._dictionary[ 'slideshow_durations' ] = [ 1.0, 5.0, 10.0, 30.0, 60.0 ]

        #

        self._dictionary[ 'misc' ] = HydrusSerialisable.SerialisableDictionary()

@@ -1430,6 +1432,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):


    def GetSlideshowDurations( self ):

        with self._lock:

            return list( self._dictionary[ 'slideshow_durations' ] )


    def GetString( self, name ):

        with self._lock:

@@ -1829,6 +1839,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):


    def SetSlideshowDurations( self, slideshow_durations ):

        with self._lock:

            self._dictionary[ 'slideshow_durations' ] = slideshow_durations


    def SetString( self, name, value ):

        with self._lock:
@@ -3609,6 +3609,14 @@ class ServicesManager( object ):


        for service in self._services_sorted:

            if service.GetServiceType() in allowed_types and service.GetName().lower() == service_name.lower():

                return service.GetServiceKey()


        raise HydrusExceptions.DataMissing()
@@ -3718,9 +3718,7 @@ class DB( HydrusDB.HydrusDB ):

        query = 'SELECT tag_id FROM {} WHERE {};'.format( mappings_table_name, search_predicate )

        cursor = self._Execute( query )

        loop_of_results = self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook ) )
        loop_of_results = self._STI( self._ExecuteCancellable( query, (), cancelled_hook ) )

        # counter can just take a list of gubbins like this
        results_dict.update( loop_of_results )
@@ -353,6 +353,13 @@ class ClientDBFilesSearchTags( ClientDBModule.ClientDBModule ):

        BLOCK_SIZE = max( 64, int( len( hash_ids ) ** 0.5 ) ) # go for square root for now

        cancelled_hook = None

        if job_key is not None:

            cancelled_hook = job_key.IsCancelled

        for group_of_hash_ids in HydrusData.SplitIteratorIntoChunks( hash_ids, BLOCK_SIZE ):

            with self._MakeTemporaryIntegerTable( group_of_hash_ids, 'hash_id' ) as hash_ids_table_name:

@@ -372,16 +379,7 @@ class ClientDBFilesSearchTags( ClientDBModule.ClientDBModule ):

                query = 'SELECT hash_id, COUNT( tag_id ) FROM {} GROUP BY hash_id;'.format( unions )

                cursor = self._Execute( query )

                cancelled_hook = None

                if job_key is not None:

                    cancelled_hook = job_key.IsCancelled


                loop_of_results = HydrusDB.ReadFromCancellableCursor( cursor, 64, cancelled_hook = cancelled_hook )
                loop_of_results = self._ExecuteCancellable( query, (), cancelled_hook )

                if job_key is not None and job_key.IsCancelled():
@@ -546,9 +544,7 @@ class ClientDBFilesSearchTags( ClientDBModule.ClientDBModule ):

            for query in queries:

                cursor = self._Execute( query, ( tag_id, ) )

                result_hash_ids.update( self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook ) ) )
                result_hash_ids.update( self._STI( self._ExecuteCancellable( query, ( tag_id, ), cancelled_hook ) ) )


        else:

@@ -571,9 +567,7 @@ class ClientDBFilesSearchTags( ClientDBModule.ClientDBModule ):

            for query in queries:

                cursor = self._Execute( query )

                result_hash_ids.update( self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook ) ) )
                result_hash_ids.update( self._STI( self._ExecuteCancellable( query, (), cancelled_hook ) ) )

@@ -847,9 +841,7 @@ class ClientDBFilesSearchTags( ClientDBModule.ClientDBModule ):

        for query in queries:

            cursor = self._Execute( query )

            nonzero_tag_hash_ids.update( self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 10240, cancelled_hook ) ) )
            nonzero_tag_hash_ids.update( self._STI( self._ExecuteCancellable( query, (), cancelled_hook ) ) )

        if job_key is not None and job_key.IsCancelled():
@@ -914,7 +906,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        ClientDBModule.ClientDBModule.__init__( self, 'client file query', cursor )


    def _DoNotePreds( self, system_predicates: ClientSearch.FileSystemPredicates, query_hash_ids: typing.Optional[ typing.Set[ int ] ] ) -> typing.Optional[ typing.Set[ int ] ]:
    def _DoNotePreds( self, system_predicates: ClientSearch.FileSystemPredicates, query_hash_ids: typing.Optional[ typing.Set[ int ] ], job_key: typing.Optional[ ClientThreading.JobKey ] = None ) -> typing.Optional[ typing.Set[ int ] ]:

        simple_preds = system_predicates.GetSimpleInfo()

@@ -944,7 +936,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            self._AnalyzeTempTable( temp_table_name )

            num_notes_hash_ids = self.modules_notes_map.GetHashIdsFromNumNotes( min_num_notes, max_num_notes, temp_table_name )
            num_notes_hash_ids = self.modules_notes_map.GetHashIdsFromNumNotes( min_num_notes, max_num_notes, temp_table_name, job_key = job_key )

            query_hash_ids = intersection_update_qhi( query_hash_ids, num_notes_hash_ids )

@@ -960,7 +952,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            self._AnalyzeTempTable( temp_table_name )

            notes_hash_ids = self.modules_notes_map.GetHashIdsFromNoteName( note_name, temp_table_name )
            notes_hash_ids = self.modules_notes_map.GetHashIdsFromNoteName( note_name, temp_table_name, job_key = job_key )

            query_hash_ids = intersection_update_qhi( query_hash_ids, notes_hash_ids )

@@ -977,7 +969,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            self._AnalyzeTempTable( temp_table_name )

            notes_hash_ids = self.modules_notes_map.GetHashIdsFromNoteName( note_name, temp_table_name )
            notes_hash_ids = self.modules_notes_map.GetHashIdsFromNoteName( note_name, temp_table_name, job_key = job_key )

            query_hash_ids.difference_update( notes_hash_ids )
@@ -1037,7 +1029,14 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):


    def _DoSimpleRatingPreds( self, file_search_context: ClientSearch.FileSearchContext, query_hash_ids: typing.Optional[ typing.Set[ int ] ] ) -> typing.Optional[ typing.Set[ int ] ]:
    def _DoSimpleRatingPreds( self, file_search_context: ClientSearch.FileSearchContext, query_hash_ids: typing.Optional[ typing.Set[ int ] ], job_key: typing.Optional[ ClientThreading.JobKey ] = None ) -> typing.Optional[ typing.Set[ int ] ]:

        cancelled_hook = None

        if job_key is not None:

            cancelled_hook = job_key.IsCancelled


        system_predicates = file_search_context.GetSystemPredicates()

@@ -1052,7 +1051,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

                if value == 'rated':

                    rating_hash_ids = self._STI( self._Execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) )
                    rating_hash_ids = self._STI( self._ExecuteCancellable( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ), cancelled_hook ) )

                    query_hash_ids = intersection_update_qhi( query_hash_ids, rating_hash_ids )

@@ -1103,7 +1102,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

                        continue


                rating_hash_ids = self._STI( self._Execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ? AND ' + predicate + ';', ( service_id, ) ) )
                query = f'SELECT hash_id FROM local_ratings WHERE service_id = ? AND {predicate};'

                rating_hash_ids = self._STI( self._ExecuteCancellable( query, ( service_id, ), cancelled_hook ) )

                query_hash_ids = intersection_update_qhi( query_hash_ids, rating_hash_ids )

@@ -1127,7 +1128,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

                predicate = 'rating {} {}'.format( operator, value )


                rating_hash_ids = self._STI( self._Execute( 'SELECT hash_id FROM local_incdec_ratings WHERE service_id = ? AND ' + predicate + ';', ( service_id, ) ) )
                query = f'SELECT hash_id FROM local_incdec_ratings WHERE service_id = ? AND {predicate};'

                rating_hash_ids = self._STI( self._ExecuteCancellable( query, ( service_id, ), cancelled_hook ) )

                query_hash_ids = intersection_update_qhi( query_hash_ids, rating_hash_ids )

@@ -1138,7 +1141,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        return query_hash_ids


    def _DoTimestampPreds( self, file_search_context: ClientSearch.FileSearchContext, query_hash_ids: typing.Optional[ typing.Set[ int ] ], have_cross_referenced_file_locations: bool ) -> typing.Tuple[ typing.Optional[ typing.Set[ int ] ], bool ]:
    def _DoTimestampPreds( self, file_search_context: ClientSearch.FileSearchContext, query_hash_ids: typing.Optional[ typing.Set[ int ] ], have_cross_referenced_file_locations: bool, job_key: typing.Optional[ ClientThreading.JobKey ] = None ) -> typing.Tuple[ typing.Optional[ typing.Set[ int ] ], bool ]:

        system_predicates = file_search_context.GetSystemPredicates()
@@ -1147,6 +1150,13 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        timestamp_ranges = system_predicates.GetTimestampRanges()

        cancelled_hook = None

        if job_key is not None:

            cancelled_hook = job_key.IsCancelled


        if not_all_known_files:

            # in future we will hang an explicit locationcontext off this predicate

@@ -1174,6 +1184,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            pred_string = ' AND '.join( import_timestamp_predicates )

            table_names = []

            table_names.extend( ( ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.GetServiceId( service_key ), HC.CONTENT_STATUS_CURRENT ) for service_key in location_context.current_service_keys ) )
            table_names.extend( ( ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.GetServiceId( service_key ), HC.CONTENT_STATUS_DELETED ) for service_key in location_context.deleted_service_keys ) )

@@ -1181,7 +1192,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            for table_name in table_names:

                import_timestamp_hash_ids.update( self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE {};'.format( table_name, pred_string ) ) ) )
                import_timestamp_hash_ids.update( self._STS( self._ExecuteCancellable( 'SELECT hash_id FROM {} WHERE {};'.format( table_name, pred_string ), (), cancelled_hook ) ) )

            query_hash_ids = intersection_update_qhi( query_hash_ids, import_timestamp_hash_ids )

@@ -1197,7 +1208,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            if len( ranges ) > 0:

                modified_timestamp_hash_ids = self.modules_files_timestamps.GetHashIdsInRange( HC.TIMESTAMP_TYPE_MODIFIED_AGGREGATE, ranges )
                modified_timestamp_hash_ids = self.modules_files_timestamps.GetHashIdsInRange( HC.TIMESTAMP_TYPE_MODIFIED_AGGREGATE, ranges, job_key = job_key )

                query_hash_ids = intersection_update_qhi( query_hash_ids, modified_timestamp_hash_ids )

@@ -1209,7 +1220,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            if len( ranges ) > 0:

                archived_timestamp_hash_ids = self.modules_files_timestamps.GetHashIdsInRange( HC.TIMESTAMP_TYPE_ARCHIVED, ranges )
                archived_timestamp_hash_ids = self.modules_files_timestamps.GetHashIdsInRange( HC.TIMESTAMP_TYPE_ARCHIVED, ranges, job_key = job_key )

                query_hash_ids = intersection_update_qhi( query_hash_ids, archived_timestamp_hash_ids )

@@ -1222,7 +1233,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

            min_last_viewed_timestamp = ranges.get( '>', None )
            max_last_viewed_timestamp = ranges.get( '<', None )

            last_viewed_timestamp_hash_ids = self.modules_files_viewing_stats.GetHashIdsFromLastViewed( min_last_viewed_timestamp = min_last_viewed_timestamp, max_last_viewed_timestamp = max_last_viewed_timestamp )
            last_viewed_timestamp_hash_ids = self.modules_files_viewing_stats.GetHashIdsFromLastViewed( min_last_viewed_timestamp = min_last_viewed_timestamp, max_last_viewed_timestamp = max_last_viewed_timestamp, job_key = job_key )

            query_hash_ids = intersection_update_qhi( query_hash_ids, last_viewed_timestamp_hash_ids )
@@ -1310,7 +1321,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        except HydrusExceptions.DataMissing:

            HydrusData.ShowText( 'A file search query was run for a tag service that does not exist! If you just removed a service, you might want to try checking the search and/or restarting the client.' )
            HydrusData.ShowText( 'A file search query was run for a tag service that does not exist! If you just removed a service, you might want to check the search and/or restart the client.' )

            return []

@@ -1352,6 +1363,11 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        done_or_predicates = True

        if job_key.IsCancelled():

            return []


        #

@@ -1380,9 +1396,9 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        #

        ( query_hash_ids, have_cross_referenced_file_locations ) = self._DoTimestampPreds( file_search_context, query_hash_ids, have_cross_referenced_file_locations )
        ( query_hash_ids, have_cross_referenced_file_locations ) = self._DoTimestampPreds( file_search_context, query_hash_ids, have_cross_referenced_file_locations, job_key = job_key )

        query_hash_ids = self._DoSimpleRatingPreds( file_search_context, query_hash_ids )
        query_hash_ids = self._DoSimpleRatingPreds( file_search_context, query_hash_ids, job_key = job_key )

        #

@@ -1615,6 +1631,11 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        done_or_predicates = True

        if job_key.IsCancelled():

            return []


        # now the simple preds and desperate last shot to populate query_hash_ids

@@ -1856,6 +1877,11 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        done_or_predicates = True

        if job_key.IsCancelled():

            return []


        # hide update files

@@ -1996,7 +2022,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):

        query_hash_ids = self._DoNotePreds( system_predicates, query_hash_ids )
        query_hash_ids = self._DoNotePreds( system_predicates, query_hash_ids, job_key = job_key )

        for ( view_type, viewing_locations, operator, viewing_value ) in system_predicates.GetFileViewingStatsPredicates():
|
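The pattern these hunks repeat — an optional job key threaded down into each predicate helper so a long search can bail out between steps — can be sketched in isolation like this. `JobKey`, `do_timestamp_preds`, and `search` here are illustrative stand-ins, not hydrus's real classes:

```python
class JobKey:
    # Stand-in for hydrus's job key: a shared flag another thread can set.
    def __init__(self):
        self._cancelled = False
    def Cancel(self):
        self._cancelled = True
    def IsCancelled(self):
        return self._cancelled

def do_timestamp_preds(hash_ids, job_key=None):
    # Each helper checks the key before doing its (pretend) work.
    if job_key is not None and job_key.IsCancelled():
        return set()
    return {h for h in hash_ids if h % 2 == 0}

def search(hash_ids, job_key=None):
    hash_ids = do_timestamp_preds(hash_ids, job_key=job_key)
    if job_key is not None and job_key.IsCancelled():
        return set()  # skip any remaining search steps immediately
    return hash_ids

job_key = JobKey()
print(search(set(range(10)), job_key=job_key))  # {0, 2, 4, 6, 8}

job_key.Cancel()
print(search(set(range(10)), job_key=job_key))  # set()
```

The win in the real code is that the check sits between every expensive predicate, so an amended search stops paying for the old one almost at once.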
@@ -3,10 +3,10 @@ import typing

 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusTime
 from hydrus.core import HydrusDB

 from hydrus.client import ClientTime
 from hydrus.client import ClientThreading
 from hydrus.client.db import ClientDBModule
 from hydrus.client.db import ClientDBMaster
 from hydrus.client.db import ClientDBFilesStorage

@@ -105,7 +105,14 @@ class ClientDBFilesTimestamps( ClientDBModule.ClientDBModule ):
 # can't clear a file timestamp or file viewing timestamp from here, can't do it from UI either, so we good for now


-def GetHashIdsInRange( self, timestamp_type: int, ranges ):
+def GetHashIdsInRange( self, timestamp_type: int, ranges, job_key: typing.Optional[ ClientThreading.JobKey ] = None ):
+
+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 if timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_AGGREGATE:

@@ -130,7 +137,7 @@ class ClientDBFilesTimestamps( ClientDBModule.ClientDBModule ):

 query = 'SELECT hash_id FROM ( {} UNION {} ) GROUP BY hash_id HAVING {};'.format( q1, q2, pred_string )

-modified_timestamp_hash_ids = self._STS( self._Execute( query ) )
+modified_timestamp_hash_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )

 return modified_timestamp_hash_ids

@@ -161,12 +168,11 @@ class ClientDBFilesTimestamps( ClientDBModule.ClientDBModule ):

 query = f'SELECT hash_id FROM {table_name} WHERE {pred_string};'

-hash_ids = self._STS( self._Execute( query ) )
+hash_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )

 return hash_ids


 return set()

@@ -3,10 +3,11 @@ import typing

 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusDB
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusTime

 from hydrus.client import ClientConstants as CC
+from hydrus.client import ClientThreading
 from hydrus.client import ClientTime
 from hydrus.client.db import ClientDBModule

@@ -172,7 +173,14 @@ class ClientDBFilesViewingStats( ClientDBModule.ClientDBModule ):
 return hash_ids


-def GetHashIdsFromLastViewed( self, min_last_viewed_timestamp = None, max_last_viewed_timestamp = None ) -> typing.Set[ int ]:
+def GetHashIdsFromLastViewed( self, min_last_viewed_timestamp = None, max_last_viewed_timestamp = None, job_key: typing.Optional[ ClientThreading.JobKey ] = None ) -> typing.Set[ int ]:
+
+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 last_viewed_timestamp_predicates = []

@@ -186,9 +194,7 @@ class ClientDBFilesViewingStats( ClientDBModule.ClientDBModule ):

 pred_string = ' AND '.join( last_viewed_timestamp_predicates )

-last_viewed_timestamp_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM file_viewing_stats WHERE canvas_type = ? AND {};'.format( pred_string ), ( CC.CANVAS_MEDIA_VIEWER, ) ) )
-
-return last_viewed_timestamp_hash_ids
+return self._STS( self._ExecuteCancellable( 'SELECT hash_id FROM file_viewing_stats WHERE canvas_type = ? AND {};'.format( pred_string ), ( CC.CANVAS_MEDIA_VIEWER, ), cancelled_hook ) )


 def GetHashIdsToFileViewingStatsRows( self, hash_ids_table_name ):

@@ -1,14 +1,13 @@
 import re
 import sqlite3
 import typing

 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusTime
 from hydrus.core import HydrusDB

 from hydrus.client import ClientThreading
 from hydrus.client.db import ClientDBMaster
 from hydrus.client.db import ClientDBModule
 from hydrus.client.db import ClientDBDefinitionsCache

 class ClientDBNotesMap( ClientDBModule.ClientDBModule ):

@@ -45,40 +44,51 @@ class ClientDBNotesMap( ClientDBModule.ClientDBModule ):
 self._Execute( 'DELETE FROM file_notes WHERE hash_id = ? AND name_id = ?;', ( hash_id, name_id ) )


-def GetHashIdsFromNoteName( self, name: str, hash_ids_table_name: str ):
+def GetHashIdsFromNoteName( self, name: str, hash_ids_table_name: str, job_key: typing.Optional[ ClientThreading.JobKey ] = None ):

 label_id = self.modules_texts.GetLabelId( name )

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 # as note name is rare, we force this to run opposite to typical: notes to temp hashes
-return self._STS( self._Execute( 'SELECT hash_id FROM file_notes CROSS JOIN {} USING ( hash_id ) WHERE name_id = ?;'.format( hash_ids_table_name ), ( label_id, ) ) )
+return self._STS( self._ExecuteCancellable( 'SELECT hash_id FROM file_notes CROSS JOIN {} USING ( hash_id ) WHERE name_id = ?;'.format( hash_ids_table_name ), ( label_id, ), cancelled_hook ) )


-def GetHashIdsFromNumNotes( self, min_num_notes: typing.Optional[ int ], max_num_notes: typing.Optional[ int ], hash_ids_table_name: str ):
+def GetHashIdsFromNumNotes( self, min_num_notes: typing.Optional[ int ], max_num_notes: typing.Optional[ int ], hash_ids_table_name: str, job_key: typing.Optional[ ClientThreading.JobKey ] = None ):

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 has_notes = max_num_notes is None and min_num_notes == 1
 not_has_notes = ( min_num_notes is None or min_num_notes == 0 ) and max_num_notes is not None and max_num_notes == 0

-if has_notes or not_has_notes:
+if has_notes:

-has_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE EXISTS ( SELECT 1 FROM file_notes WHERE file_notes.hash_id = {}.hash_id );'.format( hash_ids_table_name, hash_ids_table_name ) ) )
+hash_ids = self.GetHashIdsThatHaveNotes( hash_ids_table_name, job_key = job_key )

-if has_notes:
-
-hash_ids = has_hash_ids
-
-else:
-
-all_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( hash_ids_table_name ) ) )
-
-hash_ids = all_hash_ids.difference( has_hash_ids )
+elif not_has_notes:
+
+hash_ids = self.GetHashIdsThatDoNotHaveNotes( hash_ids_table_name, job_key = job_key )

 else:

+include_zero_count_hash_ids = False
+
 if min_num_notes is None:

 filt = lambda c: c <= max_num_notes

+include_zero_count_hash_ids = True
+
 elif max_num_notes is None:

 filt = lambda c: min_num_notes <= c

@@ -91,8 +101,47 @@ class ClientDBNotesMap( ClientDBModule.ClientDBModule ):
 # temp hashes to notes
 query = 'SELECT hash_id, COUNT( * ) FROM {} CROSS JOIN file_notes USING ( hash_id ) GROUP BY hash_id;'.format( hash_ids_table_name )

-hash_ids = { hash_id for ( hash_id, count ) in self._Execute( query ) if filt( count ) }
+hash_ids = { hash_id for ( hash_id, count ) in self._ExecuteCancellable( query, (), cancelled_hook ) if filt( count ) }

+if include_zero_count_hash_ids:
+
+zero_hash_ids = self.GetHashIdsThatDoNotHaveNotes( hash_ids_table_name, job_key = job_key )
+
+hash_ids.update( zero_hash_ids )
+

 return hash_ids


+def GetHashIdsThatDoNotHaveNotes( self, hash_ids_table_name: str, job_key: typing.Optional[ ClientThreading.JobKey ] = None ):
+
+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+
+
+query = 'SELECT hash_id FROM {} WHERE NOT EXISTS ( SELECT 1 FROM file_notes WHERE file_notes.hash_id = {}.hash_id );'.format( hash_ids_table_name, hash_ids_table_name )
+
+hash_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )
+
+return hash_ids
+
+
+def GetHashIdsThatHaveNotes( self, hash_ids_table_name: str, job_key: typing.Optional[ ClientThreading.JobKey ] = None ):
+
+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+
+
+query = 'SELECT hash_id FROM {} WHERE EXISTS ( SELECT 1 FROM file_notes WHERE file_notes.hash_id = {}.hash_id );'.format( hash_ids_table_name, hash_ids_table_name )
+
+hash_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )
+
+return hash_ids

@@ -437,12 +437,6 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 def GetAllTagIds( self, leaf: ClientDBServices.FileSearchContextLeaf, job_key = None ):

-tag_ids = set()
-
-query = '{};'.format( self.GetQueryPhraseForTagIds( leaf.file_service_id, leaf.tag_service_id ) )
-
-cursor = self._Execute( query )
-
 cancelled_hook = None

 if job_key is not None:

@@ -450,14 +444,9 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 cancelled_hook = job_key.IsCancelled


-loop_of_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook ) )
+query = '{};'.format( self.GetQueryPhraseForTagIds( leaf.file_service_id, leaf.tag_service_id ) )

-if job_key is not None and job_key.IsCancelled():
-
-return set()
-
-tag_ids.update( loop_of_tag_ids )
+tag_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )

 return tag_ids

@@ -807,6 +796,13 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 def GetSubtagIdsFromWildcard( self, file_service_id: int, tag_service_id: int, subtag_wildcard, job_key = None ):

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 if tag_service_id == self.modules_services.combined_tag_service_id:

 search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )

@@ -829,7 +825,8 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 if subtag_wildcard == '*':

 # hellmode, but shouldn't be called normally
-cursor = self._Execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) )
+query = 'SELECT docid FROM {};'.format( subtags_fts4_table_name )
+query_args = ()

 elif ClientSearch.IsComplexWildcard( subtag_wildcard ) or not wildcard_has_fts4_searchable_characters:

@@ -846,8 +843,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 # it also would not fix '*amu*', but with some cleverness could speed up '*amus ar*'

 query = 'SELECT docid FROM {} WHERE subtag LIKE ?;'.format( subtags_fts4_table_name )

-cursor = self._Execute( query, ( like_param, ) )
+query_args = ( like_param, )

 else:

@@ -858,8 +854,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 prefix_fts4_wildcard_param = '"{}*"'.format( prefix_fts4_wildcard )

 query = 'SELECT docid FROM {} WHERE subtag MATCH ? AND subtag LIKE ?;'.format( subtags_fts4_table_name )

-cursor = self._Execute( query, ( prefix_fts4_wildcard_param, like_param ) )
+query_args = ( prefix_fts4_wildcard_param, like_param )


 else:

@@ -871,17 +866,11 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 subtags_fts4_param = '"{}"'.format( subtag_wildcard )

-cursor = self._Execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) )
+query = 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name )
+query_args = ( subtags_fts4_param, )

-cancelled_hook = None
-
-if job_key is not None:
-
-cancelled_hook = job_key.IsCancelled
-
-loop_of_subtag_ids = self._STL( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook ) )
+loop_of_subtag_ids = self._STL( self._ExecuteCancellable( query, query_args, cancelled_hook ) )

 else:

@@ -924,6 +913,13 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 def GetSubtagIdsFromWildcardIntoTable( self, file_service_id: int, tag_service_id: int, subtag_wildcard, subtag_id_table_name, job_key = None ):

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 if tag_service_id == self.modules_services.combined_tag_service_id:

 search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )

@@ -944,7 +940,8 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 if subtag_wildcard == '*':

 # hellmode, but shouldn't be called normally
-cursor = self._Execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) )
+query = self._Execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) )
+query_args = ()

 elif ClientSearch.IsComplexWildcard( subtag_wildcard ) or not wildcard_has_fts4_searchable_characters:

@@ -961,8 +958,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 # it also would not fix '*amu*', but with some cleverness could speed up '*amus ar*'

 query = 'SELECT docid FROM {} WHERE subtag LIKE ?;'.format( subtags_fts4_table_name )

-cursor = self._Execute( query, ( like_param, ) )
+query_args = ( like_param, )

 else:

@@ -974,7 +970,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 query = 'SELECT docid FROM {} WHERE subtag MATCH ? AND subtag LIKE ?;'.format( subtags_fts4_table_name )

-cursor = self._Execute( query, ( prefix_fts4_wildcard_param, like_param ) )
+query_args = ( prefix_fts4_wildcard_param, like_param )


 else:

@@ -986,17 +982,11 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 subtags_fts4_param = '"{}"'.format( subtag_wildcard )

-cursor = self._Execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) )
+query = 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name )
+query_args = ( subtags_fts4_param, )

-cancelled_hook = None
-
-if job_key is not None:
-
-cancelled_hook = job_key.IsCancelled
-
-loop_of_subtag_id_tuples = HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook )
+loop_of_subtag_id_tuples = self._ExecuteCancellable( query, query_args, cancelled_hook )

 self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), loop_of_subtag_id_tuples )

@@ -1133,12 +1123,14 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 ( namespace_id, ) = namespace_ids

-cursor = self._Execute( 'SELECT tag_id FROM {} WHERE namespace_id = ?;'.format( tags_table_name ), ( namespace_id, ) )
+query = 'SELECT tag_id FROM {} WHERE namespace_id = ?;'.format( tags_table_name )
+query_args = ( namespace_id, )

 else:

 # temp namespaces to tags
-cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( namespace_id );'.format( temp_namespace_ids_table_name, tags_table_name ) )
+query = 'SELECT tag_id FROM {} CROSS JOIN {} USING ( namespace_id );'.format( temp_namespace_ids_table_name, tags_table_name )
+query_args = ()


 cancelled_hook = None

@@ -1148,7 +1140,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 cancelled_hook = job_key.IsCancelled


-result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
+result_tag_ids = self._STS( self._ExecuteCancellable( query, query_args, cancelled_hook ) )

 if job_key is not None:

@@ -1182,6 +1174,13 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 def GetTagIdsFromNamespaceIdsSubtagIdsTables( self, file_service_id: int, tag_service_id: int, namespace_ids_table_name: str, subtag_ids_table_name: str, job_key = None ):

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 final_result_tag_ids = set()

 if tag_service_id == self.modules_services.combined_tag_service_id:

@@ -1198,16 +1197,9 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 tags_table_name = self.GetTagsTableName( file_service_id, search_tag_service_id )

 # temp subtags to tags to temp namespaces
-cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( subtag_ids_table_name, tags_table_name, namespace_ids_table_name ) )
+query = 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( subtag_ids_table_name, tags_table_name, namespace_ids_table_name )

-cancelled_hook = None
-
-if job_key is not None:
-
-cancelled_hook = job_key.IsCancelled
-
-result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
+result_tag_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )

 if job_key is not None:

@@ -1238,6 +1230,13 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):

 def GetTagIdsFromSubtagIdsTable( self, file_service_id: int, tag_service_id: int, subtag_ids_table_name: str, job_key = None ):

+cancelled_hook = None
+
+if job_key is not None:
+
+cancelled_hook = job_key.IsCancelled
+

 final_result_tag_ids = set()

 if tag_service_id == self.modules_services.combined_tag_service_id:

@@ -1254,16 +1253,9 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 tags_table_name = self.GetTagsTableName( file_service_id, search_tag_service_id )

 # temp subtags to tags
-cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( subtag_ids_table_name, tags_table_name ) )
+query = 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( subtag_ids_table_name, tags_table_name )

-cancelled_hook = None
-
-if job_key is not None:
-
-cancelled_hook = job_key.IsCancelled
-
-result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
+result_tag_ids = self._STS( self._ExecuteCancellable( query, (), cancelled_hook ) )

 if job_key is not None:

@@ -2088,7 +2088,12 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

 self._media_viewer_zoom_center.setToolTip( tt )

+self._slideshow_durations = QW.QLineEdit( self )
+self._slideshow_durations.setToolTip( 'This is a bit hacky, but whatever you have here, in comma-separated floats, will end up in the slideshow menu in the media viewer.' )
+self._slideshow_durations.textChanged.connect( self.EventSlideshowDurationsChanged )
+
 self._media_zooms = QW.QLineEdit( self )
 self._media_zooms.setToolTip( 'This is a bit hacky, but whatever you have here, in comma-separated floats, will be what the program steps through as you zoom a media up and down.' )
 self._media_zooms.textChanged.connect( self.EventZoomsChanged )

 self._mpv_conf_path = QP.FilePickerCtrl( self, starting_directory = os.path.join( HC.STATIC_DIR, 'mpv-conf' ) )

@@ -2128,6 +2133,10 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

 self._media_viewer_zoom_center.SetValue( self._new_options.GetInteger( 'media_viewer_zoom_center' ) )

+slideshow_durations = self._new_options.GetSlideshowDurations()
+
+self._slideshow_durations.setText( ','.join( ( str( slideshow_duration ) for slideshow_duration in slideshow_durations ) ) )
+
 media_zooms = self._new_options.GetMediaZooms()

 self._media_zooms.setText( ','.join( ( str( media_zoom ) for media_zoom in media_zooms ) ) )

@@ -2154,6 +2163,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
 rows.append( ( 'Always Loop GIFs/APNGs:', self._always_loop_animations ) )
 rows.append( ( 'Draw image transparency as checkerboard:', self._draw_transparency_checkerboard_media_canvas ) )
 rows.append( ( 'Centerpoint for media zooming:', self._media_viewer_zoom_center ) )
+rows.append( ( 'Slideshow durations:', self._slideshow_durations ) )
 rows.append( ( 'Media zooms:', self._media_zooms ) )
 rows.append( ( 'Set a new mpv.conf on dialog ok?:', self._mpv_conf_path ) )
 rows.append( ( 'Animation scanbar height:', self._animated_scanbar_height ) )

@@ -2355,6 +2365,24 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):


+def EventSlideshowDurationsChanged( self, text ):
+
+try:
+
+slideshow_durations = [ float( slideshow_duration ) for slideshow_duration in self._slideshow_durations.text().split( ',' ) ]
+
+self._slideshow_durations.setObjectName( '' )
+
+except ValueError:
+
+self._slideshow_durations.setObjectName( 'HydrusInvalid' )
+
+
+self._slideshow_durations.style().polish( self._slideshow_durations )
+
+self._slideshow_durations.update()
+
+
 def EventZoomsChanged( self, text ):

 try:

@@ -2411,6 +2439,22 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

 self._new_options.SetInteger( 'media_viewer_zoom_center', self._media_viewer_zoom_center.GetValue() )

+try:
+
+slideshow_durations = [ float( slideshow_duration ) for slideshow_duration in self._slideshow_durations.text().split( ',' ) ]
+
+slideshow_durations = [ slideshow_duration for slideshow_duration in slideshow_durations if slideshow_duration > 0.0 ]
+
+if len( slideshow_durations ) > 0:
+
+self._new_options.SetSlideshowDurations( slideshow_durations )
+
+except ValueError:
+
+HydrusData.ShowText( 'Could not parse those slideshow durations, so they were not saved!' )
+
 try:

 media_zooms = [ float( media_zoom ) for media_zoom in self._media_zooms.text().split( ',' ) ]

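The new slideshow-durations box in the options hunks above accepts comma-separated floats and drops non-positive values before saving. A minimal standalone sketch of that parse-and-filter step (the `parse_durations` name is illustrative, not hydrus's own):

```python
def parse_durations(text):
    # Mirrors the dialog's approach: float() every comma-separated piece,
    # then keep only positive values; a ValueError means the input is bad.
    durations = [float(piece) for piece in text.split(',')]
    return [d for d in durations if d > 0.0]

print(parse_durations('1,5,10,30,60'))  # [1.0, 5.0, 10.0, 30.0, 60.0]
print(parse_durations('0,2.5, -3, 7'))  # [2.5, 7.0]
```

In the dialog, a raised ValueError flips the widget into the 'HydrusInvalid' style rather than aborting, so the user sees bad input immediately.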
@@ -526,7 +526,21 @@ class DateTimeCtrl( QW.QWidget ):

 timestamp = json.loads( raw_text )

-if not isinstance( timestamp, typing.Optional[ int ] ):
+if isinstance( timestamp, str ):
+
+try:
+
+timestamp = int( timestamp )
+
+except ValueError:
+
+raise Exception( 'Does not look like a number!' )
+
+
+looks_good = timestamp is None or isinstance( timestamp, int )
+
+if not looks_good:

 raise Exception( 'Not a timestamp!' )

@@ -4395,11 +4395,15 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):

 slideshow = ClientGUIMenus.GenerateMenu( menu )

-ClientGUIMenus.AppendMenuItem( slideshow, '1 second', 'Start a slideshow with a one second interval.', self._StartSlideshow, 1.0 )
-ClientGUIMenus.AppendMenuItem( slideshow, '5 second', 'Start a slideshow with a five second interval.', self._StartSlideshow, 5.0 )
-ClientGUIMenus.AppendMenuItem( slideshow, '10 second', 'Start a slideshow with a ten second interval.', self._StartSlideshow, 10.0 )
-ClientGUIMenus.AppendMenuItem( slideshow, '30 second', 'Start a slideshow with a thirty second interval.', self._StartSlideshow, 30.0 )
-ClientGUIMenus.AppendMenuItem( slideshow, '60 second', 'Start a slideshow with a one minute interval.', self._StartSlideshow, 60.0 )
+slideshow_durations = HG.client_controller.new_options.GetSlideshowDurations()
+
+for slideshow_duration in slideshow_durations:
+
+pretty_duration = HydrusTime.TimeDeltaToPrettyTimeDelta( slideshow_duration )
+
+ClientGUIMenus.AppendMenuItem( slideshow, pretty_duration, f'Start a slideshow that changes media every {pretty_duration}.', self._StartSlideshow, slideshow_duration )
+

 ClientGUIMenus.AppendMenuItem( slideshow, 'very fast', 'Start a very fast slideshow.', self._StartSlideshow, 0.08 )
 ClientGUIMenus.AppendMenuItem( slideshow, 'custom interval', 'Start a slideshow with a custom interval.', self._StartSlideshowCustomInterval )

@@ -7,6 +7,8 @@ from qtpy import QtGui as QG

 try:

+# this appears to be Python 3.8+ and/or the equivalent Qt versions
+
 from qtpy import QtMultimediaWidgets as QMW
 from qtpy import QtMultimedia as QM

@@ -196,7 +196,7 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

 for service in deletable_services:

-if service.GetServiceType() == service_type:
+if service.GetServiceType() == service_type and service in self._original_services:

 service_info = HG.client_controller.Read( 'service_info', service.GetServiceKey() )

@@ -100,7 +100,7 @@ options = {}
 # Misc

 NETWORK_VERSION = 20
-SOFTWARE_VERSION = 527
+SOFTWARE_VERSION = 528
 CLIENT_API_VERSION = 44

 SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -52,6 +52,7 @@ def CheckCanVacuumData( db_path, page_size, page_count, freelist_count, stop_tim

 HydrusDBBase.CheckHasSpaceForDBTransaction( db_dir, db_size )

+
 def GetApproxVacuumDuration( db_size ):

 vacuum_estimate = int( db_size * 1.2 )

@@ -62,38 +63,7 @@ def GetApproxVacuumDuration( db_size ):

 return approx_vacuum_duration

-def ReadFromCancellableCursor( cursor, largest_group_size, cancelled_hook = None ):
-
-if cancelled_hook is None:
-
-return cursor.fetchall()
-
-
-NUM_TO_GET = 1
-
-results = []
-
-group_of_results = cursor.fetchmany( NUM_TO_GET )
-
-while len( group_of_results ) > 0:
-
-results.extend( group_of_results )
-
-if cancelled_hook():
-
-break
-
-
-if NUM_TO_GET < largest_group_size:
-
-NUM_TO_GET *= 2
-
-
-group_of_results = cursor.fetchmany( NUM_TO_GET )
-
-
-return results
-
-
 def ReadLargeIdQueryInSeparateChunks( cursor, select_statement, chunk_size ):

 table_name = 'tempbigread' + os.urandom( 32 ).hex()

@ -1,4 +1,6 @@
|
|||
import collections
|
||||
import typing
|
||||
|
||||
import psutil
|
||||
import sqlite3
|
||||
|
||||
|
@ -72,6 +74,45 @@ def CheckHasSpaceForDBTransaction( db_dir, num_bytes ):
|
|||
|
||||
|
||||
|
||||
|
||||
def ReadFromCancellableCursor( cursor, largest_group_size, cancelled_hook = None ):
|
||||
|
||||
if cancelled_hook is None:
|
||||
|
||||
return cursor.fetchall()
|
||||
|
||||
|
||||
results = []
|
||||
|
||||
if cancelled_hook():
|
||||
|
||||
return results
|
||||
|
||||
|
||||
NUM_TO_GET = 1
|
||||
|
||||
group_of_results = cursor.fetchmany( NUM_TO_GET )
|
||||
|
||||
while len( group_of_results ) > 0:
|
||||
|
||||
results.extend( group_of_results )
|
||||
|
||||
if cancelled_hook():
|
||||
|
||||
break
|
||||
|
||||
|
||||
if NUM_TO_GET < largest_group_size:
|
||||
|
||||
NUM_TO_GET *= 2
|
||||
|
||||
|
||||
group_of_results = cursor.fetchmany( NUM_TO_GET )
|
||||
|
||||
|
||||
return results
|
||||
|
||||
|
||||
class TemporaryIntegerTableNameCache( object ):
|
||||
|
||||
my_instance = None
|
||||
|
@ -223,18 +264,25 @@ class DBBase( object ):
         self._Execute( statement )
         
     
-    def _Execute( self, query, *args ) -> sqlite3.Cursor:
+    def _Execute( self, query, *query_args ) -> sqlite3.Cursor:
         
         if HG.query_planner_mode and query not in HG.queries_planned:
             
-            plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall()
+            plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *query_args ).fetchall()
             
             HG.query_planner_query_count += 1
             
             HG.controller.PrintQueryPlan( query, plan_lines )
             
         
-        return self._c.execute( query, *args )
+        return self._c.execute( query, *query_args )
         
     
+    def _ExecuteCancellable( self, query, query_args, cancelled_hook: typing.Callable[ [], bool ] ):
+        
+        cursor = self._Execute( query, query_args )
+        
+        return ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook )
+        
+    
     def _ExecuteMany( self, query, args_iterator ):
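The query planner mode here works by prefixing the live query with SQLite's `EXPLAIN QUERY PLAN` and running it with the same arguments, which returns plan rows instead of results. A minimal stand-alone demonstration of that wrapping trick (the table and index names are invented for the example):

```python
import sqlite3

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );' )
db.execute( 'CREATE INDEX files_timestamp_index ON files ( timestamp );' )

query = 'SELECT hash_id FROM files WHERE timestamp > ?;'

# prefix the real query and bind its real arguments to get the plan SQLite
# would actually use for this statement
plan_lines = db.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), ( 1234567890, ) ).fetchall()

# the last column of each plan row is a human-readable step; with the index
# above, SQLite should report a search on files_timestamp_index, not a scan
```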
@@ -136,6 +136,7 @@ class Value( Enum ):
 class Operators( Enum ):
     RELATIONAL = auto() # One of '=', '<', '>', '\u2248' ('≈') (takes '~=' too)
     RELATIONAL_EXACT = auto() # Like RELATIONAL but without the approximately equal operator
+    RELATIONAL_TIME = auto() # One of '=', '<', '>', '\u2248' ('≈') (takes '~=' too), and the various 'since', 'before', 'the day of', 'the month of' time-based analogues
     EQUAL = auto() # One of '=' or '!='
     FILESERVICE_STATUS = auto() # One of 'is not currently in', 'is currently in', 'is not pending to', 'is pending to'
     TAG_RELATIONAL = auto() # A tuple of a string (a potential tag name) and a relational operator (as a string)
@@ -161,7 +162,7 @@ class Units( Enum ):
 SYSTEM_PREDICATES = {
     'everything': (Predicate.EVERYTHING, None, None, None),
     'inbox': (Predicate.INBOX, None, None, None),
-    'archive$': (Predicate.ARCHIVE, None, None, None), # $ so as not to clash with system:archive(d) date
+    'archived?$': (Predicate.ARCHIVE, None, None, None), # $ so as not to clash with system:archive(d) date
     'has duration': (Predicate.HAS_DURATION, None, None, None),
     'no duration': (Predicate.NO_DURATION, None, None, None),
     '(is the )?best quality( file)? of( its)?( duplicate)? group': (Predicate.BEST_QUALITY_OF_GROUP, None, None, None),
@@ -185,10 +186,10 @@ SYSTEM_PREDICATES = {
     'limit': (Predicate.LIMIT, Operators.ONLY_EQUAL, Value.NATURAL, None),
     'file ?type': (Predicate.FILETYPE, Operators.ONLY_EQUAL, Value.FILETYPE_LIST, None),
     'hash': (Predicate.HASH, Operators.EQUAL, Value.HASHLIST_WITH_ALGORITHM, None),
-    'archived? (date|time)|(date|time) archived': (Predicate.ARCHIVED_DATE, Operators.RELATIONAL, Value.DATE_OR_TIME_INTERVAL, None),
-    'modified (date|time)|(date|time) modified': (Predicate.MOD_DATE, Operators.RELATIONAL, Value.DATE_OR_TIME_INTERVAL, None),
-    'last view(ed)? (date|time)|(date|time) last viewed': (Predicate.LAST_VIEWED_TIME, Operators.RELATIONAL, Value.DATE_OR_TIME_INTERVAL, None),
-    'import(ed)? (date|time)|(date|time) imported': (Predicate.TIME_IMPORTED, Operators.RELATIONAL, Value.DATE_OR_TIME_INTERVAL, None),
+    'archived? (date|time)|(date|time) archived|archived.': (Predicate.ARCHIVED_DATE, Operators.RELATIONAL_TIME, Value.DATE_OR_TIME_INTERVAL, None),
+    'modified (date|time)|(date|time) modified|modified': (Predicate.MOD_DATE, Operators.RELATIONAL_TIME, Value.DATE_OR_TIME_INTERVAL, None),
+    'last view(ed)? (date|time)|(date|time) last viewed|last viewed': (Predicate.LAST_VIEWED_TIME, Operators.RELATIONAL_TIME, Value.DATE_OR_TIME_INTERVAL, None),
+    'import(ed)? (date|time)|(date|time) imported|imported': (Predicate.TIME_IMPORTED, Operators.RELATIONAL_TIME, Value.DATE_OR_TIME_INTERVAL, None),
     'duration': (Predicate.DURATION, Operators.RELATIONAL, Value.TIME_SEC_MSEC, None),
     'framerate': (Predicate.FRAMERATE, Operators.RELATIONAL_EXACT, Value.NATURAL, Units.FPS_OR_NONE),
     'number of frames': (Predicate.NUM_OF_FRAMES, Operators.RELATIONAL, Value.NATURAL, None),
@@ -334,7 +335,15 @@ def parse_value( string: str, spec ):
             months = int( match.group( 'month' ) ) if match.group( 'month' ) else 0
             days = int( match.group( 'day' ) ) if match.group( 'day' ) else 0
             hours = int( match.group( 'hour' ) ) if match.group( 'hour' ) else 0
-            return string[ len( match[ 0 ] ): ], (years, months, days, hours)
+            
+            string_result = string[ len( match[ 0 ] ): ]
+            
+            if string_result == 'ago':
+                
+                string_result = ''
+                
+            
+            return string_result, (years, months, days, hours)
         match = re.match( '(?P<year>[0-9][0-9][0-9][0-9])-(?P<month>[0-9][0-9]?)-(?P<day>[0-9][0-9]?)', string )
         if match:
             # good expansion here would be to parse a full date with 08:20am kind of thing, but we'll wait for better datetime parsing library for that I think!
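The new 'ago' handling simply swallows a trailing 'ago' left over after the timedelta regex consumes its match, so text copied from an existing timedelta predicate round-trips through the parser. A toy mini-parser showing the idea (`pattern` here is a simplified stand-in for the real timedelta regex, which also handles years and months):

```python
import re

# simplified stand-in for the real timedelta regex: optional days and hours
pattern = r'(?:(?P<day>\d+) *days?)? *(?:(?P<hour>\d+) *hours?)?'

def parse_timedelta( s ):
    
    match = re.match( pattern, s )
    
    days = int( match.group( 'day' ) ) if match.group( 'day' ) else 0
    hours = int( match.group( 'hour' ) ) if match.group( 'hour' ) else 0
    
    # whatever the regex did not consume; a bare trailing 'ago' is swallowed
    rest = s[ len( match[ 0 ] ) : ].strip()
    
    if rest == 'ago':
        
        rest = ''
        
    
    return rest, ( days, hours )
```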
@@ -384,12 +393,58 @@ def parse_value( string: str, spec ):
 
 
 def parse_operator( string: str, spec ):
-    string = string.strip()
+    
+    while string.startswith( ':' ) or string.startswith( ' ' ):
+        
+        string = string.strip()
+        
+        if string.startswith( ':' ):
+            
+            string = string[ 1 : ]
+            
+        
+    
     if spec is None:
         return string, None
-    elif spec == Operators.RELATIONAL or spec == Operators.RELATIONAL_EXACT:
+    elif spec in ( Operators.RELATIONAL, Operators.RELATIONAL_EXACT, Operators.RELATIONAL_TIME ):
         exact = spec == Operators.RELATIONAL_EXACT
         ops = [ '=', '<', '>' ]
+        
+        if spec == Operators.RELATIONAL_TIME:
+            
+            re_result = re.search( r'\d.*', string )
+            
+            if re_result:
+                
+                op_string = string[ : re_result.start() ]
+                string_result = re_result.group()
+                
+                looks_like_date = '-' in string_result
+                invert_ops = not looks_like_date
+                
+                if 'month' in op_string and looks_like_date:
+                    
+                    return ( string_result, '\u2248' )
+                    
+                elif 'around' in op_string and not looks_like_date:
+                    
+                    return ( string_result, '\u2248' )
+                    
+                elif 'day' in op_string and looks_like_date:
+                    
+                    return ( string_result, '=' )
+                    
+                elif 'since' in op_string:
+                    
+                    return ( string_result, '<' if invert_ops else '>' )
+                    
+                elif 'before' in op_string:
+                    
+                    return ( string_result, '>' if invert_ops else '<' )
+                    
+                
+            
+        
         if not exact:
             ops = ops + [ '\u2260', '\u2248' ]
         if string.startswith( '==' ): return string[ 2: ], '='
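The since/before mapping is the subtle part of the branch above: for a calendar date, 'since' means the timestamp is greater, but for a timedelta like '1 day', a *smaller* age means a *more recent* time, so the operator flips. A hypothetical stand-alone version of just that part (the real code also handles 'around', 'the day of' and 'the month of'):

```python
import re

def parse_time_operator( string ):
    
    # the value starts at the first digit; everything before it is operator words
    re_result = re.search( r'\d.*', string )
    
    op_string = string[ : re_result.start() ]
    value_string = re_result.group()
    
    # a '-' means a calendar date ('2011-06-04'); otherwise it is a timedelta
    # ('1 day'), for which the comparison direction flips
    looks_like_date = '-' in value_string
    
    if 'since' in op_string:
        
        return ( value_string, '>' if looks_like_date else '<' )
        
    elif 'before' in op_string:
        
        return ( value_string, '<' if looks_like_date else '>' )
        
    
    return ( value_string, '=' )
```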
@@ -1993,6 +1993,8 @@ class TestTagObjects( unittest.TestCase ):
             ( 'system:everything', "system:everything" ),
             ( 'system:inbox', "system:inbox " ),
             ( 'system:archive', "system:archive " ),
+            ( 'system:archive', "system:archived " ),
+            ( 'system:archive', "system:archived" ),
             ( 'system:has duration', "system:has duration" ),
             ( 'system:has duration', "system:has_duration" ),
             ( 'system:no duration', " system:no_duration" ),
@@ -2043,6 +2045,7 @@ class TestTagObjects( unittest.TestCase ):
             ( 'system:archived time: since 7 years 45 days ago', "system:archive time < 7 years 45 days 700h" ),
             ( 'system:archived time: since 7 years 45 days ago', "system:date archived < 7 years 45 days 700h" ),
             ( 'system:archived time: since 7 years 45 days ago', "system:time archived < 7 years 45 days 700h" ),
+            ( 'system:archived time: since 7 years 45 days ago', "system:archived < 7 years 45 days 700h" ),
             ( 'system:modified time: since 7 years 45 days ago', "system:modified date < 7 years 45 days 700h" ),
             ( 'system:modified time: since 2011-06-04', "system:modified date > 2011-06-04" ),
             ( 'system:modified time: before 7 years 2 months ago', "system:date modified > 7 years 2 months" ),
@@ -2077,10 +2080,14 @@ class TestTagObjects( unittest.TestCase ):
             ( 'system:import time: since 1 day ago', "system:imported date < 1 day" ),
+            ( 'system:import time: since 1 day ago', "system:time imported < 1 day" ),
+            ( 'system:import time: since 1 day ago', "system:date imported < 1 day" ),
             ( 'system:import time: a month either side of 2020-01-03', "system:import time: the month of 2020-01-03" ),
             ( 'system:import time: on the day of 2020-01-03', "system:import time: the day of 2020-01-03" ),
+            ( 'system:import time: around 7 days ago', "system:date imported around 7 days ago" ),
             ( 'system:duration < 5.0 seconds', "system:duration < 5 seconds" ),
             ( 'system:duration \u2248 11.0 seconds', "system:duration ~= 5 sec 6000 msecs" ),
             ( 'system:duration > 3 milliseconds', "system:duration > 3 milliseconds" ),
             ( 'system:is pending to my files', "system:file service is pending to my files" ),
+            ( 'system:is pending to my files', "system:file service is pending to MY FILES" ),
             ( 'system:is currently in my files', " system:file service currently in my files" ),
             ( 'system:is not currently in my files', "system:file service isn't currently in my files" ),
             ( 'system:is not pending to my files', "system:file service is not pending to my files" ),