Version 450

This commit is contained in:
Hydrus Network Developer 2021-08-11 16:14:12 -05:00
parent eee739898a
commit 4937616bd4
37 changed files with 2583 additions and 1895 deletions

View File

@ -1,14 +1,14 @@
### Virtual Memory Under Linux
# Virtual Memory Under Linux
# Why does hydrus keep crashing under Linux when it has lots of virtual memory?
## Why does hydrus keep crashing under Linux when it has lots of virtual memory?
## Symptoms
### Symptoms
- Hydrus crashes without a crash log
- Standard error reads `Killed`
- System logs say OOMKiller
- Programs appear to have very high virtual memory utilization despite low real memory.
## tl;dr :: The fix
### tl;dr :: The fix
Add the following line to the end of `/etc/sysctl.conf`. You will need admin, so use
@ -40,7 +40,7 @@ You may add as many swapfiles as you like, and should add a new swapfile before
Reboot for all changes to take effect, or use `sysctl` to set `vm` variables.
## Details
### Details
Linux's memory allocator is lazy and does not perform opportunistic reclaim. This means that the system will continue to give your process memory from the real and virtual memory (swap) pools until there is none left.
Linux will only clean up if the available total real and virtual memory falls below the **watermark** as defined in the system control configuration file `/etc/sysctl.conf`.
@ -54,21 +54,17 @@ If `vm.min_free_kbytes` is less than the amount requested and there is no virtu
Increase the `vm.min_free_kbytes` value to prevent this scenario.
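For example, to keep roughly 1GiB free at all times (the number here is only an illustration; tune it to your RAM size):
```ini
# reserve ~1GiB so allocation bursts don't dip below the watermark
vm.min_free_kbytes=1048576
```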
### The OOM Killer
#### The OOM Killer
The OOM killer decides which program to kill to reclaim memory. Since hydrus loves memory, it is usually picked first, even if another program asking for memory caused the OOM condition. Setting the minimum free kilobytes higher will avoid triggering the OOM killer, which is always preferable, and almost always preventable.
### Memory Overcommit
#### Memory Overcommit
We mentioned that Linux will keep giving out memory, but actually it's possible for Linux to launch the OOM killer if it just feels like our program is asking for too much memory too quickly. Since hydrus is a heavyweight scientific processing package, we need to turn this feature off. To turn it off, change the value of `vm.overcommit_memory`, which defaults to `0` (heuristic overcommit).
Set `vm.overcommit_memory=1`. This prevents the OS from using a heuristic; it will just always give memory to anyone who asks for it.
### What about swappiness?
#### What about swappiness?
Swappiness is a setting you might have seen, but it only determines Linux's desire to spend a little bit of time moving memory you haven't touched in a while out of real memory and into virtual memory. It will not prevent the OOM condition; it just determines how much time to use for moving things into swap.
### Virtual Memory Under Linux 2: The rememoryning
# Why does my Linux system stutter or become unresponsive when hydrus has been running a while?
You are running out of pages because Linux releases I/O buffer pages only when a file is closed. Thus the OS is waiting for you to hit the watermark (as described in "why is hydrus crashing") to start freeing pages, which causes the chug. When content is written from memory to disk, the page is retained, so that if you reread that part of the disk the OS does not need to access the disk; it just pulls it from the much faster memory. This is usually a good thing, but hydrus does not close its database files, so it eats up pages over time. This is really good for hydrus but sucks for the responsiveness of other apps, and it will cause hydrus to hold pages after a lengthy operation in anticipation of needing them again, even when it is thereafter idle. You need to set `vm.dirtytime_expire_seconds` to a lower value.
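As a sketch, that might look like the following (the kernel default is 43200 seconds, i.e. 12 hours; 600 is an illustrative assumption, not a tuned recommendation):
```ini
# release dirty/cached I/O pages after 10 minutes instead of 12 hours
vm.dirtytime_expire_seconds=600
```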
@ -88,8 +84,6 @@ https://www.kernel.org/doc/Documentation/sysctl/vm.txt
### Virtual Memory Under Linux 3: The return of the memory
# Why does everything become clunky for a bit if I have tuned all of the above settings?
The kernel launches a process called `kswapd` to swap and reclaim memory pages; its behaviour is governed by the following two values
@ -109,6 +103,7 @@ i.e. If 32GiB (real and virt) of memory, it will try to keep at least 0.224 GiB
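(`vm.watermark_scale_factor` is in units of 0.0001, so a value of 70 means 0.7% of memory: 32GiB × 0.007 ≈ 0.224GiB.)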
An example /etc/sysctl.conf section for virtual memory settings.
```ini
########
# virtual memory
########
@ -132,4 +127,5 @@ vm.watermark_scale_factor=70
#Have the kernel prefer to reclaim I/O pages at 110% of the rate at which it frees other pages.
#Don't set this value much over 100 or the kernel will spend all its time reclaiming I/O pages
vm.vfs_cache_pressure=110
```

View File

@ -8,6 +8,44 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_450"><a href="#version_450">version 450</a></h3></li>
<ul>
<li>misc:</li>
<li>when exporting files from the file export window, a cancellable popup job with progress updates is also created. if you close the window, you can still cancel the job from the popup</li>
<li>fixed a crash bug in file export window</li>
<li>system:num file relationships (duplicates) now correctly only returns files in the current file search domain (previously, it returned all files, including those previously deleted etc...)</li>
<li>I rearranged some of the actions in the thumbnail menu's file relationships submenu. I'm not really happy with this, but a shuffle is easier than a full rework</li>
<li>fixed the '4k' resolution label replacer, which was looking at 2060 height not 2160 by mistake</li>
<li>the phash generation routine (part of the duplicates system, happens on image imports) now uses less memory and CPU for images with an alpha channel (pngs and still gifs), and if those images are taller or wider than 1:30 or 30:1, the phashes are also better quality</li>
<li>the 'fill in subscription gap' popup button now correctly boots its created downloader when the action also opens a new downloader page. previously, due to overactive safety code, it would hang on 'pending' until a client restart. related similar 'start downloader after creating page' actions off drag and drop or client api should also be more reliable</li>
<li>.</li>
<li>repositories (also the various improvements in 449-experimental are folded in):</li>
<li>fixed an issue with some 'force repository account refresh' code not kicking in immediately</li>
<li>when a client sees a repository's update period change, it now recalculates the metadata's next check time</li>
<li>fixed a bug with the new repo sync where updates just added from additive sync were not being processed until client restart. related long-term buggy 'do we have this hash in updates?' and 'how many updates are there?' tests for update metadata are also fixed</li>
<li>the experimental by-content-type repository reset from last week now leaves pending content in place</li>
<li>the reset also now clears cached service info counts for files, tags, and mappings</li>
<li>.</li>
<li>client api:</li>
<li>the /get_files/search_files command now takes six new parameters for file/tag domain selection and file sort type and order</li>
<li>I wrote out some simple help and added some hacky unit tests for these new parameters. it needs another pass for potential bug fixes and readability/specificity (e.g. what does 'asc' for 'sort by ratio' mean?), but let me know how you get on anyway</li>
<li>fixed the new system predicate parsing for system:hash with only one hash</li>
<li>improved the url system predicate examples in client api documentation</li>
<li>client api version is now 19</li>
<li>.</li>
<li>mr bones:</li>
<li>mr bones now reports the correct numbers for your 'my files' again (and will continue to do so as multiple local file services are added)</li>
<li>mr bones now reports total files deleted and their total size</li>
<li>mr bones now reports your earliest recorded file import time</li>
<li>mr bones now has separate tabs for different stats types. this neatly ditches the giant stack of numbers this was becoming, but I may revisit it. some people who take mr bones screens will prefer all the info in one easy shot, while others I know would rather the 'viewing habits' stuff were not immediately there. maybe expanding boxes?</li>
<li>fixed some mr bones layout</li>
<li>.</li>
<li>boring code cleanup:</li>
<li>made a new base class for the different database modules to hold cursor and collect common administrative functions</li>
<li>all database queries (about 1,200 of them) now go through a single location in the new class</li>
<li>a new profile mode, 'query planner' mode, now prints query text and EXPLAIN QUERY PLAN lines to a new profile log. this is a new experimental thing, extremely spammy, but will help with diagnosing very unusually slow queries on individual clients (it'll most likely show up odd sqlite versions, weird data distributions, or un-analysed tables)</li>
<li>updated a core function in 'all known files' mappings change autocomplete count adjustment. this seemed to have extremely bad worst case time, and I think it might have been giving some bad counts in unusual situations</li>
</ul>
<li><h3 id="version_449"><a href="#version_449">version 449</a></h3></li>
<ul>
<li>this is an experimental release! please do not use this unless you are an advanced user who has a good backup, syncs with a repository (e.g. the PTR), and would like to help me out. if this is you, I don't need you to do anything too special, just please use the client and repo as normal, downloading and uploading, and let me know if anything messes up during normal operation</li>
@ -18,7 +56,7 @@
<li>I have also split the 'network' and 'processing' sync progress gauges and their buttons into separate boxes for clarity</li>
<li>the 'fill in content gaps' maintenance job now lets you choose which content types to do it for</li>
<li>also, a new 'reset content' maintenance job lets you choose to delete and reprocess by content type. the nuclear 'complete' reset is now really just for critical situations where definitions or database tables are irrevocably messed up</li>
<li>all users have their siblings and parents processing reset this week. next time you have update processing, they should blitz back in your next processing time within seconds, and with luck we'll wipe out some years-old legacy bugs and hopefully discover some info about the remaining bugs. most importantly, we can re-trigger this reprocess in just a few seconds to quickly test future fixes</li>
<li>all users have their siblings and parents processing reset this week. next time you have update processing, they'll come back in over about fifteen minutes, and with luck we'll wipe out some years-old legacy bugs and hopefully discover some info about the remaining bugs. most importantly, we can re-trigger this reprocess in just a few seconds to quickly test future fixes</li>
<li>a variety of tests such as 'service is mostly caught up' are now careful just to test for the currently unpaused content types</li>
<li>if you try to commit some content that is currently processing-paused, the client now says 'hey, sorry, this is paused, I won't upload that stuff right now' but still uploads everything else that isn't paused. this is a 'service is caught up' issue</li>
<li>tag display sync, which does background work to make sure siblings and parents appear as they should, will now not run for a service if any of the services it relies on for siblings or parents is not process synced. when this happens, it is also shown on the tag display sync review panel. this stops big changes like the new sibling/parent reset from causing display sync to do a whole bunch of work before the service is ready and happy with what it has. with luck it will also smooth out new users' first PTR sync too</li>

View File

@ -1125,6 +1125,12 @@
<p>Arguments (in percent-encoded JSON):</p>
<ul>
<li>tags : (a list of tags you wish to search for)</li>
<li>file_service_name : (optional, selective, string, the file domain on which to search)</li>
<li>file_service_key : (optional, selective, hexadecimal, the file domain on which to search)</li>
<li>tag_service_name : (optional, selective, string, the tag domain on which to search)</li>
<li>tag_service_key : (optional, selective, hexadecimal, the tag domain on which to search)</li>
<li>file_sort_type : (optional, integer, the results sort method)</li>
<li>file_sort_asc : true or false (optional, the results sort order)</li>
<li><i>system_inbox : true or false (obsolete, use tags)</i></li>
<li><i>system_archive : true or false (obsolete, use tags)</i></li>
</ul>
@ -1135,7 +1141,7 @@
<li><p>/get_files/search_files?tags=%5B%22blue%20eyes%22%2C%20%22blonde%20hair%22%2C%20%22%5Cu043a%5Cu0438%5Cu043d%5Cu043e%22%2C%20%22system%3Ainbox%22%2C%20%22system%3Alimit%3D16%22%5D</p></li>
</ul>
</li>
<p>If the access key's permissions only permit search for certain tags, at least one whitelisted/non-blacklisted tag must be in the "tags" list or this will 403. Tags can be prepended with a hyphen to make a negated tag (e.g. "-green eyes"), but these will not be checked against the permissions whitelist.</p>
<p>Wildcards and namespace searches are supported, so if you search for 'character:sam*' or 'series:*', this will be handled correctly clientside.</p>
<p>Many system predicates are also supported using a text parser! The parser was designed by a clever user for human input and allows for a certain amount of error (e.g. ~= instead of ≈, or "isn't" instead of "is not") or requires more information (e.g. the specific hashes for a hash lookup). Here's a big list of current formats supported:</p>
<ul>
@ -1196,21 +1202,43 @@
<li>system:num pixels ~= 5 kilopixel</li>
<li>system:media views ~= 10</li>
<li>system:all views &gt; 0</li>
<li>system:preview views &lt; 10</li>
<li>system:media viewtime &lt; 1 days 1 hour 0 minutes</li>
<li>system:all viewtime &gt; 1 hours 100 seconds</li>
<li>system:preview viewtime ~= 1 day 30 hours 100 minutes 90s</li>
<li>system:has url matching regex reg.*ex </li>
<li>system:does not have a url matching regex test</li>
<li>system:has url https://test.test/</li>
<li>system:does not have url https://test.test/</li>
<li>system:has domain test.com</li>
<li>system:does not have domain test.com</li>
<li>system:has url matching regex index\.php</li>
<li>system:does not have a url matching regex index\.php</li>
<li>system:has url https://safebooru.donmai.us/posts/4695284</li>
<li>system:does not have url https://safebooru.donmai.us/posts/4695284</li>
<li>system:has domain safebooru.com</li>
<li>system:does not have domain safebooru.com</li>
<li>system:has a url with class safebooru file page</li>
<li>system:does not have a url with url class safebooru file page</li>
<li>system:tag as number page &lt; 5</li>
</ul>
<p>More system predicate types and input formats will be available in future. Please test out the system predicates you want to send. Reverse engineering system predicate data from text is obviously tricky. If a system predicate does not parse, you'll get 400.</p>
<p>The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'my files' and 'all known tags', and you can use either key or name as in <a href="#get_services">GET /get_services</a>, whichever is easiest for your situation.</p>
<p>file_sort_type is an integer according to this enum (default is import time):</p>
<ul>
<li>0 - file size</li>
<li>1 - duration</li>
<li>2 - import time</li>
<li>3 - filetype</li>
<li>4 - random</li>
<li>5 - width</li>
<li>6 - height</li>
<li>7 - ratio</li>
<li>8 - number of pixels</li>
<li>9 - number of tags (on the current tag domain)</li>
<li>10 - number of media viewers</li>
<li>11 - total media viewtime</li>
<li>12 - approximate bitrate</li>
<li>13 - has audio</li>
<li>14 - modified time</li>
<li>15 - framerate</li>
<li>16 - number of frames</li>
</ul>
<p>file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending. What ascending or descending means in a context can be complicated (e.g. for ratio), so you might want to play around with it or double-check the UI in the client to figure it out.</p>
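<p>As a hypothetical worked example, a request for 'blue eyes' files sorted by ratio, ascending, on the default domains might look like this:</p>
<p>/get_files/search_files?tags=%5B%22blue%20eyes%22%5D&amp;file_sort_type=7&amp;file_sort_asc=true</p>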
<p>Response description: The full list of numerical file ids that match the search.</p>
<li>
<p>Example response:</p>
@ -1223,8 +1251,7 @@
</ul>
</li>
<p>File ids are internal and specific to an individual client. For a client, a file with hash H always has the same file id N, but two clients will have different ideas about which N goes with which H. They are a bit faster than hashes to retrieve and search with <i>en masse</i>, which is why they are exposed here.</p>
<p>The search will be performed on the 'local files' file domain and 'all known tags' tag domain. At current, they will be sorted in import time order, newest to oldest (if you would like to paginate them before fetching metadata), but sort options will expand in future.</p>
<p>This search does <b>not</b> apply the implicit limit that most clients set to all searches (usually 10,000), so if you do system:everything on a client with millions of files, expect to get boshed. Even with a system:limit included, large queries may take several seconds to respond.</p>
<p>This search does <b>not</b> apply the implicit limit that most clients set to all searches (usually 10,000), so if you do system:everything on a client with millions of files, expect to get boshed. Even with a system:limit included, complicated queries with large result sets may take several seconds to respond. Just like the client itself.</p>
</ul>
</div>
<div class="apiborder">

View File

@ -49,9 +49,13 @@ def GenerateShapePerceptualHashes( path, mime ):
if depth == 4:
# doing this on 10000x10000 pngs eats ram like mad
target_resolution = HydrusImageHandling.GetThumbnailResolution( ( x, y ), ( 1024, 1024 ) )
# we don't want to do GetThumbnailResolution as for extremely wide or tall images, we'll then scale below 32 pixels for one dimension, losing information!
# however, it does not matter if we stretch the image a bit, since we'll be coercing 32x32 in a minute
numpy_image = HydrusImageHandling.ResizeNumPyImage( numpy_image, target_resolution )
new_x = min( 256, x )
new_y = min( 256, y )
numpy_image = cv2.resize( numpy_image, ( new_x, new_y ), interpolation = cv2.INTER_AREA )
( y, x, depth ) = numpy_image.shape
@ -59,22 +63,30 @@ def GenerateShapePerceptualHashes( path, mime ):
numpy_alpha = numpy_image[ :, :, 3 ]
numpy_alpha_float = numpy_alpha / 255.0
numpy_image_rgb = numpy_image[ :, :, :3 ]
numpy_image_bgr = numpy_image[ :, :, :3 ]
numpy_image_gray_bare = cv2.cvtColor( numpy_image_bgr, cv2.COLOR_RGB2GRAY )
numpy_image_gray_bare = cv2.cvtColor( numpy_image_rgb, cv2.COLOR_RGB2GRAY )
# create a white greyscale canvas
white = numpy.ones( ( y, x ) ) * 255.0
white = numpy.full( ( y, x ), 255.0 )
# paste the grayscale image onto the white canvas using: pixel * alpha + white * ( 1 - alpha )
# paste the grayscale image onto the white canvas using: pixel * alpha_float + white * ( 1 - alpha_float )
numpy_image_gray = numpy.uint8( ( numpy_image_gray_bare * numpy_alpha_float ) + ( white * ( numpy.ones( ( y, x ) ) - numpy_alpha_float ) ) )
# note alpha 255 = opaque, alpha 0 = transparent
# also, note:
# white * ( 1 - alpha_float )
# =
# 255 * ( 1 - ( alpha / 255 ) )
# =
# 255 - alpha
numpy_image_gray = numpy.uint8( ( numpy_image_gray_bare * ( numpy_alpha / 255.0 ) ) + ( white - numpy_alpha ) )
else:
# this single step is nice and fast, so we won't scale to 256x256 beforehand
numpy_image_gray = cv2.cvtColor( numpy_image, cv2.COLOR_RGB2GRAY )
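To make the compositing step concrete, here is a minimal, self-contained sketch of the same math on a toy RGBA image. The 2x2 pixel values are illustrative assumptions; the shapes and dtypes mirror what the routine above receives:

```python
import cv2
import numpy

# a toy 2x2 RGBA image, uint8 like the real input
numpy_image = numpy.zeros( ( 2, 2, 4 ), dtype = numpy.uint8 )
numpy_image[ :, :, :3 ] = 128 # mid-grey
numpy_image[ :, :, 3 ] = 64 # mostly transparent (255 = opaque, 0 = transparent)

numpy_alpha = numpy_image[ :, :, 3 ]
numpy_image_rgb = numpy.ascontiguousarray( numpy_image[ :, :, :3 ] ) # cv2 prefers contiguous input

numpy_image_gray_bare = cv2.cvtColor( numpy_image_rgb, cv2.COLOR_RGB2GRAY )

# pixel * ( alpha / 255 ) + 255 * ( 1 - ( alpha / 255 ) ) == pixel * ( alpha / 255 ) + ( 255 - alpha )
numpy_image_gray = numpy.uint8( ( numpy_image_gray_bare * ( numpy_alpha / 255.0 ) ) + ( 255 - numpy_alpha ) )

print( numpy_image_gray ) # [[223 223] [223 223]] -- a low-alpha grey composites to near-white
```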

View File

@ -995,6 +995,11 @@ class ServiceRestricted( ServiceRemote ):
self._service_options = dictionary[ 'service_options' ]
def _SetNewServiceOptions( self, service_options ):
self._service_options = service_options
def CanSyncAccount( self, including_external_communication = True ):
with self._lock:
@ -1269,7 +1274,7 @@ class ServiceRestricted( ServiceRemote ):
with self._lock:
self._next_account_sync = HydrusData.GetNow()
self._next_account_sync = HydrusData.GetNow() - 1
self._SetDirty()
@ -1350,7 +1355,9 @@ class ServiceRestricted( ServiceRemote ):
with self._lock:
self._service_options = options_response[ 'service_options' ]
service_options = options_response[ 'service_options' ]
self._SetNewServiceOptions( service_options )
except HydrusExceptions.SerialisationException:
@ -1543,6 +1550,18 @@ class ServiceRepository( ServiceRestricted ):
job_key.SetVariable( 'popup_text_2', popup_message )
def _SetNewServiceOptions( self, service_options ):
if 'update_period' in service_options and 'update_period' in self._service_options and service_options[ 'update_period' ] != self._service_options[ 'update_period' ]:
update_period = service_options[ 'update_period' ]
self._metadata.CalculateNewNextUpdateDue( update_period )
ServiceRestricted._SetNewServiceOptions( self, service_options )
def _SyncDownloadMetadata( self ):
with self._lock:

File diff suppressed because it is too large

View File

@ -47,14 +47,14 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
( uncached_hash_id, ) = uncached_hash_ids
# this makes 0 or 1 rows, so do fetchall rather than fetchone
local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._c.execute( 'SELECT hash_id, hash FROM local_hashes_cache WHERE hash_id = ?;', ( uncached_hash_id, ) ) }
local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._Execute( 'SELECT hash_id, hash FROM local_hashes_cache WHERE hash_id = ?;', ( uncached_hash_id, ) ) }
else:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_hash_ids, 'hash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( uncached_hash_ids, 'hash_id' ) as temp_table_name:
# temp hash_ids to actual hashes
local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._c.execute( 'SELECT hash_id, hash FROM {} CROSS JOIN local_hashes_cache USING ( hash_id );'.format( temp_table_name ) ) }
local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._Execute( 'SELECT hash_id, hash FROM {} CROSS JOIN local_hashes_cache USING ( hash_id );'.format( temp_table_name ) ) }
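The `_MakeTemporaryIntegerTable` call above is the same temp-table idiom the old `HydrusDB.TemporaryIntegerTable` context manager provided: spill the ids into a throwaway table, CROSS JOIN against it, then drop it. A rough sketch of the pattern (a hypothetical standalone helper; the real one lives in hydrus's DB layer):

```python
import os
import sqlite3
from contextlib import contextmanager

@contextmanager
def temporary_integer_table( cursor: sqlite3.Cursor, integers, column_name ):
    
    # a unique name so nested uses on the same connection don't collide (assumed scheme)
    table_name = 'temp_int_{}'.format( os.urandom( 8 ).hex() )
    
    cursor.execute( 'CREATE TEMPORARY TABLE {} ( {} INTEGER PRIMARY KEY );'.format( table_name, column_name ) )
    
    try:
        
        cursor.executemany( 'INSERT OR IGNORE INTO {} ( {} ) VALUES ( ? );'.format( table_name, column_name ), ( ( i, ) for i in integers ) )
        
        yield table_name
        
    finally:
        
        cursor.execute( 'DROP TABLE IF EXISTS {};'.format( table_name ) )
```

Joining a big id set through a temp table like this keeps enormous `IN (...)` lists out of the query text and lets SQLite drive the lookup from an index.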
@ -73,24 +73,24 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_hashes_cache ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_hashes_cache ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
def AddHashIdsToCache( self, hash_ids ):
hash_ids_to_hashes = self.modules_hashes.GetHashIdsToHashes( hash_ids = hash_ids )
self._c.executemany( 'INSERT OR IGNORE INTO local_hashes_cache ( hash_id, hash ) VALUES ( ?, ? );', ( ( hash_id, sqlite3.Binary( hash ) ) for ( hash_id, hash ) in hash_ids_to_hashes.items() ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO local_hashes_cache ( hash_id, hash ) VALUES ( ?, ? );', ( ( hash_id, sqlite3.Binary( hash ) ) for ( hash_id, hash ) in hash_ids_to_hashes.items() ) )
def ClearCache( self ):
self._c.execute( 'DELETE FROM local_hashes_cache;' )
self._Execute( 'DELETE FROM local_hashes_cache;' )
def DropHashIdsFromCache( self, hash_ids ):
self._c.executemany( 'DELETE FROM local_hashes_cache WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM local_hashes_cache WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -118,7 +118,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
def GetHashId( self, hash ) -> int:
result = self._c.execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@ -144,7 +144,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
continue
result = self._c.execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@ -191,7 +191,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
def HasHashId( self, hash_id: int ):
result = self._c.execute( 'SELECT 1 FROM local_hashes_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM local_hashes_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
@ -235,14 +235,14 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
( uncached_tag_id, ) = uncached_tag_ids
# this makes 0 or 1 rows, so do fetchall rather than fetchone
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._c.execute( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', ( uncached_tag_id, ) ) }
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._Execute( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', ( uncached_tag_id, ) ) }
else:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( uncached_tag_ids, 'tag_id' ) as temp_table_name:
# temp tag_ids to actual tags
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._c.execute( 'SELECT tag_id, tag FROM {} CROSS JOIN local_tags_cache USING ( tag_id );'.format( temp_table_name ) ) }
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._Execute( 'SELECT tag_id, tag FROM {} CROSS JOIN local_tags_cache USING ( tag_id );'.format( temp_table_name ) ) }
@ -261,24 +261,24 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
def AddTagIdsToCache( self, tag_ids ):
tag_ids_to_tags = self.modules_tags.GetTagIdsToTags( tag_ids = tag_ids )
self._c.executemany( 'INSERT OR IGNORE INTO local_tags_cache ( tag_id, tag ) VALUES ( ?, ? );', tag_ids_to_tags.items() )
self._ExecuteMany( 'INSERT OR IGNORE INTO local_tags_cache ( tag_id, tag ) VALUES ( ?, ? );', tag_ids_to_tags.items() )
def ClearCache( self ):
self._c.execute( 'DELETE FROM local_tags_cache;' )
self._Execute( 'DELETE FROM local_tags_cache;' )
def DropTagIdsFromCache( self, tag_ids ):
self._c.executemany( 'DELETE FROM local_tags_cache WHERE tag_id = ?;', ( ( tag_id, ) for tag_id in tag_ids ) )
self._ExecuteMany( 'DELETE FROM local_tags_cache WHERE tag_id = ?;', ( ( tag_id, ) for tag_id in tag_ids ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -317,7 +317,7 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
raise HydrusExceptions.TagSizeException( '"{}" tag seems not valid--when cleaned, it ends up with zero size!'.format( tag ) )
result = self._c.execute( 'SELECT tag_id FROM local_tags_cache WHERE tag = ?;', ( tag, ) ).fetchone()
result = self._Execute( 'SELECT tag_id FROM local_tags_cache WHERE tag = ?;', ( tag, ) ).fetchone()
if result is None:
@ -349,7 +349,7 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
def UpdateTagInCache( self, tag_id, tag ):
self._c.execute( 'UPDATE local_tags_cache SET tag = ? WHERE tag_id = ?;', ( tag, tag_id ) )
self._Execute( 'UPDATE local_tags_cache SET tag = ? WHERE tag_id = ?;', ( tag, tag_id ) )
if tag_id in self._tag_ids_to_tags_cache:
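Every `self._c.execute` → `self._Execute` swap in this file routes queries through one method on the new module base class, which is what enables the new 'query planner' profile mode. A minimal sketch of the idea (the method names match the diff; the bodies are assumptions, not hydrus's actual implementation):

```python
import sqlite3

class HydrusDBModule( object ):
    
    query_planner_mode = False # toggled by the new profile mode (assumed flag)
    
    def __init__( self, cursor: sqlite3.Cursor ):
        
        self._c = cursor
        
    
    def _Execute( self, query, *args ) -> sqlite3.Cursor:
        
        # the single choke point for ~1,200 call sites: profiling and
        # EXPLAIN QUERY PLAN logging can hook in here rather than everywhere
        if self.query_planner_mode:
            
            plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN ' + query, *args ).fetchall()
            
            print( ( query, plan_lines ) ) # hydrus writes these to a profile log
            
        
        return self._c.execute( query, *args )
        
    
    def _ExecuteMany( self, query, args_iterator ):
        
        return self._c.executemany( query, args_iterator )
        
    
    def _GetRowCount( self ):
        
        row_count = self._c.rowcount
        
        return 0 if row_count == -1 else row_count
```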

View File

@ -47,12 +47,12 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
for deletee_job_type in deletee_job_types:
self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, deletee_job_type ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, deletee_job_type ) for hash_id in hash_ids ) )
#
self._c.executemany( 'REPLACE INTO file_maintenance_jobs ( hash_id, job_type, time_can_start ) VALUES ( ?, ?, ? );', ( ( hash_id, job_type, time_can_start ) for hash_id in hash_ids ) )
self._ExecuteMany( 'REPLACE INTO file_maintenance_jobs ( hash_id, job_type, time_can_start ) VALUES ( ?, ?, ? );', ( ( hash_id, job_type, time_can_start ) for hash_id in hash_ids ) )
def AddJobsHashes( self, hashes, job_type, time_can_start = 0 ):
@ -64,17 +64,17 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
def CancelFiles( self, hash_ids ):
self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
def CancelJobs( self, job_type ):
self._c.execute( 'DELETE FROM file_maintenance_jobs WHERE job_type = ?;', ( job_type, ) )
self._Execute( 'DELETE FROM file_maintenance_jobs WHERE job_type = ?;', ( job_type, ) )
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );' )
def ClearJobs( self, cleared_job_tuples ):
@ -108,7 +108,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
self.AddJobs( { hash_id }, ClientFiles.REGENERATE_FILE_DATA_JOB_OTHER_HASHES )
result = self._c.execute( 'SELECT 1 FROM file_modified_timestamps WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM file_modified_timestamps WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
if result is None:
@ -131,7 +131,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
file_modified_timestamp = additional_data
self._c.execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) )
self._Execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) )
new_file_info.add( ( hash_id, hash ) )
@ -170,7 +170,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
job_types_to_delete.extend( ClientFiles.regen_file_enum_to_overruled_jobs[ job_type ] )
self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, job_type_to_delete ) for job_type_to_delete in job_types_to_delete ) )
self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, job_type_to_delete ) for job_type_to_delete in job_types_to_delete ) )
if len( new_file_info ) > 0:
@ -216,7 +216,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
for job_type in possible_job_types:
hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusData.GetNow(), 256 ) ) )
hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusData.GetNow(), 256 ) ) )
if len( hash_ids ) > 0:
@ -231,7 +231,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
def GetJobCounts( self ):
result = self._c.execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusData.GetNow(), ) ).fetchall()
result = self._Execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusData.GetNow(), ) ).fetchall()
job_types_to_count = dict( result )

View File

@ -34,9 +34,9 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
def _InitCaches( self ):
if self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'file_inbox', ) ).fetchone() is not None:
if self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'file_inbox', ) ).fetchone() is not None:
self.inbox_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM file_inbox;' ) )
self.inbox_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM file_inbox;' ) )
@ -52,7 +52,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
# hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words
self._c.executemany( insert_phrase + ' files_info ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ? );', rows )
self._ExecuteMany( insert_phrase + ' files_info ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ? );', rows )
def ArchiveFiles( self, hash_ids: typing.Collection[ int ] ) -> typing.Set[ int ]:
@ -66,7 +66,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
if len( archiveable_hash_ids ) > 0:
self._c.executemany( 'DELETE FROM file_inbox WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in archiveable_hash_ids ) )
self._ExecuteMany( 'DELETE FROM file_inbox WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in archiveable_hash_ids ) )
self.inbox_hash_ids.difference_update( archiveable_hash_ids )
@ -76,8 +76,8 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE file_inbox ( hash_id INTEGER PRIMARY KEY );' )
self._c.execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );' )
self._Execute( 'CREATE TABLE file_inbox ( hash_id INTEGER PRIMARY KEY );' )
self._Execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -92,7 +92,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
def GetMime( self, hash_id: int ) -> int:
result = self._c.execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
if result is None:
@ -110,13 +110,13 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
( hash_id, ) = hash_ids
result = self._STL( self._c.execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ) )
result = self._STL( self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ) )
else:
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
result = self._STL( self._c.execute( 'SELECT mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) )
result = self._STL( self._Execute( 'SELECT mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) )
@ -125,7 +125,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
def GetResolution( self, hash_id: int ):
result = self._c.execute( 'SELECT width, height FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT width, height FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
if result is None:
@ -151,13 +151,13 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
( hash_id, ) = hash_ids
result = self._c.execute( 'SELECT size FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT size FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
else:
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
result = self._c.execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ).fetchone()
result = self._Execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ).fetchone()
@ -182,7 +182,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
if len( inboxable_hash_ids ) > 0:
self._c.executemany( 'INSERT OR IGNORE INTO file_inbox VALUES ( ? );', ( ( hash_id, ) for hash_id in inboxable_hash_ids ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO file_inbox VALUES ( ? );', ( ( hash_id, ) for hash_id in inboxable_hash_ids ) )
self.inbox_hash_ids.update( inboxable_hash_ids )

View File

@ -66,11 +66,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.executemany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for ( hash_id, timestamp ) in insert_rows ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for ( hash_id, timestamp ) in insert_rows ) )
pending_changed = HydrusDB.GetRowCount( self._c ) > 0
pending_changed = self._GetRowCount() > 0
return pending_changed
@ -79,21 +79,26 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
num_deleted = HydrusDB.GetRowCount( self._c )
num_deleted = self._GetRowCount()
return num_deleted
def ClearFilesTables( self, service_id: int ):
def ClearFilesTables( self, service_id: int, keep_pending = False ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.execute( 'DELETE FROM {};'.format( current_files_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( deleted_files_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( current_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( deleted_files_table_name ) )
if not keep_pending:
self._Execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
def ClearLocalDeleteRecord( self, hash_ids = None ):
@ -112,14 +117,14 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
self._c.execute( 'DELETE FROM {} WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name, trash_current_files_table_name ) )
self._Execute( 'DELETE FROM {} WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name, trash_current_files_table_name ) )
num_cleared = HydrusDB.GetRowCount( self._c )
num_cleared = self._GetRowCount()
service_ids_to_nums_cleared[ service_id ] = num_cleared
self._c.execute( 'DELETE FROM local_file_deletion_reasons WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( trash_current_files_table_name ) )
self._Execute( 'DELETE FROM local_file_deletion_reasons WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( trash_current_files_table_name ) )
else:
@ -133,14 +138,14 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
num_cleared = HydrusDB.GetRowCount( self._c )
num_cleared = self._GetRowCount()
service_ids_to_nums_cleared[ service_id ] = num_cleared
self._c.executemany( 'DELETE FROM local_file_deletion_reasons WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
self._ExecuteMany( 'DELETE FROM local_file_deletion_reasons WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
@ -149,25 +154,25 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
self._Execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
def DeletePending( self, service_id: int ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
def DropFilesTables( self, service_id: int ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( current_files_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( deleted_files_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( pending_files_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_files_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( current_files_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( deleted_files_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( pending_files_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_files_table_name ) )
def FilterAllCurrentHashIds( self, hash_ids, just_these_service_ids = None ):
@ -183,13 +188,13 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_hash_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
for service_id in service_ids:
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
hash_id_iterator = self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
current_hash_ids.update( hash_id_iterator )
@ -211,13 +216,13 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
pending_hash_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
for service_id in service_ids:
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
hash_id_iterator = self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
pending_hash_ids.update( hash_id_iterator )
@ -233,11 +238,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return set( hash_ids )
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
current_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
current_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
return current_hash_ids
@ -250,11 +255,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return set( hash_ids )
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
pending_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
pending_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
return pending_hash_ids
@ -264,16 +269,16 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
self._CreateIndex( current_files_table_name, [ 'timestamp' ] )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
self._CreateIndex( deleted_files_table_name, [ 'timestamp' ] )
self._CreateIndex( deleted_files_table_name, [ 'original_timestamp' ] )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
self._CreateIndex( petitioned_files_table_name, [ 'reason_id' ] )
@ -281,7 +286,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
result = self._c.execute( 'SELECT hash_id FROM {};'.format( pending_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM {};'.format( pending_files_table_name ) ).fetchone()
if result is None:
@ -299,7 +304,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
result = self._c.execute( 'SELECT hash_id FROM {};'.format( petitioned_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM {};'.format( petitioned_files_table_name ) ).fetchone()
if result is None:
@ -320,11 +325,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
if only_viewable:
# hashes to mimes
result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {};'.format( current_files_table_name, HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {};'.format( current_files_table_name, HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) ) ).fetchone()
else:
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( current_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( current_files_table_name ) ).fetchone()
( count, ) = result
@ -336,7 +341,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN file_inbox USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN file_inbox USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
( count, ) = result
@ -347,7 +352,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {};'.format( current_files_table_name ) ) )
hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {};'.format( current_files_table_name ) ) )
return hash_ids
@ -357,7 +362,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
# hashes to size
result = self._c.execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
( count, ) = result
@ -368,9 +373,9 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
rows = dict( self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
rows = dict( self._Execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
return rows
@ -387,7 +392,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( hash_id, ) ).fetchone()
if result is None:
@ -405,7 +410,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_files_table_name ) ).fetchone()
( count, ) = result
@ -415,7 +420,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
def GetDeletionStatus( self, service_id, hash_id ):
# can have a value here and just be in trash, so we fetch it whatever the end result
result = self._c.execute( 'SELECT reason_id FROM local_file_deletion_reasons WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT reason_id FROM local_file_deletion_reasons WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
if result is None:
@ -433,7 +438,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
is_deleted = False
timestamp = None
result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( hash_id, ) ).fetchone()
if result is not None:
@ -462,7 +467,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
for hash_id in self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) ):
for hash_id in self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) ):
hash_ids_to_current_file_service_ids[ hash_id ].append( service_id )
@ -482,22 +487,22 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
for ( hash_id, timestamp ) in self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ):
for ( hash_id, timestamp ) in self._Execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ):
hash_ids_to_current_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp ) )
for ( hash_id, timestamp, original_timestamp ) in self._c.execute( 'SELECT hash_id, timestamp, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ):
for ( hash_id, timestamp, original_timestamp ) in self._Execute( 'SELECT hash_id, timestamp, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ):
hash_ids_to_deleted_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp, original_timestamp ) )
for hash_id in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ):
for hash_id in self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ):
hash_ids_to_pending_file_service_ids[ hash_id ].append( service_id )
for hash_id in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, petitioned_files_table_name ) ):
for hash_id in self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, petitioned_files_table_name ) ):
hash_ids_to_petitioned_file_service_ids[ hash_id ].append( service_id )
@ -516,7 +521,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
combined_local_current_files_table_name = GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT )
( num_local, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( current_files_table_name, combined_local_current_files_table_name ) ).fetchone()
( num_local, ) = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( current_files_table_name, combined_local_current_files_table_name ) ).fetchone()
return num_local
@ -525,7 +530,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( pending_files_table_name ) ).fetchone()
( count, ) = result
@ -536,7 +541,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_files_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_files_table_name ) ).fetchone()
( count, ) = result
@ -545,7 +550,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
def GetServiceIdCounts( self, hash_ids ) -> typing.Dict[ int, int ]:
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
service_ids_to_counts = {}
@ -554,7 +559,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
# temp hashes to files
( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ).fetchone()
( count, ) = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ).fetchone()
service_ids_to_counts[ service_id ] = count
@ -567,7 +572,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
petitioned_rows = list( HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT reason_id, hash_id FROM {} ORDER BY reason_id LIMIT 100;'.format( petitioned_files_table_name ) ) ).items() )
petitioned_rows = list( HydrusData.BuildKeyToListDict( self._Execute( 'SELECT reason_id, hash_id FROM {} ORDER BY reason_id LIMIT 100;'.format( petitioned_files_table_name ) ) ).items() )
return petitioned_rows
@ -598,9 +603,9 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
rows = self._c.execute( 'SELECT hash_id, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall()
rows = self._Execute( 'SELECT hash_id, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall()
return rows
@ -610,16 +615,16 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
def PetitionFiles( self, service_id, reason_id, hash_ids ):
petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, reason_id ) VALUES ( ?, ? );'.format( petitioned_files_table_name ), ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, reason_id ) VALUES ( ?, ? );'.format( petitioned_files_table_name ), ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
def RecordDeleteFiles( self, service_id, insert_rows ):
@ -628,12 +633,12 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
now = HydrusData.GetNow()
self._c.executemany(
self._ExecuteMany(
'INSERT OR IGNORE INTO {} ( hash_id, timestamp, original_timestamp ) VALUES ( ?, ?, ? );'.format( deleted_files_table_name ),
( ( hash_id, now, original_timestamp ) for ( hash_id, original_timestamp ) in insert_rows )
)
num_new_deleted_files = HydrusDB.GetRowCount( self._c )
num_new_deleted_files = self._GetRowCount()
return num_new_deleted_files
@ -642,25 +647,25 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
def RescindPetitionFiles( self, service_id, hash_ids ):
petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
def RemoveFiles( self, service_id, hash_ids ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
pending_changed = HydrusDB.GetRowCount( self._c ) > 0
pending_changed = self._GetRowCount() > 0
return pending_changed
@ -669,5 +674,5 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
reason_id = self.modules_texts.GetTextId( reason )
self._c.executemany( 'REPLACE INTO local_file_deletion_reasons ( hash_id, reason_id ) VALUES ( ?, ? );', ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
self._ExecuteMany( 'REPLACE INTO local_file_deletion_reasons ( hash_id, reason_id ) VALUES ( ?, ? );', ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
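Every hunk in this commit makes the same mechanical swap: direct cursor access (`self._c.execute`, `self._c.executemany`, `HydrusDB.GetRowCount( self._c )`) becomes a call to a helper on the module itself (`self._Execute`, `self._ExecuteMany`, `self._GetRowCount`), so modules stop reaching into the shared cursor. The following is a minimal sketch of what such helpers could look like; the method names come from the diff, but the bodies are illustrative guesses, not the actual HydrusDBModule implementation.

```python
import sqlite3

class CursorWrappingModule( object ):
    
    def __init__( self, cursor: sqlite3.Cursor ):
        
        self._c = cursor
        
    
    def _Execute( self, query, *args ) -> sqlite3.Cursor:
        
        # one chokepoint for all reads/writes; callers no longer touch self._c
        return self._c.execute( query, *args )
        
    
    def _ExecuteMany( self, query, args_iterator ):
        
        self._c.executemany( query, args_iterator )
        
    
    def _GetLastRowId( self ) -> int:
        
        # replaces direct reads of self._c.lastrowid
        return self._c.lastrowid
        
    
    def _GetRowCount( self ) -> int:
        
        # replaces HydrusDB.GetRowCount( self._c ); sqlite3 reports -1 for SELECTs
        row_count = self._c.rowcount
        
        return 0 if row_count == -1 else row_count
        
    
```

Routing everything through one chokepoint like this is what makes later changes (query logging, profiling, moving the cursor behind a lock) a one-line edit instead of another commit-wide sweep.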
View File
@ -30,7 +30,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
def _TableHasAtLeastRowCount( self, name, row_count ):
cursor = self._c.execute( 'SELECT 1 FROM {};'.format( name ) )
cursor = self._Execute( 'SELECT 1 FROM {};'.format( name ) )
for i in range( row_count ):
@ -47,7 +47,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
def _TableIsEmpty( self, name ):
result = self._c.execute( 'SELECT 1 FROM {};'.format( name ) )
result = self._Execute( 'SELECT 1 FROM {};'.format( name ) )
return result is None
@ -95,7 +95,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
self._c.execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
self._Execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
job_key.SetVariable( 'popup_text_1', 'done!' )
@ -114,7 +114,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
do_it = True
result = self._c.execute( 'SELECT num_rows FROM analyze_timestamps WHERE name = ?;', ( name, ) ).fetchone()
result = self._Execute( 'SELECT num_rows FROM analyze_timestamps WHERE name = ?;', ( name, ) ).fetchone()
if result is not None:
@ -130,22 +130,22 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
if do_it:
self._c.execute( 'ANALYZE ' + name + ';' )
self._Execute( 'ANALYZE ' + name + ';' )
( num_rows, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + name + ';' ).fetchone()
( num_rows, ) = self._Execute( 'SELECT COUNT( * ) FROM ' + name + ';' ).fetchone()
self._c.execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
self._Execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
self._c.execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusData.GetNow() ) )
self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusData.GetNow() ) )
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE last_shutdown_work_time ( last_shutdown_work_time INTEGER );' )
self._Execute( 'CREATE TABLE last_shutdown_work_time ( last_shutdown_work_time INTEGER );' )
self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, num_rows INTEGER, timestamp INTEGER );' )
self._c.execute( 'CREATE TABLE vacuum_timestamps ( name TEXT, timestamp INTEGER );' )
self._Execute( 'CREATE TABLE analyze_timestamps ( name TEXT, num_rows INTEGER, timestamp INTEGER );' )
self._Execute( 'CREATE TABLE vacuum_timestamps ( name TEXT, timestamp INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -161,7 +161,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
def GetLastShutdownWorkTime( self ):
result = self._c.execute( 'SELECT last_shutdown_work_time FROM last_shutdown_work_time;' ).fetchone()
result = self._Execute( 'SELECT last_shutdown_work_time FROM last_shutdown_work_time;' ).fetchone()
if result is None:
@ -175,13 +175,13 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
def GetTableNamesDueAnalysis( self, force_reanalyze = False ):
db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
all_names = set()
for db_name in db_names:
all_names.update( ( name for ( name, ) in self._c.execute( 'SELECT name FROM {}.sqlite_master WHERE type = ?;'.format( db_name ), ( 'table', ) ) ) )
all_names.update( ( name for ( name, ) in self._Execute( 'SELECT name FROM {}.sqlite_master WHERE type = ?;'.format( db_name ), ( 'table', ) ) ) )
all_names.discard( 'sqlite_stat1' )
@ -203,7 +203,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
boundaries.append( ( 100000, False, 3 * 30 * 86400 ) )
# anything bigger than 100k rows will now not be analyzed
existing_names_to_info = { name : ( num_rows, timestamp ) for ( name, num_rows, timestamp ) in self._c.execute( 'SELECT name, num_rows, timestamp FROM analyze_timestamps;' ) }
existing_names_to_info = { name : ( num_rows, timestamp ) for ( name, num_rows, timestamp ) in self._Execute( 'SELECT name, num_rows, timestamp FROM analyze_timestamps;' ) }
names_to_analyze = []
@ -268,11 +268,11 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
path = os.path.join( self._db_dir, filename )
( page_size, ) = self._c.execute( 'PRAGMA {}.page_size;'.format( name ) ).fetchone()
( page_count, ) = self._c.execute( 'PRAGMA {}.page_count;'.format( name ) ).fetchone()
( freelist_count, ) = self._c.execute( 'PRAGMA {}.freelist_count;'.format( name ) ).fetchone()
( page_size, ) = self._Execute( 'PRAGMA {}.page_size;'.format( name ) ).fetchone()
( page_count, ) = self._Execute( 'PRAGMA {}.page_count;'.format( name ) ).fetchone()
( freelist_count, ) = self._Execute( 'PRAGMA {}.freelist_count;'.format( name ) ).fetchone()
result = self._c.execute( 'SELECT timestamp FROM vacuum_timestamps WHERE name = ?;', ( name, ) ).fetchone()
result = self._Execute( 'SELECT timestamp FROM vacuum_timestamps WHERE name = ?;', ( name, ) ).fetchone()
if result is None:
@ -299,14 +299,14 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ):
def RegisterShutdownWork( self ):
self._c.execute( 'DELETE from last_shutdown_work_time;' )
self._Execute( 'DELETE from last_shutdown_work_time;' )
self._c.execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? );', ( HydrusData.GetNow(), ) )
self._Execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? );', ( HydrusData.GetNow(), ) )
def RegisterSuccessfulVacuum( self, name: str ):
self._c.execute( 'DELETE FROM vacuum_timestamps WHERE name = ?;', ( name, ) )
self._Execute( 'DELETE FROM vacuum_timestamps WHERE name = ?;', ( name, ) )
self._c.execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
self._Execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
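The same sweep renames the temporary-table helper: `HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' )` becomes `self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' )`, so call sites no longer hand their cursor to an external utility. Below is a hedged sketch of such a context manager, assuming it stages integers in a throwaway table and drops it on exit; hydrus may well use its attached `mem` database rather than SQLite's built-in `temp` schema.

```python
import contextlib
import os
import sqlite3

@contextlib.contextmanager
def TemporaryIntegerTable( cursor: sqlite3.Cursor, integer_iterable, column_name: str ):
    
    # a random suffix keeps nested or concurrent uses from colliding
    table_name = 'temp.tempint_{}'.format( os.urandom( 8 ).hex() )
    
    cursor.execute( 'CREATE TABLE {} ( {} INTEGER PRIMARY KEY );'.format( table_name, column_name ) )
    
    try:
        
        cursor.executemany( 'INSERT OR IGNORE INTO {} ( {} ) VALUES ( ? );'.format( table_name, column_name ), ( ( i, ) for i in integer_iterable ) )
        
        yield table_name
        
    finally:
        
        cursor.execute( 'DROP TABLE IF EXISTS {};'.format( table_name ) )
        
    
```

A module method like `_MakeTemporaryIntegerTable` would presumably just forward to this with its own cursor. The payoff, visible throughout the hunks above and below, is that large `hash_id` sets can be joined with `CROSS JOIN ... USING ( hash_id )` instead of being splayed into enormous `IN (...)` clauses.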
View File
@ -52,36 +52,36 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
self._c.execute( 'DELETE FROM {};'.format( current_mappings_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( deleted_mappings_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( pending_mappings_table_name ) )
self._c.execute( 'DELETE FROM {};'.format( petitioned_mappings_table_name ) )
self._Execute( 'DELETE FROM {};'.format( current_mappings_table_name ) )
self._Execute( 'DELETE FROM {};'.format( deleted_mappings_table_name ) )
self._Execute( 'DELETE FROM {};'.format( pending_mappings_table_name ) )
self._Execute( 'DELETE FROM {};'.format( petitioned_mappings_table_name ) )
def DropMappingsTables( self, service_id: int ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( current_mappings_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( deleted_mappings_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( pending_mappings_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( current_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( deleted_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( pending_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_mappings_table_name ) )
def GenerateMappingsTables( self, service_id: int ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( current_mappings_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( current_mappings_table_name ) )
self._CreateIndex( current_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( deleted_mappings_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( deleted_mappings_table_name ) )
self._CreateIndex( deleted_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( pending_mappings_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( pending_mappings_table_name ) )
self._CreateIndex( pending_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( petitioned_mappings_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( petitioned_mappings_table_name ) )
self._CreateIndex( petitioned_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
@ -89,7 +89,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
result = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {};'.format( current_mappings_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {};'.format( current_mappings_table_name ) ).fetchone()
( count, ) = result
@ -100,7 +100,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_mappings_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_mappings_table_name ) ).fetchone()
( count, ) = result
@ -111,7 +111,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_mappings_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( pending_mappings_table_name ) ).fetchone()
( count, ) = result
@ -122,7 +122,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_mappings_table_name ) ).fetchone()
result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_mappings_table_name ) ).fetchone()
( count, ) = result
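Each `CREATE TABLE` above is paired with a `self._CreateIndex( table_name, [ 'hash_id', 'tag_id' ], unique = True )` call that builds the reverse lookup. Presumably the helper just assembles the matching `CREATE INDEX` statement; in this sketch the index-naming scheme is a guess, and the table name is assumed to be unqualified (the real names may carry an attached-database prefix like the `external_master.` seen below).

```python
def _CreateIndex( self, table_name, columns, unique = False ):
    
    statement = 'CREATE UNIQUE INDEX IF NOT EXISTS' if unique else 'CREATE INDEX IF NOT EXISTS'
    
    # hypothetical naming scheme, e.g. current_mappings_7_hash_id_tag_id_index
    index_name = '{}_{}_index'.format( table_name, '_'.join( columns ) )
    
    self._Execute( '{} {} ON {} ( {} );'.format( statement, index_name, table_name, ', '.join( columns ) ) )
    
```

The mappings tables are keyed `PRIMARY KEY ( tag_id, hash_id )`, so this secondary `( hash_id, tag_id )` index is what makes lookups by file as cheap as lookups by tag.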
View File
@ -53,14 +53,14 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
( uncached_hash_id, ) = uncached_hash_ids
rows = self._c.execute( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', ( uncached_hash_id, ) ).fetchall()
rows = self._Execute( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', ( uncached_hash_id, ) ).fetchall()
else:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_hash_ids, 'hash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( uncached_hash_ids, 'hash_id' ) as temp_table_name:
# temp hash_ids to actual hashes
rows = self._c.execute( 'SELECT hash_id, hash FROM {} CROSS JOIN hashes USING ( hash_id );'.format( temp_table_name ) ).fetchall()
rows = self._Execute( 'SELECT hash_id, hash FROM {} CROSS JOIN hashes USING ( hash_id );'.format( temp_table_name ) ).fetchall()
@ -100,9 +100,9 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -117,7 +117,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def GetExtraHash( self, hash_type, hash_id ) -> bytes:
result = self._c.execute( 'SELECT {} FROM local_hashes WHERE hash_id = ?;'.format( hash_type ), ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT {} FROM local_hashes WHERE hash_id = ?;'.format( hash_type ), ( hash_id, ) ).fetchone()
if result is None:
@ -146,7 +146,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
continue
result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE {} = ?;'.format( given_hash_type ), ( sqlite3.Binary( given_hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE {} = ?;'.format( given_hash_type ), ( sqlite3.Binary( given_hash ), ) ).fetchone()
if result is not None:
@ -163,7 +163,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
else:
desired_hashes = [ desired_hash for ( desired_hash, ) in self._c.execute( 'SELECT {} FROM local_hashes WHERE hash_id IN {};'.format( desired_hash_type, HydrusData.SplayListForDB( hash_ids ) ) ) ]
desired_hashes = [ desired_hash for ( desired_hash, ) in self._Execute( 'SELECT {} FROM local_hashes WHERE hash_id IN {};'.format( desired_hash_type, HydrusData.SplayListForDB( hash_ids ) ) ) ]
return desired_hashes
@ -185,13 +185,13 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def GetHashId( self, hash ) -> int:
result = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
self._Execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
hash_id = self._c.lastrowid
hash_id = self._GetLastRowId()
else:
@ -205,15 +205,15 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
if hash_type == 'md5':
result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE md5 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE md5 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
elif hash_type == 'sha1':
result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE sha1 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE sha1 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
elif hash_type == 'sha512':
result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE sha512 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE sha512 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@ -238,7 +238,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
continue
result = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@ -254,11 +254,11 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
if len( hashes_not_in_db ) > 0:
self._c.executemany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
self._ExecuteMany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
for hash in hashes_not_in_db:
( hash_id, ) = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
( hash_id, ) = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
hash_ids.add( hash_id )
@ -295,21 +295,21 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def HasExtraHashes( self, hash_id ):
result = self._c.execute( 'SELECT 1 FROM local_hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM local_hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
def HasHashId( self, hash_id: int ):
result = self._c.execute( 'SELECT 1 FROM hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
def SetExtraHashes( self, hash_id, md5, sha1, sha512 ):
self._c.execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
self._Execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
@ -328,13 +328,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.labels ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.labels ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.notes ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.notes ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.texts ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.texts ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );' )
self._c.execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS external_caches.notes_fts4 USING fts4( note );' )
self._Execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS external_caches.notes_fts4 USING fts4( note );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -351,13 +351,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetLabelId( self, label ):
result = self._c.execute( 'SELECT label_id FROM labels WHERE label = ?;', ( label, ) ).fetchone()
result = self._Execute( 'SELECT label_id FROM labels WHERE label = ?;', ( label, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO labels ( label ) VALUES ( ? );', ( label, ) )
self._Execute( 'INSERT INTO labels ( label ) VALUES ( ? );', ( label, ) )
label_id = self._c.lastrowid
label_id = self._GetLastRowId()
else:
@ -369,15 +369,15 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetNoteId( self, note: str ) -> int:
result = self._c.execute( 'SELECT note_id FROM notes WHERE note = ?;', ( note, ) ).fetchone()
result = self._Execute( 'SELECT note_id FROM notes WHERE note = ?;', ( note, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO notes ( note ) VALUES ( ? );', ( note, ) )
self._Execute( 'INSERT INTO notes ( note ) VALUES ( ? );', ( note, ) )
note_id = self._c.lastrowid
note_id = self._GetLastRowId()
self._c.execute( 'REPLACE INTO notes_fts4 ( docid, note ) VALUES ( ?, ? );', ( note_id, note ) )
self._Execute( 'REPLACE INTO notes_fts4 ( docid, note ) VALUES ( ?, ? );', ( note_id, note ) )
else:
@ -394,7 +394,7 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetText( self, text_id ):
result = self._c.execute( 'SELECT text FROM texts WHERE text_id = ?;', ( text_id, ) ).fetchone()
result = self._Execute( 'SELECT text FROM texts WHERE text_id = ?;', ( text_id, ) ).fetchone()
if result is None:
@ -408,13 +408,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetTextId( self, text ):
result = self._c.execute( 'SELECT text_id FROM texts WHERE text = ?;', ( text, ) ).fetchone()
result = self._Execute( 'SELECT text_id FROM texts WHERE text = ?;', ( text, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO texts ( text ) VALUES ( ? );', ( text, ) )
self._Execute( 'INSERT INTO texts ( text ) VALUES ( ? );', ( text, ) )
text_id = self._c.lastrowid
text_id = self._GetLastRowId()
else:
@ -465,14 +465,14 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
( uncached_tag_id, ) = uncached_tag_ids
rows = self._c.execute( 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;', ( uncached_tag_id, ) ).fetchall()
rows = self._Execute( 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;', ( uncached_tag_id, ) ).fetchall()
else:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( uncached_tag_ids, 'tag_id' ) as temp_table_name:
# temp tag_ids to tags to subtags and namespaces
rows = self._c.execute( 'SELECT tag_id, namespace, subtag FROM {} CROSS JOIN tags USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id ) CROSS JOIN namespaces USING ( namespace_id );'.format( temp_table_name ) ).fetchall()
rows = self._Execute( 'SELECT tag_id, namespace, subtag FROM {} CROSS JOIN tags USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id ) CROSS JOIN namespaces USING ( namespace_id );'.format( temp_table_name ) ).fetchall()
@ -491,7 +491,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
namespace_id = self.GetNamespaceId( namespace )
subtag_id = self.GetSubtagId( subtag )
self._c.execute( 'REPLACE INTO tags ( tag_id, namespace_id, subtag_id ) VALUES ( ?, ?, ? );', ( tag_id, namespace_id, subtag_id ) )
self._Execute( 'REPLACE INTO tags ( tag_id, namespace_id, subtag_id ) VALUES ( ?, ?, ? );', ( tag_id, namespace_id, subtag_id ) )
uncached_tag_ids_to_tags[ tag_id ] = tag
@ -504,11 +504,11 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.namespaces ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.namespaces ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.subtags ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.subtags ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -528,19 +528,19 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
if self.null_namespace_id is None:
( self.null_namespace_id, ) = self._c.execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( '', ) ).fetchone()
( self.null_namespace_id, ) = self._Execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( '', ) ).fetchone()
return self.null_namespace_id
result = self._c.execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
result = self._Execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO namespaces ( namespace ) VALUES ( ? );', ( namespace, ) )
self._Execute( 'INSERT INTO namespaces ( namespace ) VALUES ( ? );', ( namespace, ) )
namespace_id = self._c.lastrowid
namespace_id = self._GetLastRowId()
else:
@ -552,13 +552,13 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def GetSubtagId( self, subtag ) -> int:
result = self._c.execute( 'SELECT subtag_id FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
result = self._Execute( 'SELECT subtag_id FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO subtags ( subtag ) VALUES ( ? );', ( subtag, ) )
self._Execute( 'INSERT INTO subtags ( subtag ) VALUES ( ? );', ( subtag, ) )
subtag_id = self._c.lastrowid
subtag_id = self._GetLastRowId()
else:
@ -602,13 +602,13 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
namespace_id = self.GetNamespaceId( namespace )
subtag_id = self.GetSubtagId( subtag )
result = self._c.execute( 'SELECT tag_id FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
result = self._Execute( 'SELECT tag_id FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO tags ( namespace_id, subtag_id ) VALUES ( ?, ? );', ( namespace_id, subtag_id ) )
self._Execute( 'INSERT INTO tags ( namespace_id, subtag_id ) VALUES ( ?, ? );', ( namespace_id, subtag_id ) )
tag_id = self._c.lastrowid
tag_id = self._GetLastRowId()
else:
@ -641,7 +641,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
return True
result = self._c.execute( 'SELECT 1 FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
if result is None:
@ -664,7 +664,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
return False
result = self._c.execute( 'SELECT 1 FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
if result is None:
@ -711,7 +711,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
subtag_id = self.GetSubtagId( subtag )
result = self._c.execute( 'SELECT 1 FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
if result is None:
@ -730,7 +730,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def UpdateTagId( self, tag_id, namespace_id, subtag_id ):
self._c.execute( 'UPDATE tags SET namespace_id = ?, subtag_id = ? WHERE tag_id = ?;', ( namespace_id, subtag_id, tag_id ) )
self._Execute( 'UPDATE tags SET namespace_id = ?, subtag_id = ? WHERE tag_id = ?;', ( namespace_id, subtag_id, tag_id ) )
if tag_id in self._tag_ids_to_tags_cache:
@ -756,9 +756,9 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.url_domains ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.url_domains ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.urls ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.urls ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -780,13 +780,13 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def GetURLDomainId( self, domain ):
result = self._c.execute( 'SELECT domain_id FROM url_domains WHERE domain = ?;', ( domain, ) ).fetchone()
result = self._Execute( 'SELECT domain_id FROM url_domains WHERE domain = ?;', ( domain, ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO url_domains ( domain ) VALUES ( ? );', ( domain, ) )
self._Execute( 'INSERT INTO url_domains ( domain ) VALUES ( ? );', ( domain, ) )
domain_id = self._c.lastrowid
domain_id = self._GetLastRowId()
else:
@ -813,7 +813,7 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
search_phrase = '%.{}'.format( domain )
for ( domain_id, ) in self._c.execute( 'SELECT domain_id FROM url_domains WHERE domain LIKE ?;', ( search_phrase, ) ):
for ( domain_id, ) in self._Execute( 'SELECT domain_id FROM url_domains WHERE domain LIKE ?;', ( search_phrase, ) ):
domain_ids.add( domain_id )
@ -823,7 +823,7 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def GetURLId( self, url ):
result = self._c.execute( 'SELECT url_id FROM urls WHERE url = ?;', ( url, ) ).fetchone()
result = self._Execute( 'SELECT url_id FROM urls WHERE url = ?;', ( url, ) ).fetchone()
if result is None:
@ -838,9 +838,9 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
domain_id = self.GetURLDomainId( domain )
self._c.execute( 'INSERT INTO urls ( domain_id, url ) VALUES ( ?, ? );', ( domain_id, url ) )
self._Execute( 'INSERT INTO urls ( domain_id, url ) VALUES ( ?, ? );', ( domain_id, url ) )
url_id = self._c.lastrowid
url_id = self._GetLastRowId()
else:
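The master-table getters above all share one shape: SELECT the id, INSERT the row if it is missing, then read the id back, which is exactly where this commit swaps `self._c.lastrowid` for `self._GetLastRowId()`. Condensed into a single hypothetical helper (the real modules inline it per table, and some, like `GetNoteId`, do extra work such as updating the FTS table after the insert):

```python
def _GetOrCreateId( self, table_name, id_column, value_column, value ):
    
    result = self._Execute( 'SELECT {} FROM {} WHERE {} = ?;'.format( id_column, table_name, value_column ), ( value, ) ).fetchone()
    
    if result is None:
        
        self._Execute( 'INSERT INTO {} ( {} ) VALUES ( ? );'.format( table_name, value_column ), ( value, ) )
        
        # _GetLastRowId wraps self._c.lastrowid behind the module interface
        return self._GetLastRowId()
        
    
    ( value_id, ) = result
    
    return value_id
    
```

For example, `GetSubtagId` above is this pattern with `table_name = 'subtags'`, `id_column = 'subtag_id'`, `value_column = 'subtag'`.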
View File
@ -125,13 +125,13 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if hash_ids is None:
hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( repository_unregistered_updates_table_name ) ) )
hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_unregistered_updates_table_name ) ) )
else:
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, repository_unregistered_updates_table_name ) ) )
hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, repository_unregistered_updates_table_name ) ) )
@ -141,9 +141,9 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
service_type = self.modules_services.GetService( service_id ).GetServiceType()
with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
hash_ids_to_mimes = { hash_id : mime for ( hash_id, mime ) in self._c.execute( 'SELECT hash_id, mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) }
hash_ids_to_mimes = { hash_id : mime for ( hash_id, mime ) in self._Execute( 'SELECT hash_id, mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) }
if len( hash_ids_to_mimes ) > 0:
@ -165,8 +165,8 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.extend( ( ( hash_id, content_type, processed ) for content_type in content_types ) )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), inserts )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in hash_ids_to_mimes.keys() ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), inserts )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in hash_ids_to_mimes.keys() ) )
@ -175,7 +175,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
self._c.executemany( 'UPDATE {} SET processed = ? WHERE content_type = ?;'.format( repository_updates_processed_table_name ), ( ( False, content_type ) for content_type in content_types ) )
self._ExecuteMany( 'UPDATE {} SET processed = ? WHERE content_type = ?;'.format( repository_updates_processed_table_name ), ( ( False, content_type ) for content_type in content_types ) )
self._ClearOutstandingWorkCache( service_id )
@ -186,7 +186,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
update_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
update_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
self.modules_files_maintenance.AddJobs( update_hash_ids, job_type )
@ -208,9 +208,9 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._RegisterUpdates( service_id )
@ -225,14 +225,14 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_unregistered_updates_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_processed_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_unregistered_updates_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_processed_table_name ) )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( hash_id_map_table_name ) )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( tag_id_map_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( hash_id_map_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( tag_id_map_table_name ) )
self._ClearOutstandingWorkCache( service_id )
@ -249,18 +249,18 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) )
self._CreateIndex( repository_updates_table_name, [ 'hash_id' ] )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) )
self._CreateIndex( repository_updates_processed_table_name, [ 'content_type' ] )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );'.format( hash_id_map_table_name ) )
self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );'.format( tag_id_map_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );'.format( hash_id_map_table_name ) )
self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );'.format( tag_id_map_table_name ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@ -277,14 +277,14 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
( num_updates, ) = self._c.execute( 'SELECT COUNT( * ) FROM {}'.format( repository_updates_table_name ) ).fetchone()
( num_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {}'.format( repository_updates_table_name ) ).fetchone()
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
( num_local_updates, ) = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( table_join ) ).fetchone()
( num_local_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( table_join ) ).fetchone()
content_types_to_num_updates = collections.Counter( dict( self._c.execute( 'SELECT content_type, COUNT( * ) FROM {} GROUP BY content_type;'.format( repository_updates_processed_table_name ) ) ) )
content_types_to_num_processed_updates = collections.Counter( dict( self._c.execute( 'SELECT content_type, COUNT( * ) FROM {} WHERE processed = ? GROUP BY content_type;'.format( repository_updates_processed_table_name ), ( True, ) ) ) )
content_types_to_num_updates = collections.Counter( dict( self._Execute( 'SELECT content_type, COUNT( * ) FROM {} GROUP BY content_type;'.format( repository_updates_processed_table_name ) ) ) )
content_types_to_num_processed_updates = collections.Counter( dict( self._Execute( 'SELECT content_type, COUNT( * ) FROM {} WHERE processed = ? GROUP BY content_type;'.format( repository_updates_processed_table_name ), ( True, ) ) ) )
# little helpful thing that pays off later
for content_type in content_types_to_num_updates:
@ -307,17 +307,17 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
result = self._c.execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
this_is_first_definitions_work = result is None
result = self._c.execute( 'SELECT 1 FROM {} WHERE content_type != ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM {} WHERE content_type != ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
this_is_first_content_work = result is None
min_unregistered_update_index = None
result = self._c.execute( 'SELECT MIN( update_index ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( repository_unregistered_updates_table_name, repository_updates_table_name ) ).fetchone()
result = self._Execute( 'SELECT MIN( update_index ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( repository_unregistered_updates_table_name, repository_updates_table_name ) ).fetchone()
if result is not None:
@ -336,12 +336,12 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
query = 'SELECT update_index, hash_id, content_type FROM {} CROSS JOIN {} USING ( hash_id ) WHERE {};'.format( repository_updates_processed_table_name, repository_updates_table_name, predicate_phrase )
rows = self._c.execute( query ).fetchall()
rows = self._Execute( query ).fetchall()
update_indices_to_unprocessed_hash_ids = HydrusData.BuildKeyToSetDict( ( ( update_index, hash_id ) for ( update_index, hash_id, content_type ) in rows ) )
hash_ids_to_content_types_to_process = HydrusData.BuildKeyToSetDict( ( ( hash_id, content_type ) for ( update_index, hash_id, content_type ) in rows ) )
all_hash_ids = set( itertools.chain.from_iterable( update_indices_to_unprocessed_hash_ids.values() ) )
all_hash_ids = set( hash_ids_to_content_types_to_process.keys() )
all_local_hash_ids = self.modules_files_storage.FilterCurrentHashIds( self.modules_services.local_update_service_id, all_hash_ids )
@ -400,11 +400,11 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
all_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {} ORDER BY update_index ASC;'.format( repository_updates_table_name ) ) )
all_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} ORDER BY update_index ASC;'.format( repository_updates_table_name ) ) )
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
needed_hash_ids = [ hash_id for hash_id in all_hash_ids if hash_id not in existing_hash_ids ]
@ -455,7 +455,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if content_type not in content_types_to_outstanding_local_processing:
result = self._STL( self._c.execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( content_type, False ) ).fetchmany( 20 ) )
result = self._STL( self._Execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( content_type, False ) ).fetchmany( 20 ) )
content_types_to_outstanding_local_processing[ content_type ] = len( result ) >= 20
@ -473,7 +473,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
hash_id_map_table_name = GenerateRepositoryFileDefinitionTableName( service_id )
result = self._c.execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
if result is None:
@ -489,10 +489,10 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
hash_id_map_table_name = GenerateRepositoryFileDefinitionTableName( service_id )
with HydrusDB.TemporaryIntegerTable( self._c, service_hash_ids, 'service_hash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( service_hash_ids, 'service_hash_id' ) as temp_table_name:
# temp service hashes to lookup
hash_ids_potentially_dupes = self._STL( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( service_hash_id );'.format( temp_table_name, hash_id_map_table_name ) ) )
hash_ids_potentially_dupes = self._STL( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( service_hash_id );'.format( temp_table_name, hash_id_map_table_name ) ) )
# every service_id can only exist once, but technically a hash_id could be mapped to two service_ids
@ -502,7 +502,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
for service_hash_id in service_hash_ids:
result = self._c.execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
result = self._Execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
if result is None:
@ -522,7 +522,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
tag_id_map_table_name = GenerateRepositoryTagDefinitionTableName( service_id )
result = self._c.execute( 'SELECT tag_id FROM {} WHERE service_tag_id = ?;'.format( tag_id_map_table_name ), ( service_tag_id, ) ).fetchone()
result = self._Execute( 'SELECT tag_id FROM {} WHERE service_tag_id = ?;'.format( tag_id_map_table_name ), ( service_tag_id, ) ).fetchone()
if result is None:
@ -569,7 +569,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.append( ( service_hash_id, hash_id ) )
self._c.executemany( 'REPLACE INTO {} ( service_hash_id, hash_id ) VALUES ( ?, ? );'.format( hash_id_map_table_name ), inserts )
self._ExecuteMany( 'REPLACE INTO {} ( service_hash_id, hash_id ) VALUES ( ?, ? );'.format( hash_id_map_table_name ), inserts )
num_rows_processed += len( inserts )
@ -606,7 +606,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.append( ( service_tag_id, tag_id ) )
self._c.executemany( 'REPLACE INTO {} ( service_tag_id, tag_id ) VALUES ( ?, ? );'.format( tag_id_map_table_name ), inserts )
self._ExecuteMany( 'REPLACE INTO {} ( service_tag_id, tag_id ) VALUES ( ?, ? );'.format( tag_id_map_table_name ), inserts )
num_rows_processed += len( inserts )
@ -644,15 +644,15 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
current_update_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( repository_updates_table_name ) ) )
current_update_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_updates_table_name ) ) )
all_future_update_hash_ids = self.modules_hashes_local_cache.GetHashIds( metadata.GetUpdateHashes() )
deletee_hash_ids = current_update_hash_ids.difference( all_future_update_hash_ids )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_processed_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_processed_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
inserts = []
@ -664,7 +664,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if hash_id in current_update_hash_ids:
self._c.execute( 'UPDATE {} SET update_index = ? WHERE hash_id = ?;'.format( repository_updates_table_name ), ( update_index, hash_id ) )
self._Execute( 'UPDATE {} SET update_index = ? WHERE hash_id = ?;'.format( repository_updates_table_name ), ( update_index, hash_id ) )
else:
@ -673,8 +673,8 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
self._c.executemany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._RegisterUpdates( service_id )
@ -687,7 +687,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
update_hash_id = self.modules_hashes_local_cache.GetHashId( update_hash )
self._c.executemany( 'UPDATE {} SET processed = ? WHERE hash_id = ? AND content_type = ?;'.format( repository_updates_processed_table_name ), ( ( True, update_hash_id, content_type ) for content_type in content_types ) )
self._ExecuteMany( 'UPDATE {} SET processed = ? WHERE hash_id = ? AND content_type = ?;'.format( repository_updates_processed_table_name ), ( ( True, update_hash_id, content_type ) for content_type in content_types ) )
for content_type in content_types:
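The change repeated across these database modules is mechanical: raw `self._c.execute` / `self._c.executemany` cursor calls become `self._Execute` / `self._ExecuteMany` wrapper calls provided by the module base class. A minimal sketch of that wrapper layer, assuming internals that simply delegate to the sqlite3 cursor (the real HydrusDBModule code may do more):

```python
import sqlite3

class CursorWrapperSketch:
    """Hypothetical stand-in for the base class providing _Execute and friends.
    Centralising cursor access in one place lets the cursor be swapped,
    instrumented, or profiled without touching every module."""
    
    def __init__( self, c: sqlite3.Cursor ):
        self._c = c
    
    def _Execute( self, query: str, *args ) -> sqlite3.Cursor:
        # delegates straight to the underlying cursor (assumed behaviour)
        return self._c.execute( query, *args )
    
    def _ExecuteMany( self, query: str, args_iterator ) -> None:
        self._c.executemany( query, args_iterator )
```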

View File

@@ -162,32 +162,32 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE json_dict ( name TEXT PRIMARY KEY, dump BLOB_BYTES );' )
self._c.execute( 'CREATE TABLE json_dumps ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );' )
self._c.execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )
self._c.execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE json_dict ( name TEXT PRIMARY KEY, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE json_dumps ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );' )
self._Execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )
self._Execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' )
self._c.execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
self._Execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
def DeleteJSONDump( self, dump_type ):
self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
def DeleteJSONDumpNamed( self, dump_type, dump_name = None, timestamp = None ):
if dump_name is None:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) )
elif timestamp is None:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
else:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) )
@@ -195,20 +195,20 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if dump_name is None:
self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) )
self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) )
else:
if dump_type == YAML_DUMP_ID_LOCAL_BOORU: dump_name = dump_name.hex()
self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
if dump_type == YAML_DUMP_ID_LOCAL_BOORU:
service_id = self.modules_services.GetServiceId( CC.LOCAL_BOORU_SERVICE_KEY )
self._c.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) )
self._Execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) )
HG.client_controller.pub( 'refresh_local_booru_shares' )
@@ -219,7 +219,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
all_expected_hashes = set()
# not the GetJSONDumpNamesToBackupTimestamps call, which excludes the latest save!
names_and_timestamps = self._c.execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER, ) ).fetchall()
names_and_timestamps = self._Execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER, ) ).fetchall()
for ( name, timestamp ) in names_and_timestamps:
@@ -252,7 +252,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
for hash in hashes:
result = self._c.execute( 'SELECT version, dump_type, dump FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT version, dump_type, dump FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@@ -289,7 +289,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
except:
self._c.execute( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) )
self._Execute( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) )
self._cursor_transaction_wrapper.CommitAndBegin()
@@ -341,7 +341,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def GetJSONDump( self, dump_type ):
result = self._c.execute( 'SELECT version, dump FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ).fetchone()
result = self._Execute( 'SELECT version, dump FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ).fetchone()
if result is None:
@@ -362,7 +362,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
except:
self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
self._cursor_transaction_wrapper.CommitAndBegin()
@@ -392,7 +392,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if dump_name is None:
results = self._c.execute( 'SELECT dump_name, version, dump, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ).fetchall()
results = self._Execute( 'SELECT dump_name, version, dump, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ).fetchall()
objs = []
@@ -411,7 +411,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
except:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) )
self._cursor_transaction_wrapper.CommitAndBegin()
@@ -425,11 +425,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if timestamp is None:
result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone()
result = self._Execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone()
else:
result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone()
result = self._Execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone()
if result is None:
@@ -450,7 +450,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
except:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) )
self._cursor_transaction_wrapper.CommitAndBegin()
@@ -463,14 +463,14 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def GetJSONDumpNames( self, dump_type ):
names = [ name for ( name, ) in self._c.execute( 'SELECT DISTINCT dump_name FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) ]
names = [ name for ( name, ) in self._Execute( 'SELECT DISTINCT dump_name FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) ]
return names
def GetJSONDumpNamesToBackupTimestamps( self, dump_type ):
names_to_backup_timestamps = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ? ORDER BY timestamp ASC;', ( dump_type, ) ) )
names_to_backup_timestamps = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ? ORDER BY timestamp ASC;', ( dump_type, ) ) )
for ( name, timestamp_list ) in list( names_to_backup_timestamps.items() ):
@@ -487,7 +487,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def GetJSONSimple( self, name ):
result = self._c.execute( 'SELECT dump FROM json_dict WHERE name = ?;', ( name, ) ).fetchone()
result = self._Execute( 'SELECT dump FROM json_dict WHERE name = ?;', ( name, ) ).fetchone()
if result is None:
@@ -515,7 +515,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if dump_name is None:
result = { dump_name : data for ( dump_name, data ) in self._c.execute( 'SELECT dump_name, dump FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) }
result = { dump_name : data for ( dump_name, data ) in self._Execute( 'SELECT dump_name, dump FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) }
if dump_type == YAML_DUMP_ID_LOCAL_BOORU:
@@ -526,7 +526,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if dump_type == YAML_DUMP_ID_LOCAL_BOORU: dump_name = dump_name.hex()
result = self._c.execute( 'SELECT dump FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ).fetchone()
result = self._Execute( 'SELECT dump FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ).fetchone()
if result is None:
@@ -546,7 +546,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def GetYAMLDumpNames( self, dump_type ):
names = [ name for ( name, ) in self._c.execute( 'SELECT dump_name FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) ]
names = [ name for ( name, ) in self._Execute( 'SELECT dump_name FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) ]
if dump_type == YAML_DUMP_ID_LOCAL_BOORU:
@@ -558,7 +558,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def HaveHashedJSONDump( self, hash ):
result = self._c.execute( 'SELECT 1 FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
return result is not None
@@ -577,13 +577,13 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
all_expected_hashes = self.GetAllExpectedHashedJSONHashes()
all_stored_hashes = self._STS( self._c.execute( 'SELECT hash FROM json_dumps_hashed;' ) )
all_stored_hashes = self._STS( self._Execute( 'SELECT hash FROM json_dumps_hashed;' ) )
all_deletee_hashes = all_stored_hashes.difference( all_expected_hashes )
if len( all_deletee_hashes ) > 0:
self._c.executemany( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( ( sqlite3.Binary( hash ), ) for hash in all_deletee_hashes ) )
self._ExecuteMany( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( ( sqlite3.Binary( hash ), ) for hash in all_deletee_hashes ) )
maintenance_tracker.NotifyHashedSerialisableMaintenanceDone()
@@ -636,7 +636,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
try:
self._c.execute( 'INSERT INTO json_dumps_hashed ( hash, dump_type, version, dump ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( hash ), dump_type, version, dump_buffer ) )
self._Execute( 'INSERT INTO json_dumps_hashed ( hash, dump_type, version, dump ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( hash ), dump_type, version, dump_buffer ) )
except:
@@ -703,7 +703,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if store_backups:
existing_timestamps = sorted( self._STI( self._c.execute( 'SELECT timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) ) )
existing_timestamps = sorted( self._STI( self._Execute( 'SELECT timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) ) )
if len( existing_timestamps ) > 0:
@@ -721,11 +721,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
deletee_timestamps.append( object_timestamp ) # if save gets spammed twice in one second, we'll overwrite
self._c.executemany( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', [ ( dump_type, dump_name, timestamp ) for timestamp in deletee_timestamps ] )
self._ExecuteMany( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', [ ( dump_type, dump_name, timestamp ) for timestamp in deletee_timestamps ] )
else:
self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
else:
@@ -737,7 +737,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
try:
self._c.execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, object_timestamp, dump_buffer ) )
self._Execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, object_timestamp, dump_buffer ) )
except:
@@ -826,13 +826,13 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
raise Exception( 'Trying to json dump the object ' + str( obj ) + ' caused an error. Its serialisable info has been dumped to the log.' )
self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
dump_buffer = GenerateBigSQLiteDumpBuffer( dump )
try:
self._c.execute( 'INSERT INTO json_dumps ( dump_type, version, dump ) VALUES ( ?, ?, ? );', ( dump_type, version, dump_buffer ) )
self._Execute( 'INSERT INTO json_dumps ( dump_type, version, dump ) VALUES ( ?, ?, ? );', ( dump_type, version, dump_buffer ) )
except:
@@ -881,7 +881,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
if value is None:
self._c.execute( 'DELETE FROM json_dict WHERE name = ?;', ( name, ) )
self._Execute( 'DELETE FROM json_dict WHERE name = ?;', ( name, ) )
else:
@@ -891,7 +891,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
try:
self._c.execute( 'REPLACE INTO json_dict ( name, dump ) VALUES ( ?, ? );', ( name, dump_buffer ) )
self._Execute( 'REPLACE INTO json_dict ( name, dump ) VALUES ( ?, ? );', ( name, dump_buffer ) )
except:
@@ -910,11 +910,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
dump_name = dump_name.hex()
self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) )
try:
self._c.execute( 'INSERT INTO yaml_dumps ( dump_type, dump_name, dump ) VALUES ( ?, ?, ? );', ( dump_type, dump_name, data ) )
self._Execute( 'INSERT INTO yaml_dumps ( dump_type, dump_name, dump ) VALUES ( ?, ?, ? );', ( dump_type, dump_name, data ) )
except:
@@ -927,7 +927,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
service_id = self.modules_services.GetServiceId( CC.LOCAL_BOORU_SERVICE_KEY )
self._c.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) )
self._Execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) )
HG.client_controller.pub( 'refresh_local_booru_shares' )
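For orientation, the `json_dumps_named` table this file keeps rewriting is keyed on `( dump_type, dump_name, timestamp )`, so each save adds a new backup row and a load without a timestamp takes the newest. A toy sketch of that pattern, with the schema trimmed and the surrounding logic assumed:

```python
import sqlite3
import time

con = sqlite3.connect( ':memory:' )
con.execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )

def save( dump_type, dump_name, version, payload ):
    # two saves in the same second would collide on the primary key, hence OR REPLACE
    con.execute( 'INSERT OR REPLACE INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, int( time.time() ), payload ) )

def load_latest( dump_type, dump_name ):
    # no timestamp given: the newest backup wins, mirroring the ORDER BY in the diff
    return con.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone()

save( 1, 'my session', 1, b'{}' )
print( load_latest( 1, 'my session' ) )
```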

View File

@@ -37,9 +37,9 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
def _InitCaches( self ):
if self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'services', ) ).fetchone() is not None:
if self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'services', ) ).fetchone() is not None:
all_data = self._c.execute( 'SELECT service_id, service_key, service_type, name, dictionary_string FROM services;' ).fetchall()
all_data = self._Execute( 'SELECT service_id, service_key, service_type, name, dictionary_string FROM services;' ).fetchall()
for ( service_id, service_key, service_type, name, dictionary_string ) in all_data:
@@ -62,7 +62,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );' )
self._Execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@@ -78,9 +78,9 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
dictionary_string = dictionary.DumpToString()
self._c.execute( 'INSERT INTO services ( service_key, service_type, name, dictionary_string ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, dictionary_string ) )
self._Execute( 'INSERT INTO services ( service_key, service_type, name, dictionary_string ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, dictionary_string ) )
service_id = self._c.lastrowid
service_id = self._GetLastRowId()
service = ClientServices.GenerateService( service_key, service_type, name, dictionary )
@@ -125,7 +125,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
self._c.execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) )
self._Execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) )
def GetNonDupeName( self, name ) -> str:
@@ -188,7 +188,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ):
dictionary_string = dictionary.DumpToString()
self._c.execute( 'UPDATE services SET name = ?, dictionary_string = ? WHERE service_id = ?;', ( name, dictionary_string, service_id ) )
self._Execute( 'UPDATE services SET name = ?, dictionary_string = ? WHERE service_id = ?;', ( name, dictionary_string, service_id ) )
self._service_ids_to_services[ service_id ] = service
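Alongside `_Execute`, this commit also replaces direct `self._c.lastrowid` reads with `self._GetLastRowId()`, and elsewhere `HydrusDB.GetRowCount( self._c )` with `self._GetRowCount()`. A sketch of what such accessors plausibly wrap (internals assumed):

```python
def _GetLastRowId( self ) -> int:
    # id of the most recent successful INSERT on this cursor
    return self._c.lastrowid

def _GetRowCount( self ) -> int:
    # sqlite3 reports -1 when no row count applies; normalise to 0
    row_count = self._c.rowcount
    return 0 if row_count == -1 else row_count
```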

View File

@@ -25,7 +25,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def _AddLeaf( self, phash_id, phash ):
result = self._c.execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone()
result = self._Execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone()
if result is None:
@@ -46,7 +46,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
ancestor_id = next_ancestor_id
( ancestor_phash, ancestor_radius, ancestor_inner_id, ancestor_inner_population, ancestor_outer_id, ancestor_outer_population ) = self._c.execute( 'SELECT phash, radius, inner_id, inner_population, outer_id, outer_population FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id = ?;', ( ancestor_id, ) ).fetchone()
( ancestor_phash, ancestor_radius, ancestor_inner_id, ancestor_inner_population, ancestor_outer_id, ancestor_outer_population ) = self._Execute( 'SELECT phash, radius, inner_id, inner_population, outer_id, outer_population FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id = ?;', ( ancestor_id, ) ).fetchone()
distance_to_ancestor = HydrusData.Get64BitHammingDistance( phash, ancestor_phash )
@@ -58,7 +58,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
if ancestor_inner_id is None:
self._c.execute( 'UPDATE shape_vptree SET inner_id = ?, radius = ? WHERE phash_id = ?;', ( phash_id, distance_to_ancestor, ancestor_id ) )
self._Execute( 'UPDATE shape_vptree SET inner_id = ?, radius = ? WHERE phash_id = ?;', ( phash_id, distance_to_ancestor, ancestor_id ) )
parent_id = ancestor_id
@@ -71,7 +71,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
if ancestor_outer_id is None:
self._c.execute( 'UPDATE shape_vptree SET outer_id = ? WHERE phash_id = ?;', ( phash_id, ancestor_id ) )
self._Execute( 'UPDATE shape_vptree SET outer_id = ? WHERE phash_id = ?;', ( phash_id, ancestor_id ) )
parent_id = ancestor_id
@@ -84,7 +84,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
if smaller / larger < 0.5:
self._c.execute( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ancestor_id, ) )
self._Execute( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ancestor_id, ) )
# we only do this for the eldest ancestor, as the eventual rebalancing will affect all children
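The `smaller / larger < 0.5` test above is the rebalance trigger: once one side of a node's subtree grows past twice the weight of the other, the node is queued in `shape_maintenance_branch_regen`. As a self-contained check, assuming populations arrive as plain ints:

```python
def needs_branch_regen( inner_population: int, outer_population: int ) -> bool:
    # a node is queued for regeneration once its lighter side falls below
    # half the weight of its heavier side
    smaller = min( inner_population, outer_population )
    larger = max( inner_population, outer_population )
    return larger > 0 and smaller / larger < 0.5

assert needs_branch_regen( 10, 25 )      # 0.4 < 0.5, queue it
assert not needs_branch_regen( 10, 15 )  # 0.67, still balanced enough
```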
@@ -93,8 +93,8 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
self._c.executemany( 'UPDATE shape_vptree SET inner_population = inner_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_inside ) )
self._c.executemany( 'UPDATE shape_vptree SET outer_population = outer_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_outside ) )
self._ExecuteMany( 'UPDATE shape_vptree SET inner_population = inner_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_inside ) )
self._ExecuteMany( 'UPDATE shape_vptree SET outer_population = outer_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_outside ) )
radius = None
@@ -103,7 +103,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
outer_id = None
outer_population = 0
self._c.execute( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) )
self._Execute( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) )
def _GenerateBranch( self, job_key, parent_id, phash_id, phash, children ):
@@ -190,7 +190,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
job_key.SetVariable( 'popup_text_2', 'branch constructed, now committing' )
self._c.executemany( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', insert_rows )
self._ExecuteMany( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', insert_rows )
def _GetInitialIndexGenerationTuples( self ):
@@ -205,13 +205,13 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def _GetPHashId( self, phash ):
result = self._c.execute( 'SELECT phash_id FROM shape_perceptual_hashes WHERE phash = ?;', ( sqlite3.Binary( phash ), ) ).fetchone()
result = self._Execute( 'SELECT phash_id FROM shape_perceptual_hashes WHERE phash = ?;', ( sqlite3.Binary( phash ), ) ).fetchone()
if result is None:
self._c.execute( 'INSERT INTO shape_perceptual_hashes ( phash ) VALUES ( ? );', ( sqlite3.Binary( phash ), ) )
self._Execute( 'INSERT INTO shape_perceptual_hashes ( phash ) VALUES ( ? );', ( sqlite3.Binary( phash ), ) )
phash_id = self._c.lastrowid
phash_id = self._GetLastRowId()
self._AddLeaf( phash_id, phash )
@@ -317,7 +317,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
# grab everything in the branch
( parent_id, ) = self._c.execute( 'SELECT parent_id FROM shape_vptree WHERE phash_id = ?;', ( phash_id, ) ).fetchone()
( parent_id, ) = self._Execute( 'SELECT parent_id FROM shape_vptree WHERE phash_id = ?;', ( phash_id, ) ).fetchone()
cte_table_name = 'branch ( branch_phash_id )'
initial_select = 'SELECT ?'
@@ -325,7 +325,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
with_clause = 'WITH RECURSIVE ' + cte_table_name + ' AS ( ' + initial_select + ' UNION ALL ' + recursive_select + ')'
unbalanced_nodes = self._c.execute( with_clause + ' SELECT branch_phash_id, phash FROM branch, shape_perceptual_hashes ON phash_id = branch_phash_id;', ( phash_id, ) ).fetchall()
unbalanced_nodes = self._Execute( with_clause + ' SELECT branch_phash_id, phash FROM branch, shape_perceptual_hashes ON phash_id = branch_phash_id;', ( phash_id, ) ).fetchall()
# removal of old branch, maintenance schedule, and orphan phashes
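The recursive CTE assembled above walks the whole subtree under a `phash_id` in one query. A standalone sketch with toy data, the schema trimmed to the columns the recursion needs:

```python
import sqlite3

con = sqlite3.connect( ':memory:' )
con.execute( 'CREATE TABLE shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER );' )
con.executemany( 'INSERT INTO shape_vptree VALUES ( ?, ? );', [ ( 1, None ), ( 2, 1 ), ( 3, 1 ), ( 4, 3 ) ] )

query = (
    'WITH RECURSIVE branch ( branch_phash_id ) AS '
    '( SELECT ? UNION ALL SELECT phash_id FROM shape_vptree JOIN branch ON parent_id = branch_phash_id ) '
    'SELECT branch_phash_id FROM branch;'
)

# every node at or under root 3
print( con.execute( query, ( 3, ) ).fetchall() )  # [(3,), (4,)]
```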
@@ -333,18 +333,18 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
unbalanced_phash_ids = { p_id for ( p_id, p_h ) in unbalanced_nodes }
self._c.executemany( 'DELETE FROM shape_vptree WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) )
self._ExecuteMany( 'DELETE FROM shape_vptree WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) )
self._c.executemany( 'DELETE FROM shape_maintenance_branch_regen WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) )
self._ExecuteMany( 'DELETE FROM shape_maintenance_branch_regen WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) )
with HydrusDB.TemporaryIntegerTable( self._c, unbalanced_phash_ids, 'phash_id' ) as temp_phash_ids_table_name:
with self._MakeTemporaryIntegerTable( unbalanced_phash_ids, 'phash_id' ) as temp_phash_ids_table_name:
useful_phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_phash_ids_table_name ) ) )
useful_phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_phash_ids_table_name ) ) )
orphan_phash_ids = unbalanced_phash_ids.difference( useful_phash_ids )
self._c.executemany( 'DELETE FROM shape_perceptual_hashes WHERE phash_id = ?;', ( ( p_id, ) for p_id in orphan_phash_ids ) )
self._ExecuteMany( 'DELETE FROM shape_perceptual_hashes WHERE phash_id = ?;', ( ( p_id, ) for p_id in orphan_phash_ids ) )
useful_nodes = [ row for row in unbalanced_nodes if row[0] in useful_phash_ids ]
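`self._MakeTemporaryIntegerTable`, which replaces the old `HydrusDB.TemporaryIntegerTable( self._c, ... )` helper here, is a context manager that spills a set of ids into a throwaway table so it can be joined against real tables. A sketch of the idea, with the naming scheme and cleanup details assumed:

```python
import contextlib
import os
import sqlite3

@contextlib.contextmanager
def make_temporary_integer_table( c: sqlite3.Cursor, integer_iterable, column_name: str ):
    # unique name so nested uses do not collide (scheme assumed)
    table_name = 'tmp_{}'.format( os.urandom( 8 ).hex() )
    c.execute( 'CREATE TEMP TABLE {} ( {} INTEGER PRIMARY KEY );'.format( table_name, column_name ) )
    try:
        c.executemany( 'INSERT OR IGNORE INTO {} ( {} ) VALUES ( ? );'.format( table_name, column_name ), ( ( i, ) for i in integer_iterable ) )
        yield table_name
    finally:
        c.execute( 'DROP TABLE {};'.format( table_name ) )
```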
@@ -363,7 +363,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
if parent_id is not None:
( parent_inner_id, ) = self._c.execute( 'SELECT inner_id FROM shape_vptree WHERE phash_id = ?;', ( parent_id, ) ).fetchone()
( parent_inner_id, ) = self._Execute( 'SELECT inner_id FROM shape_vptree WHERE phash_id = ?;', ( parent_id, ) ).fetchone()
if parent_inner_id == phash_id:
@@ -374,7 +374,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
query = 'UPDATE shape_vptree SET outer_id = ?, outer_population = ? WHERE phash_id = ?;'
self._c.execute( query, ( new_phash_id, useful_population, parent_id ) )
self._Execute( query, ( new_phash_id, useful_population, parent_id ) )
if useful_population > 0:
@@ -394,11 +394,11 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
phash_ids.add( phash_id )
self._c.executemany( 'INSERT OR IGNORE INTO shape_perceptual_hash_map ( phash_id, hash_id ) VALUES ( ?, ? );', ( ( phash_id, hash_id ) for phash_id in phash_ids ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO shape_perceptual_hash_map ( phash_id, hash_id ) VALUES ( ?, ? );', ( ( phash_id, hash_id ) for phash_id in phash_ids ) )
if HydrusDB.GetRowCount( self._c ) > 0:
if self._GetRowCount() > 0:
self._c.execute( 'REPLACE INTO shape_search_cache ( hash_id, searched_distance ) VALUES ( ?, ? );', ( hash_id, None ) )
self._Execute( 'REPLACE INTO shape_search_cache ( hash_id, searched_distance ) VALUES ( ?, ? );', ( hash_id, None ) )
return phash_ids
@@ -406,31 +406,31 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' )
self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
def DisassociatePHashes( self, hash_id, phash_ids ):
self._c.executemany( 'DELETE FROM shape_perceptual_hash_map WHERE phash_id = ? AND hash_id = ?;', ( ( phash_id, hash_id ) for phash_id in phash_ids ) )
self._ExecuteMany( 'DELETE FROM shape_perceptual_hash_map WHERE phash_id = ? AND hash_id = ?;', ( ( phash_id, hash_id ) for phash_id in phash_ids ) )
useful_phash_ids = { phash for ( phash, ) in self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id IN ' + HydrusData.SplayListForDB( phash_ids ) + ';' ) }
useful_phash_ids = { phash for ( phash, ) in self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id IN ' + HydrusData.SplayListForDB( phash_ids ) + ';' ) }
useless_phash_ids = phash_ids.difference( useful_phash_ids )
self._c.executemany( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ( phash_id, ) for phash_id in useless_phash_ids ) )
self._ExecuteMany( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ( phash_id, ) for phash_id in useless_phash_ids ) )
def FileIsInSystem( self, hash_id ):
result = self._c.execute( 'SELECT 1 FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
@@ -445,7 +445,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def GetMaintenanceStatus( self ):
searched_distances_to_count = collections.Counter( dict( self._c.execute( 'SELECT searched_distance, COUNT( * ) FROM shape_search_cache GROUP BY searched_distance;' ) ) )
searched_distances_to_count = collections.Counter( dict( self._Execute( 'SELECT searched_distance, COUNT( * ) FROM shape_search_cache GROUP BY searched_distance;' ) ) )
return searched_distances_to_count
@@ -480,7 +480,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
job_key.SetStatusTitle( 'similar files metadata maintenance' )
rebalance_phash_ids = self._STL( self._c.execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) )
rebalance_phash_ids = self._STL( self._Execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) )
num_to_do = len( rebalance_phash_ids )
@@ -510,15 +510,15 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
job_key.SetVariable( 'popup_text_1', text )
job_key.SetVariable( 'popup_gauge_1', ( num_done, num_to_do ) )
with HydrusDB.TemporaryIntegerTable( self._c, rebalance_phash_ids, 'phash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( rebalance_phash_ids, 'phash_id' ) as temp_table_name:
# temp phashes to tree
( biggest_phash_id, ) = self._c.execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + outer_population DESC;'.format( temp_table_name ) ).fetchone()
( biggest_phash_id, ) = self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + outer_population DESC;'.format( temp_table_name ) ).fetchone()
self._RegenerateBranch( job_key, biggest_phash_id )
rebalance_phash_ids = self._STL( self._c.execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) )
rebalance_phash_ids = self._STL( self._Execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) )
finally:
@@ -541,7 +541,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
search_distance = new_options.GetInteger( 'similar_files_duplicate_pairs_search_distance' )
( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT 1 FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ? LIMIT 100 );', ( search_distance, ) ).fetchone()
( count, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT 1 FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ? LIMIT 100 );', ( search_distance, ) ).fetchone()
if count >= 100:
@@ -566,13 +566,13 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = ClientDBFilesStorage.GenerateFilesTableNames( self.modules_services.combined_local_file_service_id )
self._c.execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ) )
self._Execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ) )
job_key.SetVariable( 'popup_text_1', 'gathering all leaves' )
self._c.execute( 'DELETE FROM shape_vptree;' )
self._Execute( 'DELETE FROM shape_vptree;' )
all_nodes = self._c.execute( 'SELECT phash_id, phash FROM shape_perceptual_hashes;' ).fetchall()
all_nodes = self._Execute( 'SELECT phash_id, phash FROM shape_perceptual_hashes;' ).fetchall()
job_key.SetVariable( 'popup_text_1', HydrusData.ToHumanInt( len( all_nodes ) ) + ' leaves found, now regenerating' )
@@ -593,14 +593,14 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def ResetSearch( self, hash_ids ):
self._c.executemany( 'UPDATE shape_search_cache SET searched_distance = NULL WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
self._ExecuteMany( 'UPDATE shape_search_cache SET searched_distance = NULL WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
def Search( self, hash_id, max_hamming_distance ):
if max_hamming_distance == 0:
similar_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM shape_perceptual_hash_map WHERE phash_id IN ( SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ? );', ( hash_id, ) ) )
similar_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM shape_perceptual_hash_map WHERE phash_id IN ( SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ? );', ( hash_id, ) ) )
similar_hash_ids_and_distances = [ ( similar_hash_id, 0 ) for similar_hash_id in similar_hash_ids ]
@@ -608,7 +608,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
search_radius = max_hamming_distance
top_node_result = self._c.execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone()
top_node_result = self._Execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone()
if top_node_result is None:
@@ -617,7 +617,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
( root_node_phash_id, ) = top_node_result
search = self._STL( self._c.execute( 'SELECT phash FROM shape_perceptual_hashes NATURAL JOIN shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
search = self._STL( self._Execute( 'SELECT phash FROM shape_perceptual_hashes NATURAL JOIN shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
if len( search ) == 0:
@@ -655,10 +655,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
results = list( self._ExecuteManySelectSingleParam( select_statement, group_of_current_potentials ) )
'''
with HydrusDB.TemporaryIntegerTable( self._c, group_of_current_potentials, 'phash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( group_of_current_potentials, 'phash_id' ) as temp_table_name:
# temp phash_ids to actual phashes and tree info
results = self._c.execute( 'SELECT phash_id, phash, radius, inner_id, outer_id FROM {} CROSS JOIN shape_perceptual_hashes USING ( phash_id ) CROSS JOIN shape_vptree USING ( phash_id );'.format( temp_table_name ) ).fetchall()
results = self._Execute( 'SELECT phash_id, phash, radius, inner_id, outer_id FROM {} CROSS JOIN shape_perceptual_hashes USING ( phash_id ) CROSS JOIN shape_vptree USING ( phash_id );'.format( temp_table_name ) ).fetchall()
for ( node_phash_id, node_phash, node_radius, inner_phash_id, outer_phash_id ) in results:
@@ -728,10 +728,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
similar_phash_ids = list( similar_phash_ids_to_distances.keys() )
with HydrusDB.TemporaryIntegerTable( self._c, similar_phash_ids, 'phash_id' ) as temp_table_name:
with self._MakeTemporaryIntegerTable( similar_phash_ids, 'phash_id' ) as temp_table_name:
# temp phashes to hash map
similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT phash_id, hash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_table_name ) ) )
similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT phash_id, hash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_table_name ) ) )
similar_hash_ids_to_distances = {}
@@ -766,7 +766,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def SetPHashes( self, hash_id, phashes ):
current_phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
current_phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
if len( current_phash_ids ) > 0:
@@ -781,10 +781,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
def StopSearchingFile( self, hash_id ):
phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) )
self.DisassociatePHashes( hash_id, phash_ids )
self._c.execute( 'DELETE FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) )
self._Execute( 'DELETE FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) )
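To summarise what `Search` does in this module: phashes are 64-bit perceptual hashes compared by hamming distance and stored in a vantage-point tree, and a node's children are only descended when the query's search ring can overlap them. A compact sketch with nodes as plain tuples (the on-disk layout obviously differs):

```python
def hamming_64( a: int, b: int ) -> int:
    # 64-bit hamming distance, the metric Get64BitHammingDistance provides
    return bin( ( a ^ b ) & 0xFFFFFFFFFFFFFFFF ).count( '1' )

def search_vptree( node, query: int, search_radius: int, results: list ):
    # node = ( phash, radius, inner_child, outer_child ); radius is None at leaves
    if node is None:
        return
    ( phash, radius, inner, outer ) = node
    distance = hamming_64( query, phash )
    if distance <= search_radius:
        results.append( phash )
    if radius is not None:
        # triangle inequality pruning: inner holds points within radius of phash
        if distance <= radius + search_radius:
            search_vptree( inner, query, search_radius, results )
        if distance >= radius - search_radius:
            search_vptree( outer, query, search_radius, results )
```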

View File

@@ -4896,6 +4896,26 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
HydrusData.ShowText( 'Profiling done: {} slow jobs, {} fast jobs'.format( HydrusData.ToHumanInt( slow ), HydrusData.ToHumanInt( fast ) ) )
elif name == 'query_planner_mode':
if not HG.query_planner_mode:
now = HydrusData.GetNow()
HG.query_planner_start_time = now
HG.query_planner_query_count = 0
HG.query_planner_mode = True
HydrusData.ShowText( 'Query Planner mode on!' )
else:
HG.query_planner_mode = False
HydrusData.ShowText( 'Query Planning done: {} queries analyzed'.format( HydrusData.ToHumanInt( HG.query_planner_query_count ) ) )
elif name == 'pubsub_report_mode':
HG.pubsub_report_mode = not HG.pubsub_report_mode
@@ -6017,10 +6037,13 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
profile_mode_message += os.linesep * 2
profile_mode_message += 'Turn the mode on, do the slow thing for a bit, and then turn it off. In your database directory will be a new profile log, which is really helpful for hydrus dev to figure out what is running slow for you and how to fix it.'
profile_mode_message += os.linesep * 2
profile_mode_message += 'A new Query Planner mode also makes very detailed database analysis. This is an alternate profiling mode hydev is testing.'
profile_mode_message += os.linesep * 2
profile_mode_message += 'More information is available in the help, under \'reducing program lag\'.'
ClientGUIMenus.AppendMenuItem( profiling, 'what is this?', 'Show profile info.', QW.QMessageBox.information, self, 'Profile modes', profile_mode_message )
ClientGUIMenus.AppendMenuCheckItem( profiling, 'profile mode', 'Run detailed \'profiles\'.', HG.profile_mode, self._SwitchBoolean, 'profile_mode' )
ClientGUIMenus.AppendMenuCheckItem( profiling, 'query planner mode', 'Run detailed \'query plans\'.', HG.query_planner_mode, self._SwitchBoolean, 'query_planner_mode' )
ClientGUIMenus.AppendMenu( debug, profiling, 'profiling' )
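The new query planner mode presumably layers hydrus bookkeeping (`HG.query_planner_query_count` and friends) over SQLite's own plan output. The underlying primitive is `EXPLAIN QUERY PLAN`, which is plain sqlite3; a standalone illustration with an assumed toy schema:

```python
import sqlite3

con = sqlite3.connect( ':memory:' )
con.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY, size INTEGER );' )
con.execute( 'CREATE INDEX files_size_index ON files ( size );' )

# each row describes one step of the plan; the last column is the readable detail
for row in con.execute( 'EXPLAIN QUERY PLAN SELECT hash_id FROM files WHERE size > ?;', ( 1024, ) ):
    print( row )
```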

View File

@@ -15,6 +15,7 @@ from hydrus.core import HydrusTags
from hydrus.client import ClientConstants as CC
from hydrus.client import ClientExporting
from hydrus.client import ClientSearch
from hydrus.client import ClientThreading
from hydrus.client.gui import ClientGUIDialogsQuick
from hydrus.client.gui import ClientGUIFunctions
from hydrus.client.gui import ClientGUIScrolledPanels
@@ -770,6 +771,8 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
to_do = self._paths.GetData()
to_do = [ ( ordering_index, media, self._GetPath( media ) ) for ( ordering_index, media ) in to_do ]
num_to_do = len( to_do )
def qt_update_label( text ):
@@ -799,19 +802,33 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
def do_it( directory, neighbouring_txt_tag_service_keys, delete_afterwards, export_symlinks, quit_afterwards ):
job_key = ClientThreading.JobKey( cancellable = True )
job_key.SetStatusTitle( 'file export' )
HG.client_controller.pub( 'message', job_key )
pauser = HydrusData.BigJobPauser()
for ( index, ( ordering_index, media ) ) in enumerate( to_do ):
for ( index, ( ordering_index, media, path ) ) in enumerate( to_do ):
if job_key.IsCancelled():
break
try:
QP.CallAfter( qt_update_label, HydrusData.ConvertValueRangeToPrettyString(index+1,num_to_do) )
x_of_y = HydrusData.ConvertValueRangeToPrettyString( index + 1, num_to_do )
job_key.SetVariable( 'popup_text_1', 'Done {}'.format( x_of_y ) )
job_key.SetVariable( 'popup_gauge_1', ( index + 1, num_to_do ) )
QP.CallAfter( qt_update_label, x_of_y )
hash = media.GetHash()
mime = media.GetMime()
path = self._GetPath( media )
path = os.path.normpath( path )
if not path.startswith( directory ):
@@ -869,7 +886,7 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
pauser.Pause()
if delete_afterwards:
if not job_key.IsCancelled() and delete_afterwards:
QP.CallAfter( qt_update_label, 'deleting' )
@@ -877,11 +894,11 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
if delete_lock_for_archived_files:
deletee_hashes = { media.GetHash() for ( ordering_index, media ) in to_do if not media.HasArchive() }
deletee_hashes = { media.GetHash() for ( ordering_index, media, path ) in to_do if not media.HasArchive() }
else:
deletee_hashes = { media.GetHash() for ( ordering_index, media ) in to_do }
deletee_hashes = { media.GetHash() for ( ordering_index, media, path ) in to_do }
chunks_of_hashes = HydrusData.SplitListIntoChunks( deletee_hashes, 64 )
@@ -896,6 +913,13 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
job_key.DeleteVariable( 'popup_gauge_1' )
job_key.SetVariable( 'popup_text_1', 'Done!' )
job_key.Finish()
job_key.Delete( 5 )
QP.CallAfter( qt_update_label, 'done!' )
time.sleep( 1 )
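The shape of the reworked export loop is worth spelling out: path resolution moves up front onto the Qt thread, and the worker then reports progress through a cancellable `JobKey` popup. A hedged sketch of that loop, where `do_export` is a hypothetical stand-in for the real copy/symlink work:

```python
def export_all( job_key, to_do ):
    num_to_do = len( to_do )
    
    for ( index, ( ordering_index, media, path ) ) in enumerate( to_do ):
        
        if job_key.IsCancelled():
            break
        
        x_of_y = '{} of {}'.format( index + 1, num_to_do )
        
        job_key.SetVariable( 'popup_text_1', 'Done {}'.format( x_of_y ) )
        job_key.SetVariable( 'popup_gauge_1', ( index + 1, num_to_do ) )
        
        do_export( media, path )  # hypothetical helper, not hydrus API
    
    job_key.DeleteVariable( 'popup_gauge_1' )
    job_key.SetVariable( 'popup_text_1', 'Done!' )
    job_key.Finish()
    job_key.Delete( 5 )
```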

View File

@@ -2585,8 +2585,10 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
num_inbox = boned_stats[ 'num_inbox' ]
num_archive = boned_stats[ 'num_archive' ]
num_deleted = boned_stats[ 'num_deleted' ]
size_inbox = boned_stats[ 'size_inbox' ]
size_archive = boned_stats[ 'size_archive' ]
size_deleted = boned_stats[ 'size_deleted' ]
total_viewtime = boned_stats[ 'total_viewtime' ]
total_alternate_files = boned_stats[ 'total_alternate_files' ]
total_duplicate_files = boned_stats[ 'total_duplicate_files' ]
@@ -2595,6 +2597,9 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
num_total = num_archive + num_inbox
size_total = size_archive + size_inbox
num_supertotal = num_total + num_deleted
size_supertotal = size_total + size_deleted
vbox = QP.VBoxLayout()
if num_total < 1000:
@@ -2630,13 +2635,21 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
else:
notebook = ClientGUICommon.BetterNotebook( self )
#
panel = QW.QWidget( notebook )
panel_vbox = QP.VBoxLayout()
average_filesize = size_total // num_total
summary_label = 'Total: {} files, totalling {}, averaging {}'.format( HydrusData.ToHumanInt( num_total ), HydrusData.ToHumanBytes( size_total ), HydrusData.ToHumanBytes( average_filesize ) )
summary_st = ClientGUICommon.BetterStaticText( self, label = summary_label )
summary_st = ClientGUICommon.BetterStaticText( panel, label = summary_label )
QP.AddToLayout( vbox, summary_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, summary_st, CC.FLAGS_CENTER )
num_archive_percent = num_archive / num_total
size_archive_percent = size_archive / size_total
@@ -2644,42 +2657,97 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
num_inbox_percent = num_inbox / num_total
size_inbox_percent = size_inbox / size_total
archive_label = 'Archive: ' + HydrusData.ToHumanInt( num_archive ) + ' files (' + ClientData.ConvertZoomToPercentage( num_archive_percent ) + '), totalling ' + HydrusData.ToHumanBytes( size_archive ) + '(' + ClientData.ConvertZoomToPercentage( size_archive_percent ) + ')'
num_deleted_percent = num_deleted / num_supertotal
size_deleted_percent = size_deleted / size_supertotal
archive_st = ClientGUICommon.BetterStaticText( self, label = archive_label )
archive_label = 'Archive: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_archive ), ClientData.ConvertZoomToPercentage( num_archive_percent ), HydrusData.ToHumanBytes( size_archive ), ClientData.ConvertZoomToPercentage( size_archive_percent ) )
inbox_label = 'Inbox: ' + HydrusData.ToHumanInt( num_inbox ) + ' files (' + ClientData.ConvertZoomToPercentage( num_inbox_percent ) + '), totalling ' + HydrusData.ToHumanBytes( size_inbox ) + '(' + ClientData.ConvertZoomToPercentage( size_inbox_percent ) + ')'
archive_st = ClientGUICommon.BetterStaticText( panel, label = archive_label )
inbox_st = ClientGUICommon.BetterStaticText( self, label = inbox_label )
inbox_label = 'Inbox: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_inbox ), ClientData.ConvertZoomToPercentage( num_inbox_percent ), HydrusData.ToHumanBytes( size_inbox ), ClientData.ConvertZoomToPercentage( size_inbox_percent ) )
QP.AddToLayout( vbox, archive_st, CC.FLAGS_CENTER )
QP.AddToLayout( vbox, inbox_st, CC.FLAGS_CENTER )
inbox_st = ClientGUICommon.BetterStaticText( panel, label = inbox_label )
deleted_label = 'Deleted: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_deleted ), ClientData.ConvertZoomToPercentage( num_deleted_percent ), HydrusData.ToHumanBytes( size_deleted ), ClientData.ConvertZoomToPercentage( size_deleted_percent ) )
deleted_st = ClientGUICommon.BetterStaticText( panel, label = deleted_label )
QP.AddToLayout( panel_vbox, archive_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, inbox_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, deleted_st, CC.FLAGS_CENTER )
if 'earliest_import_time' in boned_stats:
eit = boned_stats[ 'earliest_import_time' ]
eit_label = 'Earliest file import: {} ({})'.format( HydrusData.ConvertTimestampToPrettyTime( eit ), HydrusData.TimestampToPrettyTimeDelta( eit ) )
eit_st = ClientGUICommon.BetterStaticText( panel, label = eit_label )
QP.AddToLayout( panel_vbox, eit_st, CC.FLAGS_CENTER )
panel_vbox.addStretch( 1 )
panel.setLayout( panel_vbox )
notebook.addTab( panel, 'files' )
#
panel = QW.QWidget( notebook )
panel_vbox = QP.VBoxLayout()
( media_views, media_viewtime, preview_views, preview_viewtime ) = total_viewtime
media_label = 'Total media views: ' + HydrusData.ToHumanInt( media_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( media_viewtime )
media_st = ClientGUICommon.BetterStaticText( self, label = media_label )
media_st = ClientGUICommon.BetterStaticText( panel, label = media_label )
preview_label = 'Total preview views: ' + HydrusData.ToHumanInt( preview_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( preview_viewtime )
preview_st = ClientGUICommon.BetterStaticText( self, label = preview_label )
preview_st = ClientGUICommon.BetterStaticText( panel, label = preview_label )
QP.AddToLayout( vbox, media_st, CC.FLAGS_CENTER )
QP.AddToLayout( vbox, preview_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, media_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, preview_st, CC.FLAGS_CENTER )
panel_vbox.addStretch( 1 )
panel.setLayout( panel_vbox )
notebook.addTab( panel, 'views' )
#
panel = QW.QWidget( notebook )
panel_vbox = QP.VBoxLayout()
potentials_label = 'Total duplicate potential pairs: {}'.format( HydrusData.ToHumanInt( total_potential_pairs ) )
duplicates_label = 'Total files set duplicate: {}'.format( HydrusData.ToHumanInt( total_duplicate_files ) )
alternates_label = 'Total duplicate file groups set alternate: {}'.format( HydrusData.ToHumanInt( total_alternate_files ) )
potentials_st = ClientGUICommon.BetterStaticText( self, label = potentials_label )
duplicates_st = ClientGUICommon.BetterStaticText( self, label = duplicates_label )
alternates_st = ClientGUICommon.BetterStaticText( self, label = alternates_label )
potentials_st = ClientGUICommon.BetterStaticText( panel, label = potentials_label )
duplicates_st = ClientGUICommon.BetterStaticText( panel, label = duplicates_label )
alternates_st = ClientGUICommon.BetterStaticText( panel, label = alternates_label )
QP.AddToLayout( vbox, potentials_st, CC.FLAGS_CENTER )
QP.AddToLayout( vbox, duplicates_st, CC.FLAGS_CENTER )
QP.AddToLayout( vbox, alternates_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, potentials_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, duplicates_st, CC.FLAGS_CENTER )
QP.AddToLayout( panel_vbox, alternates_st, CC.FLAGS_CENTER )
panel_vbox.addStretch( 1 )
panel.setLayout( panel_vbox )
notebook.addTab( panel, 'duplicates' )
#
QP.AddToLayout( vbox, notebook, CC.FLAGS_EXPAND_PERPENDICULAR )
vbox.addStretch( 1 )
self.widget().setLayout( vbox )
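One subtlety in the numbers above: deleted files get their own denominator. Archive and inbox percentages are taken against the current total, while the deleted percentage is taken against the supertotal that includes deletions. With assumed toy counts:

```python
num_inbox, num_archive, num_deleted = 250, 750, 1000

num_total = num_archive + num_inbox        # 1000 files still in the client
num_supertotal = num_total + num_deleted   # 2000 files including deleted

num_archive_percent = num_archive / num_total        # 0.75
num_deleted_percent = num_deleted / num_supertotal   # 0.50, not 1.0
```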

View File

@@ -3793,50 +3793,11 @@ class MediaPanelThumbnails( MediaPanel ):
ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set this file as the best quality of its group', 'Set the focused media to be the King of its group.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_FOCUSED_KING ) )
if dissolution_actions_available:
duplicates_single_dissolution_menu = QW.QMenu( duplicates_action_submenu )
if focus_can_be_searched:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'schedule this file to be searched for potentials again', 'Queue this file for another potentials search. Will not remove any existing potentials.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_RESET_FOCUSED_POTENTIAL_SEARCH ) )
if focus_has_potentials:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file\'s potential relationships', 'Clear out this file\'s potential relationships.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_POTENTIALS ) )
if focus_is_in_duplicate_group:
if not focus_is_definitely_king:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its duplicate group', 'Extract this file from its duplicate group and reset its search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_DUPLICATE_GROUP ) )
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s duplicate group completely', 'Completely eliminate this file\'s duplicate group and reset all files\' search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_DUPLICATE_GROUP ) )
if focus_is_in_alternate_group:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its alternate group', 'Extract this file\'s duplicate group from its alternate group and reset the duplicate group\'s search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_ALTERNATE_GROUP ) )
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s alternate group completely', 'Completely eliminate this file\'s alternate group and all duplicate group members. This resets search status for all involved files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_ALTERNATE_GROUP ) )
if focus_has_fps:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'delete all false-positive relationships this file\'s alternate group has with other groups', 'Clear out all false-positive relationships this file\'s alternates group has with other groups and resets search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_CLEAR_FOCUSED_FALSE_POSITIVES ) )
ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_single_dissolution_menu, 'remove/reset for this file' )
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
if multiple_selected:
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
label = 'set this file as better than the ' + HydrusData.ToHumanInt( num_selected - 1 ) + ' other selected'
ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, label, 'Set the focused media to be better than the other selected files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_FOCUSED_BETTER ) )
@@ -3860,13 +3821,75 @@ class MediaPanelThumbnails( MediaPanel ):
ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set selected collections as groups of alternates', 'Set files in the selection which are collected together as alternates.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_ALTERNATE_COLLECTIONS ) )
#
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
duplicates_edit_action_submenu = QW.QMenu( duplicates_action_submenu )
for duplicate_type in ( HC.DUPLICATE_BETTER, HC.DUPLICATE_SAME_QUALITY ):
ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[duplicate_type], 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, duplicate_type )
if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[HC.DUPLICATE_ALTERNATE] + ' (advanced!)', 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, HC.DUPLICATE_ALTERNATE )
ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_edit_action_submenu, 'edit default duplicate metadata merge options' )
#
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set all possible pair combinations as \'potential\' duplicates for the duplicates filter.', 'Queue all these files up in the duplicates filter.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_POTENTIAL ) )
if advanced_mode:
if dissolution_actions_available:
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
duplicates_single_dissolution_menu = QW.QMenu( duplicates_action_submenu )
if focus_can_be_searched:
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'schedule this file to be searched for potentials again', 'Queue this file for another potentials search. Will not remove any existing potentials.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_RESET_FOCUSED_POTENTIAL_SEARCH ) )
if focus_has_potentials:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file\'s potential relationships', 'Clear out this file\'s potential relationships.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_POTENTIALS ) )
if focus_is_in_duplicate_group:
if not focus_is_definitely_king:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its duplicate group', 'Extract this file from its duplicate group and reset its search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_DUPLICATE_GROUP ) )
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s duplicate group completely', 'Completely eliminate this file\'s duplicate group and reset all files\' search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_DUPLICATE_GROUP ) )
if focus_is_in_alternate_group:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its alternate group', 'Extract this file\'s duplicate group from its alternate group and reset the duplicate group\'s search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_ALTERNATE_GROUP ) )
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s alternate group completely', 'Completely eliminate this file\'s alternate group and all duplicate group members. This resets search status for all involved files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_ALTERNATE_GROUP ) )
if focus_has_fps:
ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'delete all false-positive relationships this file\'s alternate group has with other groups', 'Clear out all false-positive relationships this file\'s alternates group has with other groups and resets search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_CLEAR_FOCUSED_FALSE_POSITIVES ) )
ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_single_dissolution_menu, 'remove/reset for this file' )
if multiple_selected:
if advanced_mode:
duplicates_multiple_dissolution_menu = QW.QMenu( duplicates_action_submenu )
@ -3879,21 +3902,6 @@ class MediaPanelThumbnails( MediaPanel ):
ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_multiple_dissolution_menu, 'remove/reset for all selected' )
duplicates_edit_action_submenu = QW.QMenu( duplicates_action_submenu )
for duplicate_type in ( HC.DUPLICATE_BETTER, HC.DUPLICATE_SAME_QUALITY ):
ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[duplicate_type], 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, duplicate_type )
if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[HC.DUPLICATE_ALTERNATE] + ' (advanced!)', 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, HC.DUPLICATE_ALTERNATE )
ClientGUIMenus.AppendSeparator( duplicates_action_submenu )
ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_edit_action_submenu, 'edit default duplicate metadata merge options' )
ClientGUIMenus.AppendMenu( duplicates_menu, duplicates_action_submenu, 'set relationship' )
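The menu hunks above all route through the same two ClientGUIMenus helpers. As a rough orientation, here is a hypothetical sketch of what those helpers do, with signatures inferred purely from the call sites above; the real implementations also handle tooltip, shortcut, and ownership bookkeeping, and hydrus goes through a Qt shim rather than importing PyQt5 directly.

```python
# Hypothetical reimplementation of the two helpers used throughout the
# duplicates menu code above, inferred from their call sites.
from PyQt5 import QtWidgets as QW  # assumption: hydrus actually uses a Qt wrapper

def AppendMenuItem( menu, label, description, func, *args ):
    
    # bind the callable and its arguments to a new action on the menu
    action = QW.QAction( label, menu )
    action.setStatusTip( description )
    action.triggered.connect( lambda checked: func( *args ) )
    
    menu.addAction( action )
    

def AppendMenu( parent_menu, submenu, label ):
    
    # attach an already-populated submenu under the given label
    submenu.setTitle( label )
    parent_menu.addMenu( submenu )
```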

View File

@ -1389,7 +1389,10 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
publish_to_page = False
gallery_import.Start( self._page_key, publish_to_page )
if self._have_started:
gallery_import.Start( self._page_key, publish_to_page )
self._AddGalleryImport( gallery_import )
@ -1459,7 +1462,10 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
publish_to_page = False
gallery_import.Start( self._page_key, publish_to_page )
if self._have_started:
gallery_import.Start( self._page_key, publish_to_page )
self._AddGalleryImport( gallery_import )

View File

@ -241,7 +241,10 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
publish_to_page = False
watcher.Start( self._page_key, publish_to_page )
if self._have_started:
watcher.Start( self._page_key, publish_to_page )
self._AddWatcher( watcher )

View File

@ -37,10 +37,10 @@ LOCAL_BOORU_STRING_PARAMS = set()
LOCAL_BOORU_JSON_PARAMS = set()
LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set()
CLIENT_API_INT_PARAMS = { 'file_id' }
CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key' }
CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type' }
CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'file_service_key' }
CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain' }
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple' }
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple', 'file_sort_asc' }
CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' }
def ParseLocalBooruGETArgs( requests_args ):
@ -1580,16 +1580,121 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient
def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
# optionally pull this from the params, obviously
location_search_context = ClientSearch.LocationSearchContext( current_service_keys = [ CC.LOCAL_FILE_SERVICE_KEY ] )
if 'file_service_key' in request.parsed_request_args or 'file_service_name' in request.parsed_request_args:
if 'file_service_key' in request.parsed_request_args:
file_service_key = request.parsed_request_args[ 'file_service_key' ]
else:
file_service_name = request.parsed_request_args[ 'file_service_name' ]
try:
file_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_FILE_SERVICES, file_service_name )
except:
raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( file_service_name ) )
try:
service = HG.client_controller.services_manager.GetService( file_service_key )
except:
raise HydrusExceptions.BadRequestException( 'Could not find that file service!' )
if service.GetServiceType() not in HC.ALL_FILE_SERVICES:
raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a file service!' )
else:
# I guess ideally we would go for the 'all local services' umbrella, or a list of them, or however we end up doing that
# for now we'll fudge it
file_service_key = list( HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_FILE_DOMAIN, ) ) )[0]
tag_search_context = ClientSearch.TagSearchContext( service_key = CC.COMBINED_TAG_SERVICE_KEY )
if 'tag_service_key' in request.parsed_request_args or 'tag_service_name' in request.parsed_request_args:
if 'tag_service_key' in request.parsed_request_args:
tag_service_key = request.parsed_request_args[ 'tag_service_key' ]
else:
tag_service_name = request.parsed_request_args[ 'tag_service_name' ]
try:
tag_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_TAG_SERVICES, tag_service_name )
except:
raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( tag_service_name ) )
try:
service = HG.client_controller.services_manager.GetService( tag_service_key )
except:
raise HydrusExceptions.BadRequestException( 'Could not find that tag service!' )
if service.GetServiceType() not in HC.ALL_TAG_SERVICES:
raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a tag service!' )
else:
tag_service_key = CC.COMBINED_TAG_SERVICE_KEY
if tag_service_key == CC.COMBINED_TAG_SERVICE_KEY and file_service_key == CC.COMBINED_FILE_SERVICE_KEY:
raise HydrusExceptions.BadRequestException( 'Sorry, search for all known tags over all known files is not supported!' )
location_search_context = ClientSearch.LocationSearchContext( current_service_keys = [ file_service_key ] )
tag_search_context = ClientSearch.TagSearchContext( service_key = tag_service_key )
predicates = ParseClientAPISearchPredicates( request )
file_search_context = ClientSearch.FileSearchContext( location_search_context = location_search_context, tag_search_context = tag_search_context, predicates = predicates )
file_sort_type = CC.SORT_FILES_BY_IMPORT_TIME
if 'file_sort_type' in request.parsed_request_args:
file_sort_type = request.parsed_request_args[ 'file_sort_type' ]
if file_sort_type not in CC.SYSTEM_SORT_TYPES:
raise HydrusExceptions.BadRequestException( 'Sorry, did not understand that sort type!' )
file_sort_asc = False
if 'file_sort_asc' in request.parsed_request_args:
file_sort_asc = request.parsed_request_args.GetValue( 'file_sort_asc', bool )
sort_order = CC.SORT_ASC if file_sort_asc else CC.SORT_DESC
# newest first
sort_by = ClientMedia.MediaSort( sort_type = ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ), sort_order = CC.SORT_DESC )
sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = sort_order )
hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context, sort_by = sort_by, apply_implicit_limit = False )
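The hunk above is the server side of Client API version 19's new search options. As a hedged client-side example using only the standard library (the host, port, access key, service name, and sort value are placeholders; the parameter names come straight from the diff):

```python
# Sketch of a GET against the expanded /get_files/search_files endpoint.
import json
import urllib.parse
import urllib.request

API_BASE = 'http://127.0.0.1:45869'  # default Client API port; adjust to taste
ACCESS_KEY = 'your 64-character hex access key here'

params = {
    'tags' : json.dumps( [ 'kino', 'green' ] ),
    'file_sort_type' : 2,             # placeholder; use a value from CC.SYSTEM_SORT_TYPES
    'file_sort_asc' : 'true',
    'file_service_name' : 'my files'  # or pass file_service_key as hex instead
}

url = '{}/get_files/search_files?{}'.format( API_BASE, urllib.parse.urlencode( params ) )

request = urllib.request.Request( url, headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY } )

with urllib.request.urlopen( request ) as response:
    
    file_ids = json.loads( response.read() )[ 'file_ids' ]
    

print( file_ids )
```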

View File

@ -81,8 +81,8 @@ options = {}
# Misc
NETWORK_VERSION = 20
SOFTWARE_VERSION = 449
CLIENT_API_VERSION = 18
SOFTWARE_VERSION = 450
CLIENT_API_VERSION = 19
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@ -293,10 +293,10 @@ NICE_RESOLUTIONS = {}
NICE_RESOLUTIONS[ ( 640, 480 ) ] = '480p'
NICE_RESOLUTIONS[ ( 1280, 720 ) ] = '720p'
NICE_RESOLUTIONS[ ( 1920, 1080 ) ] = '1080p'
NICE_RESOLUTIONS[ ( 3840, 2060 ) ] = '4k'
NICE_RESOLUTIONS[ ( 3840, 2160 ) ] = '4k'
NICE_RESOLUTIONS[ ( 720, 1280 ) ] = 'vertical 720p'
NICE_RESOLUTIONS[ ( 1080, 1920 ) ] = 'vertical 1080p'
NICE_RESOLUTIONS[ ( 2060, 3840 ) ] = 'vertical 4k'
NICE_RESOLUTIONS[ ( 2160, 3840 ) ] = 'vertical 4k'
NICE_RATIOS = {}
@ -434,7 +434,10 @@ NONEDITABLE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_FI
SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY, IPFS )
AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY )
TAG_CACHE_SPECIFIC_FILE_SERVICES = ( COMBINED_LOCAL_FILE, FILE_REPOSITORY )
ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG )
ALL_TAG_SERVICES = REAL_TAG_SERVICES + ( COMBINED_TAG, )
ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, )
SERVICES_WITH_THUMBNAILS = [ FILE_REPOSITORY, LOCAL_FILE_DOMAIN ]

View File

@ -644,6 +644,41 @@ class HydrusController( object ):
def PrintQueryPlan( self, query, plan_lines ):
pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( HG.query_planner_start_time ) )
query_planner_log_filename = '{} query planner - {}.log'.format( self._name, pretty_timestamp )
query_planner_log_path = os.path.join( self.db_dir, query_planner_log_filename )
with open( query_planner_log_path, 'a', encoding = 'utf-8' ) as f:
prefix = time.strftime( '%Y/%m/%d %H:%M:%S: ' )
if ' ' in query:
first_word = query.split( ' ', 1 )[0]
else:
first_word = 'unknown'
f.write( prefix + first_word )
f.write( os.linesep )
f.write( query )
if len( plan_lines ) > 0:
f.write( os.linesep )
f.write( os.linesep.join( ( str( p ) for p in plan_lines ) ) )
f.write( os.linesep * 2 )
def Read( self, action, *args, **kwargs ):
return self._Read( action, *args, **kwargs )
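PrintQueryPlan above is the sink for the new query planner mode: when `HG.query_planner_mode` is on, the `_Execute` wrapper (added in HydrusDBBase below) runs `EXPLAIN QUERY PLAN` first and hands the result here. A standalone sketch of that flow, mocking the hydrus pieces:

```python
# Standalone mock of the planner-mode flow; print_query_plan stands in for
# HydrusController.PrintQueryPlan, which appends to a log file in the db dir.
import sqlite3

query_planner_mode = True  # stands in for HG.query_planner_mode

def print_query_plan( query, plan_lines ):
    
    print( query )
    
    for plan_line in plan_lines:
        
        print( '  {}'.format( plan_line ) )
    

db = sqlite3.connect( ':memory:' )
c = db.cursor()

c.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY );' )

query = 'SELECT hash_id FROM files WHERE hash_id = ?;'

if query_planner_mode:
    
    # same trick as DBBase._Execute: ask sqlite how it would run the query
    plan_lines = c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), ( 1, ) ).fetchall()
    
    print_query_plan( query, plan_lines )
    

result = c.execute( query, ( 1, ) ).fetchone()
```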

View File

@ -6,6 +6,7 @@ import sqlite3
import traceback
import time
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusEncryption
@ -59,13 +60,6 @@ def GetApproxVacuumDuration( db_size ):
return approx_vacuum_duration
def GetRowCount( c: sqlite3.Cursor ):
row_count = c.rowcount
if row_count == -1: return 0
else: return row_count
def ReadFromCancellableCursor( cursor, largest_group_size, cancelled_hook = None ):
if cancelled_hook is None:
@ -164,11 +158,13 @@ def VacuumDB( db_path ):
c.execute( 'PRAGMA journal_mode = {};'.format( HG.db_journal_mode ) )
class DBCursorTransactionWrapper( object ):
class DBCursorTransactionWrapper( HydrusDBBase.DBBase ):
def __init__( self, c: sqlite3.Cursor, transaction_commit_period: int ):
self._c = c
HydrusDBBase.DBBase.__init__( self )
self._SetCursor( c )
self._transaction_commit_period = transaction_commit_period
@ -184,8 +180,8 @@ class DBCursorTransactionWrapper( object ):
if not self._in_transaction:
self._c.execute( 'BEGIN IMMEDIATE;' )
self._c.execute( 'SAVEPOINT hydrus_savepoint;' )
self._Execute( 'BEGIN IMMEDIATE;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
self._transaction_start_time = HydrusData.GetNow()
self._in_transaction = True
@ -197,24 +193,24 @@ class DBCursorTransactionWrapper( object ):
if self._in_transaction:
self._c.execute( 'COMMIT;' )
self._Execute( 'COMMIT;' )
self._in_transaction = False
self._transaction_contains_writes = False
if HG.db_journal_mode == 'WAL' and HydrusData.TimeHasPassed( self._last_wal_checkpoint_time + 1800 ):
self._c.execute( 'PRAGMA wal_checkpoint(PASSIVE);' )
self._Execute( 'PRAGMA wal_checkpoint(PASSIVE);' )
self._last_wal_checkpoint_time = HydrusData.GetNow()
if HydrusData.TimeHasPassed( self._last_mem_refresh_time + 600 ):
self._c.execute( 'DETACH mem;' )
self._c.execute( 'ATTACH ":memory:" AS mem;' )
self._Execute( 'DETACH mem;' )
self._Execute( 'ATTACH ":memory:" AS mem;' )
TemporaryIntegerTableNameCache.instance().Clear()
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
self._last_mem_refresh_time = HydrusData.GetNow()
@ -249,10 +245,10 @@ class DBCursorTransactionWrapper( object ):
if self._in_transaction:
self._c.execute( 'ROLLBACK TO hydrus_savepoint;' )
self._Execute( 'ROLLBACK TO hydrus_savepoint;' )
# any temp int tables created in this transaction will be rolled back, so 'initialised' can't be trusted. just reset, no big deal
TemporaryIntegerTableNameCache.instance().Clear()
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
# still in transaction
# transaction may no longer contain writes, but it isn't important to figure out that it doesn't
@ -265,9 +261,9 @@ class DBCursorTransactionWrapper( object ):
def Save( self ):
self._c.execute( 'RELEASE hydrus_savepoint;' )
self._Execute( 'RELEASE hydrus_savepoint;' )
self._c.execute( 'SAVEPOINT hydrus_savepoint;' )
self._Execute( 'SAVEPOINT hydrus_savepoint;' )
def TimeToCommit( self ):
@ -275,7 +271,7 @@ class DBCursorTransactionWrapper( object ):
return self._in_transaction and self._transaction_contains_writes and HydrusData.TimeHasPassed( self._transaction_start_time + self._transaction_commit_period )
class HydrusDB( object ):
class HydrusDB( HydrusDBBase.DBBase ):
READ_WRITE_ACTIONS = []
UPDATE_WAIT = 2
@ -287,13 +283,15 @@ class HydrusDB( object ):
raise Exception( 'Sorry, it looks like the db partition has less than 500MB, please free up some space.' )
HydrusDBBase.DBBase.__init__( self )
self._controller = controller
self._db_dir = db_dir
self._db_name = db_name
self._modules = []
TemporaryIntegerTableNameCache()
HydrusDBBase.TemporaryIntegerTableNameCache()
self._ssl_cert_filename = '{}.crt'.format( self._db_name )
self._ssl_key_filename = '{}.key'.format( self._db_name )
@ -332,7 +330,6 @@ class HydrusDB( object ):
self._current_job_name = ''
self._db = None
self._c = None
self._is_connected = False
self._cursor_transaction_wrapper = None
@ -342,12 +339,12 @@ class HydrusDB( object ):
# open and close to clean up in case last session didn't close well
self._InitDB()
self._CloseDBCursor()
self._CloseDBConnection()
self._InitDB()
( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone()
( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone()
if version > HC.SOFTWARE_VERSION:
@ -405,10 +402,10 @@ class HydrusDB( object ):
raise e
( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone()
( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone()
self._CloseDBCursor()
self._CloseDBConnection()
self._controller.CallToThreadLongRunning( self.MainLoop )
@ -427,8 +424,8 @@ class HydrusDB( object ):
# this is useful to do after populating a temp table so the query planner can decide which index to use in a big join that uses it
self._c.execute( 'ANALYZE {};'.format( temp_table_name ) )
self._c.execute( 'ANALYZE mem.sqlite_master;' ) # this reloads the current stats into the query planner, may no longer be needed
self._Execute( 'ANALYZE {};'.format( temp_table_name ) )
self._Execute( 'ANALYZE mem.sqlite_master;' ) # this reloads the current stats into the query planner, may no longer be needed
def _AttachExternalDatabases( self ):
@ -442,12 +439,12 @@ class HydrusDB( object ):
db_path = os.path.join( self._db_dir, filename )
self._c.execute( 'ATTACH ? AS ' + name + ';', ( db_path, ) )
self._Execute( 'ATTACH ? AS ' + name + ';', ( db_path, ) )
db_path = os.path.join( self._db_dir, self._durable_temp_db_filename )
self._c.execute( 'ATTACH ? AS durable_temp;', ( db_path, ) )
self._Execute( 'ATTACH ? AS durable_temp;', ( db_path, ) )
def _CleanAfterJobWork( self ):
@ -455,9 +452,9 @@ class HydrusDB( object ):
self._pubsubs = []
def _CloseDBCursor( self ):
def _CloseDBConnection( self ):
TemporaryIntegerTableNameCache.instance().Clear()
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
if self._db is not None:
@ -466,14 +463,13 @@ class HydrusDB( object ):
self._cursor_transaction_wrapper.Commit()
self._c.close()
self._CloseCursor()
self._db.close()
del self._c
del self._db
self._db = None
self._c = None
self._is_connected = False
@ -488,35 +484,6 @@ class HydrusDB( object ):
raise NotImplementedError()
def _CreateIndex( self, table_name, columns, unique = False ):
if '.' in table_name:
table_name_simple = table_name.split( '.' )[1]
else:
table_name_simple = table_name
index_name = table_name + '_' + '_'.join( columns ) + '_index'
if unique:
create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS '
else:
create_phrase = 'CREATE INDEX IF NOT EXISTS '
on_phrase = ' ON ' + table_name_simple + ' (' + ', '.join( columns ) + ');'
statement = create_phrase + index_name + on_phrase
self._c.execute( statement )
def _DisplayCatastrophicError( self, text ):
message = 'The db encountered a serious error! This is going to be written to the log as well, but here it is for a screenshot:'
@ -534,30 +501,6 @@ class HydrusDB( object ):
def _ExecuteManySelectSingleParam( self, query, single_param_iterator ):
select_args_iterator = ( ( param, ) for param in single_param_iterator )
return self._ExecuteManySelect( query, select_args_iterator )
def _ExecuteManySelect( self, query, select_args_iterator ):
# back in python 2, we did batches of 256 hash_ids/whatever at a time in big "hash_id IN (?,?,?,?,...)" predicates.
# this was useful to get over some 100,000 x fetchall() call overhead, but it would sometimes throw the SQLite query planner off and produce non-optimal queries
# (basically, the "hash_id in (256)" would weight the hash_id index request x 256 vs another when comparing the sqlite_stat1 tables, which could lead to disastrous plans for some indices with a low-median, very-high-mean skewed distribution)
# python 3 is better about call overhead, so we go back to plain per-row SELECTs here (a true cursor.executemany SELECT would be ideal, if sqlite ever supports one)
for select_args in select_args_iterator:
for result in self._c.execute( query, select_args ):
yield result
def _GenerateDBJob( self, job_type, synchronous, action, *args, **kwargs ):
return HydrusData.JobDatabase( job_type, synchronous, action, *args, **kwargs )
@ -597,9 +540,9 @@ class HydrusDB( object ):
self._InitDBCursor()
self._InitDBConnection()
result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE type = ? AND name = ?;', ( 'table', 'version' ) ).fetchone()
result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE type = ? AND name = ?;', ( 'table', 'version' ) ).fetchone()
if result is None:
@ -616,9 +559,9 @@ class HydrusDB( object ):
def _InitDBCursor( self ):
def _InitDBConnection( self ):
self._CloseDBCursor()
self._CloseDBConnection()
db_path = os.path.join( self._db_dir, self._db_filenames[ 'main' ] )
@ -626,7 +569,9 @@ class HydrusDB( object ):
self._db = sqlite3.connect( db_path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
self._c = self._db.cursor()
c = self._db.cursor()
self._SetCursor( c )
self._is_connected = True
@ -636,42 +581,42 @@ class HydrusDB( object ):
if HG.no_db_temp_files:
self._c.execute( 'PRAGMA temp_store = 2;' ) # use memory for temp store exclusively
self._Execute( 'PRAGMA temp_store = 2;' ) # use memory for temp store exclusively
self._AttachExternalDatabases()
self._c.execute( 'ATTACH ":memory:" AS mem;' )
self._Execute( 'ATTACH ":memory:" AS mem;' )
except Exception as e:
raise HydrusExceptions.DBAccessException( 'Could not connect to database! This could be an issue related to WAL and network storage, or something else. If it is not obvious to you, please let hydrus dev know. Error follows:' + os.linesep * 2 + str( e ) )
TemporaryIntegerTableNameCache.instance().Clear()
HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear()
# durable_temp is not excluded here
db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp' ) ]
db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp' ) ]
for db_name in db_names:
# MB -> KB
cache_size = HG.db_cache_size * 1024
self._c.execute( 'PRAGMA {}.cache_size = -{};'.format( db_name, cache_size ) )
self._Execute( 'PRAGMA {}.cache_size = -{};'.format( db_name, cache_size ) )
self._c.execute( 'PRAGMA {}.journal_mode = {};'.format( db_name, HG.db_journal_mode ) )
self._Execute( 'PRAGMA {}.journal_mode = {};'.format( db_name, HG.db_journal_mode ) )
if HG.db_journal_mode in ( 'PERSIST', 'WAL' ):
self._c.execute( 'PRAGMA {}.journal_size_limit = {};'.format( db_name, 1024 ** 3 ) ) # 1GB for now
self._Execute( 'PRAGMA {}.journal_size_limit = {};'.format( db_name, 1024 ** 3 ) ) # 1GB for now
self._c.execute( 'PRAGMA {}.synchronous = {};'.format( db_name, HG.db_synchronous ) )
self._Execute( 'PRAGMA {}.synchronous = {};'.format( db_name, HG.db_synchronous ) )
try:
self._c.execute( 'SELECT * FROM {}.sqlite_master;'.format( db_name ) ).fetchone()
self._Execute( 'SELECT * FROM {}.sqlite_master;'.format( db_name ) ).fetchone()
except sqlite3.OperationalError as e:
@ -771,9 +716,9 @@ class HydrusDB( object ):
HydrusData.Print( 'When the transaction failed, attempting to rollback the database failed. Please restart the client as soon as is convenient.' )
self._CloseDBCursor()
self._CloseDBConnection()
self._InitDBCursor()
self._InitDBConnection()
HydrusData.PrintException( rollback_e )
@ -815,28 +760,7 @@ class HydrusDB( object ):
def _ShrinkMemory( self ):
self._c.execute( 'PRAGMA shrink_memory;' )
def _STI( self, iterable_cursor ):
# strip singleton tuples to an iterator
return ( item for ( item, ) in iterable_cursor )
def _STL( self, iterable_cursor ):
# strip singleton tuples to a list
return [ item for ( item, ) in iterable_cursor ]
def _STS( self, iterable_cursor ):
# strip singleton tuples to a set
return { item for ( item, ) in iterable_cursor }
self._Execute( 'PRAGMA shrink_memory;' )
def _UnloadModules( self ):
@ -951,7 +875,7 @@ class HydrusDB( object ):
try:
self._InitDBCursor() # have to reinitialise because the thread id has changed
self._InitDBConnection() # have to reinitialise because the thread id has changed
self._InitCaches()
@ -1030,7 +954,7 @@ class HydrusDB( object ):
if self._pause_and_disconnect:
self._CloseDBCursor()
self._CloseDBConnection()
while self._pause_and_disconnect:
@ -1042,11 +966,11 @@ class HydrusDB( object ):
time.sleep( 1 )
self._InitDBCursor()
self._InitDBConnection()
self._CloseDBCursor()
self._CloseDBConnection()
temp_path = os.path.join( self._db_dir, self._durable_temp_db_filename )
@ -1111,100 +1035,3 @@ class HydrusDB( object ):
if synchronous: return job.GetResult()
class TemporaryIntegerTableNameCache( object ):
my_instance = None
def __init__( self ):
TemporaryIntegerTableNameCache.my_instance = self
self._column_names_to_table_names = collections.defaultdict( collections.deque )
self._column_names_counter = collections.Counter()
@staticmethod
def instance() -> 'TemporaryIntegerTableNameCache':
if TemporaryIntegerTableNameCache.my_instance is None:
raise Exception( 'TemporaryIntegerTableNameCache is not yet initialised!' )
else:
return TemporaryIntegerTableNameCache.my_instance
def Clear( self ):
self._column_names_to_table_names = collections.defaultdict( collections.deque )
self._column_names_counter = collections.Counter()
def GetName( self, column_name ):
table_names = self._column_names_to_table_names[ column_name ]
initialised = True
if len( table_names ) == 0:
initialised = False
i = self._column_names_counter[ column_name ]
table_name = 'mem.temp_int_{}_{}'.format( column_name, i )
table_names.append( table_name )
self._column_names_counter[ column_name ] += 1
table_name = table_names.pop()
return ( initialised, table_name )
def ReleaseName( self, column_name, table_name ):
self._column_names_to_table_names[ column_name ].append( table_name )
class TemporaryIntegerTable( object ):
def __init__( self, cursor, integer_iterable, column_name ):
if not isinstance( integer_iterable, set ):
integer_iterable = set( integer_iterable )
self._cursor = cursor
self._integer_iterable = integer_iterable
self._column_name = column_name
( self._initialised, self._table_name ) = TemporaryIntegerTableNameCache.instance().GetName( self._column_name )
def __enter__( self ):
if not self._initialised:
self._cursor.execute( 'CREATE TABLE IF NOT EXISTS {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) )
self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? );'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) )
return self._table_name
def __exit__( self, exc_type, exc_val, exc_tb ):
self._cursor.execute( 'DELETE FROM {};'.format( self._table_name ) )
TemporaryIntegerTableNameCache.instance().ReleaseName( self._column_name, self._table_name )
return False

hydrus/core/HydrusDBBase.py (new file, 257 lines)
View File

@ -0,0 +1,257 @@
import collections
import sqlite3
from hydrus.core import HydrusGlobals as HG
class TemporaryIntegerTableNameCache( object ):
my_instance = None
def __init__( self ):
TemporaryIntegerTableNameCache.my_instance = self
self._column_names_to_table_names = collections.defaultdict( collections.deque )
self._column_names_counter = collections.Counter()
@staticmethod
def instance() -> 'TemporaryIntegerTableNameCache':
if TemporaryIntegerTableNameCache.my_instance is None:
raise Exception( 'TemporaryIntegerTableNameCache is not yet initialised!' )
else:
return TemporaryIntegerTableNameCache.my_instance
def Clear( self ):
self._column_names_to_table_names = collections.defaultdict( collections.deque )
self._column_names_counter = collections.Counter()
def GetName( self, column_name ):
table_names = self._column_names_to_table_names[ column_name ]
initialised = True
if len( table_names ) == 0:
initialised = False
i = self._column_names_counter[ column_name ]
table_name = 'mem.temp_int_{}_{}'.format( column_name, i )
table_names.append( table_name )
self._column_names_counter[ column_name ] += 1
table_name = table_names.pop()
return ( initialised, table_name )
def ReleaseName( self, column_name, table_name ):
self._column_names_to_table_names[ column_name ].append( table_name )
class TemporaryIntegerTable( object ):
def __init__( self, cursor: sqlite3.Cursor, integer_iterable, column_name ):
if not isinstance( integer_iterable, set ):
integer_iterable = set( integer_iterable )
self._cursor = cursor
self._integer_iterable = integer_iterable
self._column_name = column_name
( self._initialised, self._table_name ) = TemporaryIntegerTableNameCache.instance().GetName( self._column_name )
def __enter__( self ):
if not self._initialised:
self._cursor.execute( 'CREATE TABLE IF NOT EXISTS {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) )
self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? );'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) )
return self._table_name
def __exit__( self, exc_type, exc_val, exc_tb ):
self._cursor.execute( 'DELETE FROM {};'.format( self._table_name ) )
TemporaryIntegerTableNameCache.instance().ReleaseName( self._column_name, self._table_name )
return False
class DBBase( object ):
def __init__( self ):
self._c = None
def _CloseCursor( self ):
if self._c is not None:
self._c.close()
del self._c
self._c = None
def _CreateIndex( self, table_name, columns, unique = False ):
if unique:
create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS'
else:
create_phrase = 'CREATE INDEX IF NOT EXISTS'
index_name = self._GenerateIndexName( table_name, columns )
if '.' in table_name:
table_name_simple = table_name.split( '.' )[1]
else:
table_name_simple = table_name
statement = '{} {} ON {} ({});'.format( create_phrase, index_name, table_name_simple, ', '.join( columns ) )
self._Execute( statement )
def _Execute( self, query, *args ) -> sqlite3.Cursor:
if HG.query_planner_mode:
plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall()
HG.query_planner_query_count += 1
HG.client_controller.PrintQueryPlan( query, plan_lines )
return self._c.execute( query, *args )
def _ExecuteMany( self, query, args_iterator ):
if HG.query_planner_mode:
args_iterator = list( args_iterator )
plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall()
HG.query_planner_query_count += 1
HG.client_controller.PrintQueryPlan( query, plan_lines )
self._c.executemany( query, args_iterator )
def _GenerateIndexName( self, table_name, columns ):
return '{}_{}_index'.format( table_name, '_'.join( columns ) )
def _ExecuteManySelectSingleParam( self, query, single_param_iterator ):
select_args_iterator = ( ( param, ) for param in single_param_iterator )
return self._ExecuteManySelect( query, select_args_iterator )
def _ExecuteManySelect( self, query, select_args_iterator ):
# back in python 2, we did batches of 256 hash_ids/whatever at a time in big "hash_id IN (?,?,?,?,...)" predicates.
# this was useful to get over some 100,000 x fetchall() call overhead, but it would sometimes throw the SQLite query planner off and produce non-optimal queries
# (basically, the "hash_id in (256)" would weight the hash_id index request x 256 vs another when comparing the sqlite_stat1 tables, which could lead to disastrous plans for some indices with a low-median, very-high-mean skewed distribution)
# python 3 is better about call overhead, so we go back to plain per-row SELECTs here (a true cursor.executemany SELECT would be ideal, if sqlite ever supports one)
for select_args in select_args_iterator:
for result in self._Execute( query, select_args ):
yield result
def _GetLastRowId( self ) -> int:
return self._c.lastrowid
def _GetRowCount( self ):
row_count = self._c.rowcount
if row_count == -1:
return 0
else:
return row_count
def _MakeTemporaryIntegerTable( self, integer_iterable, column_name ):
return TemporaryIntegerTable( self._c, integer_iterable, column_name )
def _SetCursor( self, c: sqlite3.Cursor ):
self._c = c
def _STI( self, iterable_cursor ):
# strip singleton tuples to an iterator
return ( item for ( item, ) in iterable_cursor )
def _STL( self, iterable_cursor ):
# strip singleton tuples to a list
return [ item for ( item, ) in iterable_cursor ]
def _STS( self, iterable_cursor ):
# strip singleton tuples to a set
return { item for ( item, ) in iterable_cursor }
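With the helpers now split into their own module, the temp-table context manager can be exercised outside the full database class. A sketch under stated assumptions: the hydrus source is on the import path, and the `mem` attach plus the name-cache singleton, which the real HydrusDB sets up itself, are created manually here.

```python
# Standalone sketch of the TemporaryIntegerTable contract: __enter__ creates
# (or reuses) a mem.temp_int_* table and fills it, __exit__ empties it and
# returns the name to the cache for reuse.
import sqlite3

from hydrus.core import HydrusDBBase  # path as added in this commit

db = sqlite3.connect( ':memory:', isolation_level = None )
c = db.cursor()

c.execute( 'ATTACH ":memory:" AS mem;' )  # the helper names its tables mem.temp_int_*
c.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY, size INTEGER );' )
c.executemany( 'INSERT INTO files VALUES ( ?, ? );', [ ( 1, 100 ), ( 2, 200 ), ( 3, 300 ) ] )

HydrusDBBase.TemporaryIntegerTableNameCache()  # module-level singleton, normally made by HydrusDB

with HydrusDBBase.TemporaryIntegerTable( c, [ 1, 3 ], 'hash_id' ) as temp_table_name:
    
    # an ANALYZE here, as HydrusDB does after populating temp tables, helps
    # the query planner pick indices for big joins
    sizes = c.execute( 'SELECT size FROM {} JOIN files USING ( hash_id );'.format( temp_table_name ) ).fetchall()
    

print( sizes )  # [(100,), (300,)]
```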

View File

@ -1,41 +1,17 @@
import sqlite3
import typing
class HydrusDBModule( object ):
from hydrus.core import HydrusDBBase
class HydrusDBModule( HydrusDBBase.DBBase ):
def __init__( self, name, cursor: sqlite3.Cursor ):
HydrusDBBase.DBBase.__init__( self )
self.name = name
self._c = cursor
def _CreateIndex( self, table_name, columns, unique = False ):
if '.' in table_name:
table_name_simple = table_name.split( '.' )[1]
else:
table_name_simple = table_name
index_name = self._GenerateIndexName( table_name, columns )
if unique:
create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS '
else:
create_phrase = 'CREATE INDEX IF NOT EXISTS '
on_phrase = ' ON ' + table_name_simple + ' (' + ', '.join( columns ) + ');'
statement = create_phrase + index_name + on_phrase
self._c.execute( statement )
self._SetCursor( cursor )
def _GetInitialIndexGenerationTuples( self ):
@ -43,32 +19,6 @@ class HydrusDBModule( object ):
raise NotImplementedError()
def _GenerateIndexName( self, table_name, columns ):
return '{}_{}_index'.format( table_name, '_'.join( columns ) )
def _STI( self, iterable_cursor ):
# strip singleton tuples to an iterator
return ( item for ( item, ) in iterable_cursor )
def _STL( self, iterable_cursor ):
# strip singleton tuples to a list
return [ item for ( item, ) in iterable_cursor ]
def _STS( self, iterable_cursor ):
# strip singleton tuples to a set
return { item for ( item, ) in iterable_cursor }
def CreateInitialIndices( self ):
index_generation_tuples = self._GetInitialIndexGenerationTuples()

View File

@ -36,6 +36,10 @@ menu_profile_min_job_time_ms = 16
pubsub_profile_min_job_time_ms = 5
ui_timer_profile_min_job_time_ms = 5
query_planner_mode = False
query_planner_start_time = 0
query_planner_query_count = 0
profile_start_time = 0
profile_slow_count = 0
profile_fast_count = 0

View File

@ -51,7 +51,7 @@ class HydrusRatingArchive( object ):
if not os.path.exists( self._path ): create_db = True
else: create_db = False
self._InitDBCursor()
self._InitDBConnection()
if create_db: self._InitDB()
@ -65,7 +65,7 @@ class HydrusRatingArchive( object ):
self._c.execute( 'CREATE TABLE ratings ( hash BLOB PRIMARY KEY, rating REAL );' )
def _InitDBCursor( self ):
def _InitDBConnection( self ):
self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )

View File

@ -100,7 +100,7 @@ class HydrusTagArchive( object ):
if not os.path.exists( self._path ): create_db = True
else: create_db = False
self._InitDBCursor()
self._InitDBConnection()
if create_db: self._InitDB()
@ -129,7 +129,7 @@ class HydrusTagArchive( object ):
self._c.execute( 'CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );' )
def _InitDBCursor( self ):
def _InitDBConnection( self ):
self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
@ -496,7 +496,7 @@ class HydrusTagPairArchive( object ):
is_new_db = not os.path.exists( self._path )
self._InitDBCursor()
self._InitDBConnection()
if is_new_db:
@ -525,7 +525,7 @@ class HydrusTagPairArchive( object ):
self._c.execute( 'CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );' )
def _InitDBCursor( self ):
def _InitDBConnection( self ):
self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )

View File

@ -1972,16 +1972,22 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
self._metadata[ update_index ] = ( update_hashes, begin, end )
for update_index in sorted( self._metadata.keys() ):
( update_hashes, begin, end ) = self._metadata[ update_index ]
self._RecalcHashes()
self._biggest_end = self._CalculateBiggestEnd()
def _RecalcHashes( self ):
self._update_hashes = set()
self._update_hashes_ordered = []
for ( update_index, ( update_hashes, begin, end ) ) in sorted( self._metadata.items() ):
self._update_hashes.update( update_hashes )
self._update_hashes_ordered.extend( update_hashes )
self._biggest_end = self._CalculateBiggestEnd()
def AppendUpdate( self, update_hashes, begin, end, next_update_due ):
@ -2021,9 +2027,7 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
with self._lock:
data = sorted( self._metadata.items() )
for ( update_index, ( update_hashes, begin, end ) ) in data:
for ( update_index, ( update_hashes, begin, end ) ) in sorted( self._metadata.items() ):
if HydrusData.SetsIntersect( hashes, update_hashes ):
@ -2229,6 +2233,8 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
self._next_update_due = new_next_update_due
self._biggest_end = self._CalculateBiggestEnd()
self._RecalcHashes()
def UpdateIsEmpty( self, update_index ):

View File

@ -260,7 +260,7 @@ def parse_value(string, spec):
return string[len(match[0]):], (hashes, distance)
raise ValueError("Invalid value, expected a list of hashes with distance")
elif spec == Value.HASHLIST_WITH_ALGORITHM:
match = re.match('(?P<hashes>([0-9a-f]+(\s|,)+)+[0-9a-f]+)((with\s+)?algorithm)?\s*(?P<algorithm>sha256|sha512|md5|sha1|)', string)
match = re.match('(?P<hashes>[0-9a-f]+((\s|,)+[0-9a-f]+)*)((with\s+)?algorithm)?\s*(?P<algorithm>sha256|sha512|md5|sha1|)', string)
if match:
hashes = set(hsh.strip() for hsh in re.sub('\s', ' ', match['hashes'].replace(',', ' ')).split(' ') if len(hsh) > 0)
algorithm = match['algorithm'] if len(match['algorithm']) > 0 else 'sha256'

File diff suppressed because it is too large

View File

@ -1761,9 +1761,7 @@ class TestClientAPI( unittest.TestCase ):
def _test_search_files( self, connection, set_up_permissions ):
hash_ids = [ 1, 2, 3, 4, 5, 10 ]
HG.test_controller.SetRead( 'file_query_ids', set( hash_ids ) )
hash_ids = [ 1, 2, 3, 4, 5, 10, 15, 16, 17, 18, 19, 20, 21, 25, 100, 101, 150 ]
# search files failed tag permission
@ -1775,6 +1773,10 @@ class TestClientAPI( unittest.TestCase ):
#
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = []
path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) )
@ -1789,6 +1791,10 @@ class TestClientAPI( unittest.TestCase ):
#
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino' ]
path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) )
@ -1803,6 +1809,10 @@ class TestClientAPI( unittest.TestCase ):
# search files
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) )
@ -1819,10 +1829,140 @@ class TestClientAPI( unittest.TestCase ):
d = json.loads( text )
expected_answer = { 'file_ids' : hash_ids }
expected_answer = { 'file_ids' : list( sample_hash_ids ) }
self.assertEqual( d, expected_answer )
# sort
# this just tests if it parses, we don't have a full test for read params yet
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}'.format( urllib.parse.quote( json.dumps( tags ) ), CC.SORT_FILES_BY_FRAMERATE )
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
# sort
# this just tests if it parses, we don't have a full test for read params yet
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}'.format( urllib.parse.quote( json.dumps( tags ) ), CC.SORT_FILES_BY_FRAMERATE, 'true' )
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
# file domain
# this just tests if it parses, we don't have a full test for read params yet
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_name={}'.format(
urllib.parse.quote( json.dumps( tags ) ),
CC.SORT_FILES_BY_FRAMERATE,
'true',
'trash'
)
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
# file and tag domain
# this just tests if it parses, we don't have a full test for read params yet
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_name={}'.format(
urllib.parse.quote( json.dumps( tags ) ),
CC.SORT_FILES_BY_FRAMERATE,
'true',
CC.TRASH_SERVICE_KEY.hex(),
'all%20known%20tags'
)
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
# file and tag domain
# this just tests if it parses, we don't have a full test for read params yet
sample_hash_ids = set( random.sample( hash_ids, 3 ) )
HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_key={}'.format(
urllib.parse.quote( json.dumps( tags ) ),
CC.SORT_FILES_BY_FRAMERATE,
'true',
CC.COMBINED_FILE_SERVICE_KEY.hex(),
CC.COMBINED_TAG_SERVICE_KEY.hex()
)
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
text = str( data, 'utf-8' )
self.assertEqual( response.status, 400 )
def _test_search_files_predicate_parsing( self, connection, set_up_permissions ):
# some file search param parsing
class PretendRequest( object ):
@ -1927,6 +2067,9 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( set( predicates ), set( expected_predicates ) )
def _test_file_metadata( self, connection, set_up_permissions ):
# test file metadata
api_permissions = set_up_permissions[ 'search_green_files' ]
@ -2192,8 +2335,12 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( d, expected_detailed_known_urls_metadata_result )
def _test_get_files( self, connection, set_up_permissions ):
# files and thumbs
file_id = 1
hash = b'\xadm5\x99\xa6\xc4\x89\xa5u\xeb\x19\xc0&\xfa\xce\x97\xa9\xcdey\xe7G(\xb0\xce\x94\xa6\x01\xd22\xf3\xc3'
hash_hex = hash.hex()
@ -2532,6 +2679,9 @@ class TestClientAPI( unittest.TestCase ):
self._test_manage_cookies( connection, set_up_permissions )
self._test_manage_pages( connection, set_up_permissions )
self._test_search_files( connection, set_up_permissions )
self._test_search_files_predicate_parsing( connection, set_up_permissions )
self._test_file_metadata( connection, set_up_permissions )
self._test_get_files( connection, set_up_permissions )
self._test_permission_failures( connection, set_up_permissions )
self._test_cors_fails( connection )

View File

@ -1853,6 +1853,9 @@ class TestTagObjects( unittest.TestCase ):
( 'system:filetype is jpeg, png, apng', "system:filetype = image/jpg, image/png, apng" ),
( 'system:sha256 hash is in 3 hashes', "system:hash = abcdef01 abcdef02 abcdef03" ),
( 'system:md5 hash is in 3 hashes', "system:hash = abcdef01 abcdef, abcdef04 md5" ),
( 'system:md5 hash is abcdef01', "system:hash = abcdef01 md5" ),
( 'system:md5 hash is abcdef01', "system:Hash = Abcdef01 md5" ),
( 'system:sha256 hash is abcdef0102', "system:hash = abcdef0102" ),
( 'system:modified time: since 7 years 1 month ago', "system:modified date < 7 years 45 days 70h" ),
( 'system:modified time: since 2011-06-04', "system:modified date > 2011-06-04" ),
( 'system:modified time: before 7 years 2 months ago', "system:date modified > 7 years 2 months" ),
@ -1886,12 +1889,12 @@ class TestTagObjects( unittest.TestCase ):
( 'system:media viewtime < 1 day 1 hour', "system:media viewtime < 1 days 1 hour 0 minutes" ),
( 'system:all viewtime > 1 hour 1 minute', "system:all viewtime > 1 hours 100 seconds" ),
( 'system:preview viewtime \u2248 2 days 7 hours', "system:preview viewtime ~= 1 day 30 hours 100 minutes 90s" ),
( 'system:has a url matching regex: reg.*ex', " system:has url matching regex reg.*ex " ),
( 'system:does not have a url matching regex: test', "system:does not have a url matching regex test" ),
( 'system:has url: https://test.test/', "system:has_url https://test.test/" ),
( 'system:does not have url: test url here', " system:doesn't have url test url here " ),
( 'system:has a url with domain: test.com', "system:has domain test.com" ),
( 'system:does not have a url with domain: test.com', "system:doesn't have domain test.com" ),
( 'system:has a url matching regex: index\\.php', " system:has url matching regex index\\.php" ),
( 'system:does not have a url matching regex: index\\.php', "system:does not have a url matching regex index\\.php" ),
( 'system:has url: https://safebooru.donmai.us/posts/4695284', "system:has_url https://safebooru.donmai.us/posts/4695284" ),
( 'system:does not have url: https://safebooru.donmai.us/posts/4695284', " system:doesn't have url https://safebooru.donmai.us/posts/4695284 " ),
( 'system:has a url with domain: safebooru.com', "system:has domain safebooru.com" ),
( 'system:does not have a url with domain: safebooru.com', "system:doesn't have domain safebooru.com" ),
( 'system:has safebooru file page url', "system:has a url with class safebooru file page" ),
( 'system:does not have safebooru file page url', "system:doesn't have a url with url class safebooru file page " ),
( 'system:page less than 5', "system:tag as number page < 5" )