Version 378

Hydrus Network Developer 2019-12-18 16:06:34 -06:00
parent 50771138ca
commit 6ed9369f9c
22 changed files with 798 additions and 334 deletions

View File

@ -8,6 +8,30 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 378</h3></li>
<ul>
<li>if a search has system:limit, the current sort is now sent down to the database. if the sort is simple, results are now sorted before system:limit is applied, meaning you will now get the largest/longest/whateverest sample of the search! supported sorts are: import time, filesize, duration, width, height, resolution ratio, media views, media viewtime, num pixels, approx bitrate, and modified time. this does not apply to searches in the 'all known files' file domain.</li>
<li>after identifying a sometimes-suboptimal db access routine, wrote a new, more reliable one and replaced the 60-odd places it is used in both client and server. a variety of functions will now have less 'spiky' job time, including certain combinations of regular tag and system search predicates. some jobs will have slightly higher average job time, but others will be much faster in all common situations</li>
<li>added additional database analysis to some complicated duplicate file system jobs that adds some overhead but should reduce extreme spikes in job time for very large databases</li>
<li>converted some legacy db code to new access methods</li>
<li>fixed a bug in the new menu generation code that stopped sessions from showing in the 'pages' menu if they had no backups (i.e. they have only been saved once, or were last saved before the backup system was added)</li>
<li>fixed the 'click window close button should back out, not choose the red no button' bug in the yes/no confirmation dialogs for analyze, vacuum, clear orphan, and gallery log button url import</li>
<li>fixed some checkbox select and data retrieval logic in the checkbox tree control and completely cleared out the buggy ipfs directory download workflow. I apologise for the delay</li>
<li>fixed some inelegant multihash->urls resolution in the ipfs service code that would often lock the client while a large folder was being parsed</li>
<li>when the multihash->urls resolution is going on, the popup now exposes the underlying network control. cancelling the whole job mid-parse/download is now also quicker and prettier</li>
<li>when a 'downloader multiple urls' popup is working, it will publish its ongoing presented files to a files button as it works, rather than just once the job is finished</li>
<li>improved some unusual taglist height calculations that had been turning up</li>
<li>improved how taglists set their minimum height--the 'selection tags' list should now always have at least 15 rows, even when bunched up in a tall gallery panel</li>
<li>if the system clock is rewound, new objects that are saved in the backup system (atm, gui sessions) will now detect that existing backups are from the future and increase their save time to ensure they count as the newest object</li>
<li>short version: 'remove files from view when trashed' now works on downloader thumbs that are loaded in from a session. long version: downloader thumb pages now force 'my files' file domain for now (previously it was 'all local files')</li>
<li>the downloader/thread watcher right-click menus for 'show all downloaders xxx files' now have a new 'all files and trash' entry. this will show absolutely everything still in your db, for quick access to accidental deletes</li>
<li>the 'select a downloader' list dialog _should_ size itself better, with no double scrollbars, when there are many many downloaders and/or very long-named downloaders. if this layout works, I'll replicate it in other areas</li>
<li>if an unrenderable key enters a shortcut, the shortcut will now display an 'unknown key: blah' statement instead of throwing an error. this affected both the manage shortcuts dialog and the media viewer(!)</li>
<li>SIGTERM is now caught on non-windows systems and will initiate a fast forced shutdown</li>
<li>unified and played with some border styles around the program</li>
<li>added a user-written updating guide to the 'getting started - installing' help page</li>
<li>misc small code cleanup</li>
</ul>
<li><h3>version 377</h3></li>
<ul>
<li>qt:</li>

View File

@ -36,6 +36,7 @@
<p><span class="warning">However, for macOS users:</span> the Hydrus App is <b>non-portable</b> and puts your database in ~/Library/Hydrus (i.e. /Users/[You]/Library/Hydrus). You can update simply by replacing the old App with the new, but if you wish to backup, you should be looking at ~/Library/Hydrus, not the App itself.</p>
<h3>updating</h3>
<p class="warning">Hydrus is imageboard-tier software, wild and fun but unprofessional. It is written by one Anon spinning a lot of plates. Mistakes happen from time to time, usually in the update process. There are also no training wheels to stop you from accidentally overwriting your whole db if you screw around. Be careful when updating. Make backups beforehand!</p>
<p>A user has written a longer and more formal guide to updating, including information on the 334-&gt;335 step, <a href="update_guide.rtf">here</a>.</p>
<p>The update process:</p>
<ul>
<li>If the client is running, close it!</li>

BIN
help/update_guide.rtf Normal file

Binary file not shown.

View File

@ -337,11 +337,19 @@ class Controller( HydrusController.HydrusController ):
def CatchSignal( self, sig, frame ):
if sig == signal.SIGINT:
if sig in ( signal.SIGINT, signal.SIGTERM ):
event = QG.QCloseEvent()
if sig == signal.SIGTERM:
HG.emergency_exit = True
QW.QApplication.postEvent( self.gui, event )
if hasattr( self, 'gui' ):
event = QG.QCloseEvent()
QW.QApplication.postEvent( self.gui, event )
@ -1279,6 +1287,7 @@ class Controller( HydrusController.HydrusController ):
self.CreateSplash()
signal.signal( signal.SIGINT, self.CatchSignal )
signal.signal( signal.SIGTERM, self.CatchSignal )
self.CallToThreadLongRunning( self.THREADBootEverything )
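The handler posts a QCloseEvent instead of calling into the GUI directly because POSIX signal handlers run outside the Qt event loop. A minimal standalone sketch of the same pattern, with all names invented for illustration (this is not Hydrus code):

import signal
import sys

from qtpy import QtCore, QtGui, QtWidgets

class Window( QtWidgets.QMainWindow ):
    
    def closeEvent( self, event ):
        
        # fast shutdown work would go here
        event.accept()

def main():
    
    app = QtWidgets.QApplication( sys.argv )
    
    win = Window()
    win.show()
    
    def catch_signal( sig, frame ):
        
        # never touch widgets from a signal handler; postEvent is
        # thread-safe, so queue a close event for the main loop instead
        QtWidgets.QApplication.postEvent( win, QtGui.QCloseEvent() )
    
    signal.signal( signal.SIGINT, catch_signal )
    signal.signal( signal.SIGTERM, catch_signal )
    
    # give the Python interpreter regular control so pending signals fire
    timer = QtCore.QTimer()
    timer.timeout.connect( lambda: None )
    timer.start( 500 )
    
    sys.exit( app.exec_() )

if __name__ == '__main__':
    
    main()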

View File

@ -47,6 +47,90 @@ import traceback
from qtpy import QtWidgets as QW
from . import QtPorting as QP
#
# 𝓑𝓵𝓮𝓼𝓼𝓲𝓷𝓰𝓼 𝓸𝓯 𝓽𝓱𝓮 𝓢𝓱𝓻𝓲𝓷𝓮 𝓸𝓷 𝓽𝓱𝓲𝓼 𝓗𝓮𝓵𝓵 𝓒𝓸𝓭𝓮
#
#
# [ a hundred-odd lines of ASCII art omitted ]
YAML_DUMP_ID_SINGLE = 0
YAML_DUMP_ID_REMOTE_BOORU = 1
YAML_DUMP_ID_FAVOURITE_CUSTOM_FILTER_ACTIONS = 2
@ -218,7 +302,7 @@ class DB( HydrusDB.HydrusDB ):
hash_ids = { row[0] for row in rows }
existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) ) )
existing_hash_ids = self._STS( self._ExecuteManySelect( 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) ) )
valid_hash_ids = hash_ids.difference( existing_hash_ids )
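This hunk shows the new access routine from the changelog: instead of string-building one big 'hash_id IN ( ... )' clause with SplayListForDB, it runs a small parameterised select once per id. The helpers live in HydrusDB and are not shown in this diff, so the following is only a hedged sketch of what they plausibly do, with lowercase stand-in names:

import sqlite3

def execute_many_select( cursor, query, select_args_iterator ):
    
    # execute a small parameterised SELECT once per argument tuple and yield
    # every result row, avoiding giant string-built IN ( ... ) clauses
    for select_args in select_args_iterator:
        
        for row in cursor.execute( query, select_args ):
            
            yield row

def execute_many_select_single_param( cursor, query, single_params ):
    
    # convenience wrapper for queries with exactly one '?' placeholder
    return execute_many_select( cursor, query, ( ( p, ) for p in single_params ) )

# usage, mirroring the hunk above:
# existing_hash_ids = set( execute_many_select( c, 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id = ?;', ( ( service_id, h ) for h in hash_ids ) ) )

Each per-id select is a direct index hit, which is plausibly what smooths out the 'spiky' job times: the query plan no longer changes as the IN list grows.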
@ -226,15 +310,13 @@ class DB( HydrusDB.HydrusDB ):
valid_rows = [ ( hash_id, timestamp ) for ( hash_id, timestamp ) in rows if hash_id in valid_hash_ids ]
splayed_valid_hash_ids = HydrusData.SplayListForDB( valid_hash_ids )
# insert the files
self._c.executemany( 'INSERT OR IGNORE INTO current_files VALUES ( ?, ?, ? );', ( ( service_id, hash_id, timestamp ) for ( hash_id, timestamp ) in valid_rows ) )
self._c.execute( 'DELETE FROM file_transfers WHERE service_id = ? AND hash_id IN ' + splayed_valid_hash_ids + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM file_transfers WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in valid_hash_ids ) )
info = self._c.execute( 'SELECT hash_id, size, mime FROM files_info WHERE hash_id IN ' + splayed_valid_hash_ids + ';' ).fetchall()
info = list( self._ExecuteManySelectSingleParam( 'SELECT hash_id, size, mime FROM files_info WHERE hash_id = ?;', valid_hash_ids ) )
num_files = len( valid_hash_ids )
delta_size = sum( ( size for ( hash_id, size, mime ) in info ) )
@ -246,9 +328,9 @@ class DB( HydrusDB.HydrusDB ):
service_info_updates.append( ( num_files, service_id, HC.SERVICE_INFO_NUM_FILES ) )
service_info_updates.append( ( num_inbox, service_id, HC.SERVICE_INFO_NUM_INBOX ) )
select_statement = 'SELECT COUNT( * ) FROM files_info WHERE mime IN ' + HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) + ' AND hash_id IN {};'
select_statement = 'SELECT 1 FROM files_info WHERE mime IN ' + HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) + ' AND hash_id = ?;'
num_viewable_files = sum( self._STL( self._SelectFromList( select_statement, valid_hash_ids ) ) )
num_viewable_files = sum( self._STI( self._ExecuteManySelectSingleParam( select_statement, valid_hash_ids ) ) )
service_info_updates.append( ( num_viewable_files, service_id, HC.SERVICE_INFO_NUM_VIEWABLE_FILES ) )
@ -262,7 +344,7 @@ class DB( HydrusDB.HydrusDB ):
if service_id == self._combined_local_file_service_id or service_type == HC.FILE_REPOSITORY:
self._c.execute( 'DELETE FROM deleted_files WHERE service_id = ? AND hash_id IN ' + splayed_valid_hash_ids + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM deleted_files WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in valid_hash_ids ) )
num_deleted = self._GetRowCount()
@ -483,17 +565,22 @@ class DB( HydrusDB.HydrusDB ):
def _ArchiveFiles( self, hash_ids ):
valid_hash_ids = [ hash_id for hash_id in hash_ids if hash_id in self._inbox_hash_ids ]
hash_ids = set( hash_ids )
valid_hash_ids = hash_ids.intersection( self._inbox_hash_ids )
if len( valid_hash_ids ) > 0:
splayed_hash_ids = HydrusData.SplayListForDB( valid_hash_ids )
self._c.executemany( 'DELETE FROM file_inbox WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in valid_hash_ids ) )
self._c.execute( 'DELETE FROM file_inbox WHERE hash_id IN ' + splayed_hash_ids + ';' )
updates = self._c.execute( 'SELECT service_id, COUNT( * ) FROM current_files WHERE hash_id IN ' + splayed_hash_ids + ' GROUP BY service_id;' ).fetchall()
self._c.executemany( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in updates ] )
with HydrusDB.TemporaryIntegerTable( self._c, valid_hash_ids, 'hash_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
updates = self._c.execute( 'SELECT service_id, COUNT( * ) FROM current_files NATURAL JOIN {} GROUP BY service_id;'.format( temp_table_name ) ).fetchall()
self._c.executemany( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in updates ] )
self._inbox_hash_ids.difference_update( valid_hash_ids )
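Here the splayed IN list becomes a temporary integer table that current_files can NATURAL JOIN against, with _AnalyzeTempTable presumably refreshing the planner statistics the changelog mentions. Neither helper's body appears in this diff, so this is a self-contained sketch under those assumptions:

import os
import sqlite3
from contextlib import contextmanager

# illustrative stand-ins for HydrusDB.TemporaryIntegerTable and
# DB._AnalyzeTempTable; the real implementations may differ

@contextmanager
def temporary_integer_table( cursor, integers, column_name ):
    
    # random suffix so nested uses cannot collide
    table_name = 'temp_int_{}'.format( os.urandom( 8 ).hex() )
    
    cursor.execute( 'CREATE TEMP TABLE {} ( {} INTEGER PRIMARY KEY );'.format( table_name, column_name ) )
    cursor.executemany( 'INSERT OR IGNORE INTO {} ( {} ) VALUES ( ? );'.format( table_name, column_name ), ( ( i, ) for i in integers ) )
    
    try:
        
        yield table_name
        
    finally:
        
        cursor.execute( 'DROP TABLE {};'.format( table_name ) )

def analyze_temp_table( cursor, table_name ):
    
    # refresh SQLite's statistics so joins against the freshly filled table
    # get a sane plan; this is the overhead-for-stability trade mentioned above
    cursor.execute( 'ANALYZE {};'.format( table_name ) )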
@ -608,9 +695,9 @@ class DB( HydrusDB.HydrusDB ):
if current_mappings_exist:
select_statement = 'SELECT tag_id, COUNT( * ) FROM ' + current_mappings_table_name + ' WHERE tag_id IN {} GROUP BY tag_id;'
select_statement = 'SELECT tag_id, COUNT( * ) FROM ' + current_mappings_table_name + ' WHERE tag_id = ?;'
for ( tag_id, count ) in self._SelectFromList( select_statement, group_of_ids ):
for ( tag_id, count ) in self._ExecuteManySelectSingleParam( select_statement, group_of_ids ):
if count > 0:
@ -625,9 +712,9 @@ class DB( HydrusDB.HydrusDB ):
if pending_mappings_exist:
select_statement = 'SELECT tag_id, COUNT( * ) FROM ' + pending_mappings_table_name + ' WHERE tag_id IN {} GROUP BY tag_id;'
select_statement = 'SELECT tag_id, COUNT( * ) FROM ' + pending_mappings_table_name + ' WHERE tag_id = ?;'
for ( tag_id, count ) in self._SelectFromList( select_statement, group_of_ids ):
for ( tag_id, count ) in self._ExecuteManySelectSingleParam( select_statement, group_of_ids ):
if count > 0:
@ -653,9 +740,9 @@ class DB( HydrusDB.HydrusDB ):
ac_cache_table_name = GenerateCombinedFilesMappingsCacheTableName( service_id )
select_statement = 'SELECT tag_id, current_count, pending_count FROM ' + ac_cache_table_name + ' WHERE tag_id IN {};'
select_statement = 'SELECT tag_id, current_count, pending_count FROM ' + ac_cache_table_name + ' WHERE tag_id = ?;'
return self._SelectFromListFetchAll( select_statement, tag_ids )
return list( self._ExecuteManySelectSingleParam( select_statement, tag_ids ) )
def _CacheCombinedFilesMappingsUpdate( self, service_id, count_ids ):
@ -735,9 +822,9 @@ class DB( HydrusDB.HydrusDB ):
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterCacheTableNames( service_id )
select_statement = 'SELECT hash_id FROM ' + hash_id_map_table_name + ' WHERE service_hash_id IN {};'
select_statement = 'SELECT hash_id FROM ' + hash_id_map_table_name + ' WHERE service_hash_id = ?;'
hash_ids = self._STL( self._SelectFromList( select_statement, service_hash_ids ) )
hash_ids = self._STL( self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) )
if len( hash_ids ) != len( service_hash_ids ):
@ -1055,9 +1142,9 @@ class DB( HydrusDB.HydrusDB ):
( cache_files_table_name, cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name, ac_cache_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
select_statement = 'SELECT hash_id FROM ' + cache_files_table_name + ' WHERE hash_id IN {};'
select_statement = 'SELECT hash_id FROM ' + cache_files_table_name + ' WHERE hash_id = ?;'
return self._STL( self._SelectFromList( select_statement, hash_ids ) )
return self._STL( self._ExecuteManySelectSingleParam( select_statement, hash_ids ) )
def _CacheSpecificMappingsGenerate( self, file_service_id, tag_service_id ):
@ -1092,9 +1179,9 @@ class DB( HydrusDB.HydrusDB ):
( cache_files_table_name, cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name, ac_cache_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
select_statement = 'SELECT tag_id, current_count, pending_count FROM ' + ac_cache_table_name + ' WHERE tag_id IN {};'
select_statement = 'SELECT tag_id, current_count, pending_count FROM ' + ac_cache_table_name + ' WHERE tag_id = ?;'
return self._SelectFromListFetchAll( select_statement, tag_ids )
return list( self._ExecuteManySelectSingleParam( select_statement, tag_ids ) )
def _CacheSpecificMappingsPendMappings( self, file_service_id, tag_service_id, mappings_ids ):
@ -1688,7 +1775,7 @@ class DB( HydrusDB.HydrusDB ):
service_type = service.GetServiceType()
existing_hash_ids = self._STS( self._SelectFromList( 'SELECT hash_id FROM current_files WHERE service_id = ' + str( service_id ) + ' AND hash_id IN {};', hash_ids ) )
existing_hash_ids = self._STS( self._ExecuteManySelect( 'SELECT hash_id FROM current_files WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) ) )
service_info_updates = []
@ -1698,11 +1785,11 @@ class DB( HydrusDB.HydrusDB ):
# remove them from the service
self._c.execute( 'DELETE FROM current_files WHERE service_id = ? AND hash_id IN ' + splayed_existing_hash_ids + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM current_files WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in existing_hash_ids ) )
self._c.execute( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id IN ' + splayed_existing_hash_ids + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in existing_hash_ids ) )
info = self._c.execute( 'SELECT size, mime FROM files_info WHERE hash_id IN ' + splayed_existing_hash_ids + ';' ).fetchall()
info = list( self._ExecuteManySelectSingleParam( 'SELECT size, mime FROM files_info WHERE hash_id = ?;', existing_hash_ids ) )
num_existing_files_removed = len( existing_hash_ids )
delta_size = sum( ( size for ( size, mime ) in info ) )
@ -1712,9 +1799,9 @@ class DB( HydrusDB.HydrusDB ):
service_info_updates.append( ( -num_existing_files_removed, service_id, HC.SERVICE_INFO_NUM_FILES ) )
service_info_updates.append( ( -num_inbox, service_id, HC.SERVICE_INFO_NUM_INBOX ) )
select_statement = 'SELECT COUNT( * ) FROM files_info WHERE mime IN ' + HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) + ' AND hash_id IN {};'
select_statement = 'SELECT 1 FROM files_info WHERE mime IN ' + HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) + ' AND hash_id = ?;'
num_viewable_files = sum( self._STL( self._SelectFromList( select_statement, existing_hash_ids ) ) )
num_viewable_files = sum( self._STI( self._ExecuteManySelectSingleParam( select_statement, existing_hash_ids ) ) )
service_info_updates.append( ( -num_viewable_files, service_id, HC.SERVICE_INFO_NUM_VIEWABLE_FILES ) )
@ -2153,8 +2240,8 @@ class DB( HydrusDB.HydrusDB ):
potential_duplicate_pairs = set()
potential_duplicate_pairs.update( self._SelectFromList( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs WHERE smaller_media_id IN {};', all_media_ids ) )
potential_duplicate_pairs.update( self._SelectFromList( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs WHERE larger_media_id IN {};', all_media_ids ) )
potential_duplicate_pairs.update( self._ExecuteManySelectSingleParam( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs WHERE smaller_media_id = ?;', all_media_ids ) )
potential_duplicate_pairs.update( self._ExecuteManySelectSingleParam( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs WHERE larger_media_id = ?;', all_media_ids ) )
deletees = []
@ -2272,13 +2359,13 @@ class DB( HydrusDB.HydrusDB ):
allowed_hash_ids = set( allowed_hash_ids )
query = 'SELECT king_hash_id FROM duplicate_files WHERE king_hash_id in {};'
query = 'SELECT king_hash_id FROM duplicate_files WHERE king_hash_id = ?;'
explicit_king_hash_ids = self._STS( self._SelectFromList( query, allowed_hash_ids ) )
explicit_king_hash_ids = self._STS( self._ExecuteManySelectSingleParam( query, allowed_hash_ids ) )
query = 'SELECT hash_id FROM duplicate_file_members WHERE hash_id in {};'
query = 'SELECT hash_id FROM duplicate_file_members WHERE hash_id = ?;'
all_duplicate_member_hash_ids = self._STS( self._SelectFromList( query, allowed_hash_ids ) )
all_duplicate_member_hash_ids = self._STS( self._ExecuteManySelectSingleParam( query, allowed_hash_ids ) )
all_non_king_hash_ids = all_duplicate_member_hash_ids.difference( explicit_king_hash_ids )
@ -2779,9 +2866,9 @@ class DB( HydrusDB.HydrusDB ):
select_statement = 'SELECT hash_id FROM duplicate_file_members WHERE media_id IN {};'
select_statement = 'SELECT hash_id FROM duplicate_file_members WHERE media_id = ?;'
hash_ids = self._STS( self._SelectFromList( select_statement, media_ids ) )
hash_ids = self._STS( self._ExecuteManySelectSingleParam( select_statement, media_ids ) )
elif dupe_type == HC.DUPLICATE_POTENTIAL:
@ -2921,6 +3008,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) )
self._AnalyzeTempTable( temp_table_name )
( table_join, predicate_string ) = self._DuplicatesGetPotentialDuplicatePairsTableJoinInfoOnSearchResults( file_service_key, temp_table_name, both_files_match )
@ -3013,6 +3102,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) )
self._AnalyzeTempTable( temp_table_name )
( table_join, predicate_string ) = self._DuplicatesGetPotentialDuplicatePairsTableJoinInfoOnSearchResults( file_service_key, temp_table_name, both_files_match )
@ -3144,6 +3235,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) )
self._AnalyzeTempTable( temp_table_name )
( table_join, predicate_string ) = self._DuplicatesGetPotentialDuplicatePairsTableJoinInfoOnSearchResults( file_service_key, temp_table_name, both_files_match )
@ -3838,7 +3931,7 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
child_hash_ids = self._STS( self._SelectFromList( 'SELECT hash_id FROM ' + current_mappings_table_name + ' WHERE tag_id IN {};', sibling_tag_ids ) )
child_hash_ids = self._STS( self._ExecuteManySelectSingleParam( 'SELECT hash_id FROM ' + current_mappings_table_name + ' WHERE tag_id = ?;', sibling_tag_ids ) )
for parent_tag_id in parent_tag_ids:
@ -4590,14 +4683,14 @@ class DB( HydrusDB.HydrusDB ):
if hash_ids_to_current_file_service_ids is None:
hash_ids_to_current_file_service_ids = HydrusData.BuildKeyToListDict( self._SelectFromList( 'SELECT hash_id, service_id FROM current_files WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_current_file_service_ids = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id FROM current_files WHERE hash_id = ?;', hash_ids ) )
# Let's figure out if there is a common specific file service to this batch
file_service_id_counter = collections.Counter()
for file_service_ids in list(hash_ids_to_current_file_service_ids.values()):
for file_service_ids in hash_ids_to_current_file_service_ids.values():
for file_service_id in file_service_ids:
@ -4607,7 +4700,7 @@ class DB( HydrusDB.HydrusDB ):
common_file_service_id = None
for ( file_service_id, count ) in list(file_service_id_counter.items()):
for ( file_service_id, count ) in file_service_id_counter.items():
if count == len( hash_ids ): # i.e. every hash has this file service
@ -4634,20 +4727,20 @@ class DB( HydrusDB.HydrusDB ):
if common_file_service_id is None:
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + current_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_DELETED, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + deleted_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + pending_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + current_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_DELETED, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + deleted_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + pending_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
else:
( cache_files_table_name, cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name, ac_cache_table_name ) = GenerateSpecificMappingsCacheTableNames( common_file_service_id, tag_service_id )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + cache_current_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_DELETED, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + cache_current_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_DELETED, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PETITIONED, tag_id ) ) for ( hash_id, tag_id ) in self._SelectFromList( 'SELECT hash_id, tag_id FROM ' + petitioned_mappings_table_name + ' WHERE hash_id IN {};', hash_ids ) )
tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PETITIONED, tag_id ) ) for ( hash_id, tag_id ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, tag_id FROM ' + petitioned_mappings_table_name + ' WHERE hash_id = ?;', hash_ids ) )
seen_tag_ids = { tag_id for ( hash_id, ( tag_service_id, status, tag_id ) ) in tag_data }
@ -4937,7 +5030,7 @@ class DB( HydrusDB.HydrusDB ):
return hash_ids
def _GetHashIdsFromQuery( self, search_context, job_key = None, query_hash_ids = None, apply_implicit_limit = True, sort_by = None ):
def _GetHashIdsFromQuery( self, search_context, job_key = None, query_hash_ids = None, apply_implicit_limit = True, sort_by = None, limit_sort_by = None ):
if job_key is None:
@ -5432,17 +5525,17 @@ class DB( HydrusDB.HydrusDB ):
if not done_files_info_predicates and ( need_file_domain_cross_reference or there_are_simple_files_info_preds_to_search_for ):
files_info_predicates.append( 'hash_id IN {}' )
files_info_predicates.append( 'hash_id = ?' )
if file_service_key == CC.COMBINED_FILE_SERVICE_KEY:
query_hash_ids = intersection_update_qhi( query_hash_ids, self._STI( self._SelectFromList( 'SELECT hash_id FROM files_info WHERE ' + ' AND '.join( files_info_predicates ) + ';', query_hash_ids ) ) )
query_hash_ids = intersection_update_qhi( query_hash_ids, self._STI( self._ExecuteManySelectSingleParam( 'SELECT hash_id FROM files_info WHERE ' + ' AND '.join( files_info_predicates ) + ';', query_hash_ids ) ) )
else:
files_info_predicates.insert( 0, 'service_id = ' + str( file_service_id ) )
query_hash_ids = intersection_update_qhi( query_hash_ids, self._STI( self._SelectFromList( 'SELECT hash_id FROM current_files NATURAL JOIN files_info WHERE ' + ' AND '.join( files_info_predicates ) + ';', query_hash_ids ) ) )
query_hash_ids = intersection_update_qhi( query_hash_ids, self._STI( self._ExecuteManySelectSingleParam( 'SELECT hash_id FROM current_files NATURAL JOIN files_info WHERE ' + ' AND '.join( files_info_predicates ) + ';', query_hash_ids ) ) )
done_files_info_predicates = True
@ -5764,42 +5857,27 @@ class DB( HydrusDB.HydrusDB ):
query_hash_ids = list( query_hash_ids )
if sort_by is not None:
( sort_metadata, sort_data ) = sort_by.sort_type
sort_asc = sort_by.sort_asc
query = None
if sort_metadata == 'system':
if sort_data == CC.SORT_FILES_BY_IMPORT_TIME:
query = 'SELECT hash_id, timestamp FROM files_info NATURAL JOIN current_files WHERE hash_id IN {} AND service_id = {};'.format( '{}', self._local_file_service_id )
key = lambda row: row[1]
reverse = sort_asc == CC.SORT_DESC
if query is not None:
query_hash_ids_and_other_data = list( self._SelectFromList( query, query_hash_ids ) )
query_hash_ids_and_other_data.sort( key = key, reverse = reverse )
query_hash_ids = [ row[0] for row in query_hash_ids_and_other_data ]
#
limit = system_predicates.GetLimit( apply_implicit_limit = apply_implicit_limit )
if limit is not None and sort_by is None and limit_sort_by is not None:
sort_by = limit_sort_by
did_sort = False
if sort_by is not None and file_service_id != self._combined_file_service_id:
( did_sort, query_hash_ids ) = self._TryToSortHashIds( file_service_id, query_hash_ids, sort_by )
#
if limit is not None and limit <= len( query_hash_ids ):
if sort_by is None:
if not did_sort:
query_hash_ids = random.sample( query_hash_ids, limit )
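Only the unsorted branch of the limit step appears in this hunk. A hedged reconstruction of the whole step, assuming the sorted branch simply keeps the head of the list:

if limit is not None and limit <= len( query_hash_ids ):
    
    if did_sort:
        
        # assumed: the list is already in sort order, so truncation gives the
        # largest/longest/whateverest sample the changelog describes
        query_hash_ids = query_hash_ids[ : limit ]
        
    else:
        
        query_hash_ids = random.sample( query_hash_ids, limit )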
@ -5962,8 +6040,7 @@ class DB( HydrusDB.HydrusDB ):
selects.extend( pending_selects )
# 25k static value here sucks, but we'll see how it goes--it translates to 100 chunks in selectfromlist, and most tags have n < this anyway
if allowed_hash_ids is None or len( allowed_hash_ids ) > 25600:
if allowed_hash_ids is None:
for select in selects:
@ -5977,11 +6054,11 @@ class DB( HydrusDB.HydrusDB ):
else:
selects = [ select.replace( ';', ' AND hash_id IN {};' ) for select in selects ]
selects = [ select.replace( ';', ' AND hash_id = ?;' ) for select in selects ]
for select in selects:
hash_ids.update( self._STI( self._SelectFromList( select, allowed_hash_ids ) ) )
hash_ids.update( self._STI( self._ExecuteManySelectSingleParam( select, allowed_hash_ids ) ) )
@ -5996,6 +6073,8 @@ class DB( HydrusDB.HydrusDB ):
else:
# instead of this, ultimately do the temp table deal with hash_ids and natural join
if len( hash_ids ) <= 1000:
search_hash_ids = hash_ids
@ -6046,7 +6125,9 @@ class DB( HydrusDB.HydrusDB ):
else:
query = self._SelectFromList( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE domain_id = ' + str( domain_id ) + ' AND hash_id in {};', search_hash_ids )
select_args_iterator = ( ( domain_id, hash_id ) for hash_id in search_hash_ids )
query = self._ExecuteManySelect( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE domain_id = ? AND hash_id = ?;', select_args_iterator )
for ( hash_id, url ) in query:
@ -6080,11 +6161,13 @@ class DB( HydrusDB.HydrusDB ):
if search_hash_ids is None:
query = self._c.execute( 'SELECT DISTINCT hash_id FROM url_map NATURAL JOIN urls WHERE domain_id = ?;', ( domain_id, ) )
query = self._c.execute( 'SELECT hash_id FROM url_map NATURAL JOIN urls WHERE domain_id = ?;', ( domain_id, ) )
else:
query = self._SelectFromList( 'SELECT DISTINCT hash_id FROM url_map NATURAL JOIN urls WHERE domain_id = ' + str( domain_id ) + ' AND hash_id in {};', search_hash_ids )
select_args_iterator = ( ( domain_id, hash_id ) for hash_id in search_hash_ids )
query = self._ExecuteManySelect( 'SELECT hash_id FROM url_map NATURAL JOIN urls WHERE domain_id = ? AND hash_id = ?;', select_args_iterator )
domain_result_hash_ids = self._STS( query )
@ -6107,7 +6190,7 @@ class DB( HydrusDB.HydrusDB ):
else:
query = self._SelectFromList( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE hash_id in {};', search_hash_ids )
query = self._ExecuteManySelectSingleParam( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE hash_id = ?;', search_hash_ids )
result_hash_ids = set()
@ -6244,9 +6327,9 @@ class DB( HydrusDB.HydrusDB ):
table_union_to_select_from = '( ' + ' UNION ALL '.join( ( 'SELECT * FROM ' + table_name for table_name in table_names ) ) + ' )'
select_statement = 'SELECT hash_id, COUNT( DISTINCT tag_id ) FROM ' + table_union_to_select_from + ' WHERE hash_id IN {} GROUP BY hash_id;'
select_statement = 'SELECT hash_id, COUNT( DISTINCT tag_id ) FROM ' + table_union_to_select_from + ' WHERE hash_id = ?;'
hash_id_tag_counts = list( self._SelectFromList( select_statement, hash_ids ) )
hash_id_tag_counts = list( self._ExecuteManySelectSingleParam( select_statement, hash_ids ) )
return hash_id_tag_counts
@ -6746,25 +6829,25 @@ class DB( HydrusDB.HydrusDB ):
self._PopulateHashIdsToHashesCache( hash_ids )
hash_ids_to_info = { hash_id : ClientMedia.FileInfoManager( hash_id, self._hash_ids_to_hashes_cache[ hash_id ], size, mime, width, height, duration, num_frames, has_audio, num_words ) for ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) in self._SelectFromList( 'SELECT * FROM files_info WHERE hash_id IN {};', hash_ids ) }
hash_ids_to_info = { hash_id : ClientMedia.FileInfoManager( hash_id, self._hash_ids_to_hashes_cache[ hash_id ], size, mime, width, height, duration, num_frames, has_audio, num_words ) for ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) in self._ExecuteManySelectSingleParam( 'SELECT * FROM files_info WHERE hash_id = ?;', hash_ids ) }
hash_ids_to_current_file_service_ids_and_timestamps = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, timestamp ) ) for ( hash_id, service_id, timestamp ) in self._SelectFromList( 'SELECT hash_id, service_id, timestamp FROM current_files WHERE hash_id IN {};', hash_ids ) ) )
hash_ids_to_current_file_service_ids_and_timestamps = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, timestamp ) ) for ( hash_id, service_id, timestamp ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id, timestamp FROM current_files WHERE hash_id = ?;', hash_ids ) ) )
hash_ids_to_deleted_file_service_ids = HydrusData.BuildKeyToListDict( self._SelectFromList( 'SELECT hash_id, service_id FROM deleted_files WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_deleted_file_service_ids = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id FROM deleted_files WHERE hash_id = ?;', hash_ids ) )
hash_ids_to_pending_file_service_ids = HydrusData.BuildKeyToListDict( self._SelectFromList( 'SELECT hash_id, service_id FROM file_transfers WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_pending_file_service_ids = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id FROM file_transfers WHERE hash_id = ?;', hash_ids ) )
hash_ids_to_petitioned_file_service_ids = HydrusData.BuildKeyToListDict( self._SelectFromList( 'SELECT hash_id, service_id FROM file_petitions WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_petitioned_file_service_ids = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id FROM file_petitions WHERE hash_id = ?;', hash_ids ) )
hash_ids_to_urls = HydrusData.BuildKeyToSetDict( self._SelectFromList( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_urls = HydrusData.BuildKeyToSetDict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, url FROM url_map NATURAL JOIN urls WHERE hash_id = ?;', hash_ids ) )
hash_ids_to_service_ids_and_filenames = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, filename ) ) for ( hash_id, service_id, filename ) in self._SelectFromList( 'SELECT hash_id, service_id, filename FROM service_filenames WHERE hash_id IN {};', hash_ids ) ) )
hash_ids_to_service_ids_and_filenames = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, filename ) ) for ( hash_id, service_id, filename ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, service_id, filename FROM service_filenames WHERE hash_id = ?;', hash_ids ) ) )
hash_ids_to_local_ratings = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, rating ) ) for ( service_id, hash_id, rating ) in self._SelectFromList( 'SELECT service_id, hash_id, rating FROM local_ratings WHERE hash_id IN {};', hash_ids ) ) )
hash_ids_to_local_ratings = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, rating ) ) for ( service_id, hash_id, rating ) in self._ExecuteManySelectSingleParam( 'SELECT service_id, hash_id, rating FROM local_ratings WHERE hash_id = ?;', hash_ids ) ) )
hash_ids_to_file_viewing_stats_managers = { hash_id : ClientMedia.FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime ) for ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) in self._SelectFromList( 'SELECT hash_id, preview_views, preview_viewtime, media_views, media_viewtime FROM file_viewing_stats WHERE hash_id IN {};', hash_ids ) }
hash_ids_to_file_viewing_stats_managers = { hash_id : ClientMedia.FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime ) for ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) in self._ExecuteManySelectSingleParam( 'SELECT hash_id, preview_views, preview_viewtime, media_views, media_viewtime FROM file_viewing_stats WHERE hash_id = ?;', hash_ids ) }
hash_ids_to_file_modified_timestamps = dict( self._SelectFromList( 'SELECT hash_id, file_modified_timestamp FROM file_modified_timestamps WHERE hash_id IN {};', hash_ids ) )
hash_ids_to_file_modified_timestamps = dict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, file_modified_timestamp FROM file_modified_timestamps WHERE hash_id = ?;', hash_ids ) )
#
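HydrusData.BuildKeyToListDict, used throughout this hunk, is not shown in the diff; it evidently groups ( key, value ) pairs into a dict of lists. A hedged stand-in:

import collections

def build_key_to_list_dict( pairs ):
    
    # group an iterable of ( key, value ) pairs into { key : [ value, ... ] }
    d = collections.defaultdict( list )
    
    for ( key, value ) in pairs:
        
        d[ key ].append( value )
    
    return dict( d )

# e.g. rows of ( hash_id, service_id ) from the deleted_files select above:
# build_key_to_list_dict( [ ( 5, 1 ), ( 5, 2 ), ( 9, 1 ) ] ) -> { 5 : [ 1, 2 ], 9 : [ 1 ] }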
@ -7343,9 +7426,10 @@ class DB( HydrusDB.HydrusDB ):
unprocessed_hash_ids = update_indices_to_unprocessed_hash_ids[ update_index ]
select_statement = 'SELECT hash_id FROM current_files WHERE service_id = ' + str( self._local_update_service_id ) + ' and hash_id IN {};'
select_statement = 'SELECT hash_id FROM current_files WHERE service_id = ? and hash_id = ?;'
select_args_iterator = ( ( self._local_update_service_id, hash_id ) for hash_id in unprocessed_hash_ids )
local_hash_ids = self._STS( self._SelectFromList( select_statement, unprocessed_hash_ids ) )
local_hash_ids = self._STS( self._ExecuteManySelect( select_statement, select_args_iterator ) )
if unprocessed_hash_ids == local_hash_ids:
@ -7357,12 +7441,12 @@ class DB( HydrusDB.HydrusDB ):
select_statement = 'SELECT hash, mime FROM files_info NATURAL JOIN hashes WHERE hash_id IN {};'
select_statement = 'SELECT hash, mime FROM files_info NATURAL JOIN hashes WHERE hash_id = ?;'
definition_hashes = set()
content_hashes = set()
for ( hash, mime ) in self._SelectFromList( select_statement, hash_ids_i_can_process ):
for ( hash, mime ) in self._ExecuteManySelectSingleParam( select_statement, hash_ids_i_can_process ):
if mime == HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS:
@ -7890,13 +7974,13 @@ class DB( HydrusDB.HydrusDB ):
service_predicate = ' AND service_id = ' + str( service_id )
good_select = 'SELECT good_tag_id FROM tag_siblings WHERE bad_tag_id IN {}' + service_predicate + ';'
bad_select = 'SELECT bad_tag_id FROM tag_siblings WHERE good_tag_id IN {}' + service_predicate + ';'
good_select = 'SELECT good_tag_id FROM tag_siblings WHERE bad_tag_id = ?' + service_predicate + ';'
bad_select = 'SELECT bad_tag_id FROM tag_siblings WHERE good_tag_id = ?' + service_predicate + ';'
while len( search_tag_ids ) > 0:
goods = self._STS( self._SelectFromList( good_select, search_tag_ids ) )
bads = self._STS( self._SelectFromList( bad_select, search_tag_ids ) )
goods = self._STS( self._ExecuteManySelectSingleParam( good_select, search_tag_ids ) )
bads = self._STS( self._ExecuteManySelectSingleParam( bad_select, search_tag_ids ) )
searched_tag_ids.update( search_tag_ids )
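This loop walks tag sibling links transitively, and with the new helper it issues one small select per tag id per round instead of one big IN query. The surrounding lines are not all shown, so this is a hedged reconstruction of its overall shape, with initial_tag_ids and the final set update assumed:

searched_tag_ids = set()
search_tag_ids = set( initial_tag_ids )  # assumed starting set

while len( search_tag_ids ) > 0:
    
    goods = self._STS( self._ExecuteManySelectSingleParam( good_select, search_tag_ids ) )
    bads = self._STS( self._ExecuteManySelectSingleParam( bad_select, search_tag_ids ) )
    
    searched_tag_ids.update( search_tag_ids )
    
    # assumed: only walk ids we have not seen, or a<->b chains would loop forever
    search_tag_ids = ( goods | bads ).difference( searched_tag_ids )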
@ -9177,7 +9261,9 @@ class DB( HydrusDB.HydrusDB ):
with HydrusDB.TemporaryIntegerTable( self._c, rebalance_phash_ids, 'phash_id' ) as temp_table_name:
# can't turn this into selectfromlist due to the order clause. we need to do this all at once
self._AnalyzeTempTable( temp_table_name )
# can't turn this into an ExecuteManySelect due to the order clause. we need to do it all at once
( biggest_phash_id, ) = self._c.execute( 'SELECT phash_id FROM shape_vptree NATURAL JOIN ' + temp_table_name + ' ORDER BY inner_population + outer_population DESC;' ).fetchone()
@ -9334,9 +9420,7 @@ class DB( HydrusDB.HydrusDB ):
self._c.executemany( 'DELETE FROM shape_maintenance_branch_regen WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) )
select_statement = 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id IN {};'
useful_phash_ids = self._STS( self._SelectFromList( select_statement, unbalanced_phash_ids ) )
useful_phash_ids = self._STS( self._ExecuteManySelectSingleParam( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id = ?;', unbalanced_phash_ids ) )
orphan_phash_ids = unbalanced_phash_ids.difference( useful_phash_ids )
@ -9550,9 +9634,9 @@ class DB( HydrusDB.HydrusDB ):
# adding a delay in seemed to fix it as well. guess it was some memory maintenance buffer/bytes thing
# anyway, we now just get the whole lot of results first and then work on the whole lot
select_statement = 'SELECT phash_id, phash, radius, inner_id, outer_id FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id IN {};'
select_statement = 'SELECT phash_id, phash, radius, inner_id, outer_id FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id = ?;'
results = list( self._SelectFromList( select_statement, group_of_current_potentials ) )
results = list( self._ExecuteManySelectSingleParam( select_statement, group_of_current_potentials ) )
for ( node_phash_id, node_phash, node_radius, inner_phash_id, outer_phash_id ) in results:
@ -9612,9 +9696,7 @@ class DB( HydrusDB.HydrusDB ):
similar_phash_ids = list( similar_phash_ids_to_distances.keys() )
select_statement = 'SELECT phash_id, hash_id FROM shape_perceptual_hash_map WHERE phash_id IN {};'
similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._SelectFromList( select_statement, similar_phash_ids ) )
similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( 'SELECT phash_id, hash_id FROM shape_perceptual_hash_map WHERE phash_id = ?;', similar_phash_ids ) )
similar_hash_ids_to_distances = {}
@ -9674,9 +9756,7 @@ class DB( HydrusDB.HydrusDB ):
pubbed_error = False
select_statement = 'SELECT hash_id, hash FROM hashes WHERE hash_id IN {};'
uncached_hash_ids_to_hashes = dict( self._SelectFromList( select_statement, uncached_hash_ids ) )
uncached_hash_ids_to_hashes = dict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', uncached_hash_ids ) )
if len( uncached_hash_ids_to_hashes ) < len( uncached_hash_ids ):
@ -9720,9 +9800,7 @@ class DB( HydrusDB.HydrusDB ):
if len( uncached_tag_ids ) > 0:
select_statement = 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id IN {};'
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._SelectFromList( select_statement, uncached_tag_ids ) }
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._ExecuteManySelectSingleParam( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', uncached_tag_ids ) }
self._tag_ids_to_tags_cache.update( local_uncached_tag_ids_to_tags )
@ -9731,9 +9809,9 @@ class DB( HydrusDB.HydrusDB ):
if len( uncached_tag_ids ) > 0:
select_statement = 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id IN {};'
select_statement = 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;'
uncached_tag_ids_to_tags = { tag_id : HydrusTags.CombineTag( namespace, subtag ) for ( tag_id, namespace, subtag ) in self._SelectFromList( select_statement, uncached_tag_ids ) }
uncached_tag_ids_to_tags = { tag_id : HydrusTags.CombineTag( namespace, subtag ) for ( tag_id, namespace, subtag ) in self._ExecuteManySelectSingleParam( select_statement, uncached_tag_ids ) }
if len( uncached_tag_ids_to_tags ) < len( uncached_tag_ids ):
@ -9869,7 +9947,7 @@ class DB( HydrusDB.HydrusDB ):
reason_id = self._GetTextId( reason )
self._c.execute( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) )
self._c.executemany( 'INSERT OR IGNORE INTO file_petitions ( service_id, hash_id, reason_id ) VALUES ( ?, ?, ? );', ( ( service_id, hash_id, reason_id ) for hash_id in hash_ids ) )
@ -9881,7 +9959,7 @@ class DB( HydrusDB.HydrusDB ):
hash_ids = self._GetHashIds( hashes )
self._c.execute( 'DELETE FROM file_transfers WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM file_transfers WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) )
notify_new_pending = True
@ -9891,7 +9969,7 @@ class DB( HydrusDB.HydrusDB ):
hash_ids = self._GetHashIds( hashes )
self._c.execute( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM file_petitions WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) )
notify_new_pending = True
@ -10259,7 +10337,7 @@ class DB( HydrusDB.HydrusDB ):
ratings_added = 0
self._c.execute( 'DELETE FROM local_ratings WHERE service_id = ? AND hash_id IN ' + splayed_hash_ids + ';', ( service_id, ) )
self._c.executemany( 'DELETE FROM local_ratings WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) )
ratings_added -= self._GetRowCount()
@ -11295,7 +11373,7 @@ class DB( HydrusDB.HydrusDB ):
backup_depth = HG.client_controller.new_options.GetInteger( 'number_of_gui_session_backups' )
now = HydrusData.GetNow()
object_timestamp = HydrusData.GetNow()
if store_backups:
@ -11303,9 +11381,21 @@ class DB( HydrusDB.HydrusDB ):
existing_timestamps.sort()
if len( existing_timestamps ) > 0:
# the user has changed their system clock, so let's make sure the new timestamp is larger at least
largest_existing_timestamp = max( existing_timestamps )
if largest_existing_timestamp > object_timestamp:
object_timestamp = largest_existing_timestamp + 1
deletee_timestamps = existing_timestamps[ : - backup_depth ] # keep highest n values
deletee_timestamps.append( now ) # if save gets spammed twice in one second, we'll overwrite
deletee_timestamps.append( object_timestamp ) # if save gets spammed twice in one second, we'll overwrite
self._c.executemany( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', [ ( dump_type, dump_name, timestamp ) for timestamp in deletee_timestamps ] )
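This is the clock-rewind guard from the changelog: if any existing backup claims a future timestamp, the new save bumps past it. A toy walkthrough with made-up numbers:

# the clock was rewound, so 'now' is older than the newest existing backup
existing_timestamps = [ 1576600000, 1576700000 ]
object_timestamp = 1576500000  # what HydrusData.GetNow() returned

largest_existing_timestamp = max( existing_timestamps )

if largest_existing_timestamp > object_timestamp:
    
    # 1576700001: the new save now counts as the newest object
    object_timestamp = largest_existing_timestamp + 1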
@ -11318,7 +11408,7 @@ class DB( HydrusDB.HydrusDB ):
try:
self._c.execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, now, dump_buffer ) )
self._c.execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, object_timestamp, dump_buffer ) )
except:
@ -11544,6 +11634,208 @@ class DB( HydrusDB.HydrusDB ):
def _TryToSortHashIds( self, file_service_id, hash_ids, sort_by ):
did_sort = False
( sort_metadata, sort_data ) = sort_by.sort_type
sort_asc = sort_by.sort_asc
query = None
select_args_iterator = None
if sort_metadata == 'system':
simple_sorts = []
simple_sorts.append( CC.SORT_FILES_BY_IMPORT_TIME )
simple_sorts.append( CC.SORT_FILES_BY_FILESIZE )
simple_sorts.append( CC.SORT_FILES_BY_DURATION )
simple_sorts.append( CC.SORT_FILES_BY_WIDTH )
simple_sorts.append( CC.SORT_FILES_BY_HEIGHT )
simple_sorts.append( CC.SORT_FILES_BY_RATIO )
simple_sorts.append( CC.SORT_FILES_BY_NUM_PIXELS )
simple_sorts.append( CC.SORT_FILES_BY_MEDIA_VIEWS )
simple_sorts.append( CC.SORT_FILES_BY_MEDIA_VIEWTIME )
simple_sorts.append( CC.SORT_FILES_BY_APPROX_BITRATE )
simple_sorts.append( CC.SORT_FILES_BY_FILE_MODIFIED_TIMESTAMP )
if sort_data in simple_sorts:
if sort_data == CC.SORT_FILES_BY_IMPORT_TIME:
query = 'SELECT hash_id, timestamp FROM files_info NATURAL JOIN current_files WHERE hash_id = ? AND service_id = ?;'
select_args_iterator = ( ( hash_id, file_service_id ) for hash_id in hash_ids )
else:
if sort_data == CC.SORT_FILES_BY_FILESIZE:
query = 'SELECT hash_id, size FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_DURATION:
query = 'SELECT hash_id, duration FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_WIDTH:
query = 'SELECT hash_id, width FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_HEIGHT:
query = 'SELECT hash_id, height FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_RATIO:
query = 'SELECT hash_id, width, height FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_NUM_PIXELS:
query = 'SELECT hash_id, width, height FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_MEDIA_VIEWS:
query = 'SELECT hash_id, media_views FROM file_viewing_stats WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_MEDIA_VIEWTIME:
query = 'SELECT hash_id, media_viewtime FROM file_viewing_stats WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_APPROX_BITRATE:
query = 'SELECT hash_id, duration, num_frames, size, width, height FROM files_info WHERE hash_id = ?;'
elif sort_data == CC.SORT_FILES_BY_FILE_MODIFIED_TIMESTAMP:
query = 'SELECT hash_id, file_modified_timestamp FROM file_modified_timestamps WHERE hash_id = ?;'
select_args_iterator = ( ( hash_id, ) for hash_id in hash_ids )
if sort_data == CC.SORT_FILES_BY_RATIO:
def key( row ):
width = row[1]
height = row[2]
if width is None or height is None:
return -1
else:
return width / height
elif sort_data == CC.SORT_FILES_BY_NUM_PIXELS:
def key( row ):
width = row[1]
height = row[2]
if width is None or height is None or width == 0 or height == 0:
return -1
else:
return width * height
elif sort_data == CC.SORT_FILES_BY_APPROX_BITRATE:
def key( row ):
duration = row[1]
num_frames = row[2]
size = row[3]
width = row[4]
height = row[5]
if duration is None or duration == 0:
if size is None or size == 0:
duration_bitrate = -1
frame_bitrate = -1
else:
duration_bitrate = 0
if width is None or height is None:
frame_bitrate = 0
else:
num_pixels = width * height
if size is None or size == 0 or num_pixels == 0:
frame_bitrate = -1
else:
frame_bitrate = size / num_pixels
else:
if size is None or size == 0:
duration_bitrate = -1
frame_bitrate = -1
else:
duration_bitrate = size / duration
if num_frames is None or num_frames == 0:
frame_bitrate = 0
else:
frame_bitrate = duration_bitrate / num_frames
return ( duration_bitrate, frame_bitrate )
else:
key = lambda row: -1 if row[1] is None else row[1]
reverse = sort_asc == CC.SORT_DESC
if query is not None:
hash_ids_and_other_data = list( self._ExecuteManySelect( query, select_args_iterator ) )
hash_ids_and_other_data.sort( key = key, reverse = reverse )
hash_ids = [ row[0] for row in hash_ids_and_other_data ]
did_sort = True
return ( did_sort, hash_ids )
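The whole routine reduces to a fetch-sort-truncate pattern: fetch one small row per hash_id through the executemany-style SELECTs above, sort in python with missing values mapped to -1 so they sink together, and hand the ordered ids back so a subsequent limit takes the top of the sorted order. A minimal sketch with made-up data (sort_then_limit and the rows are illustrative):

def sort_then_limit( rows, key, sort_desc, limit ):
    
    # rows are ( hash_id, data... ) tuples, one per file
    rows.sort( key = key, reverse = sort_desc )
    
    return [ row[0] for row in rows ][ : limit ]

# resolution-ratio sort: rows are ( hash_id, width, height ); files with unknown
# dimensions get -1 and sink to the bottom of a descending sort
rows = [ ( 1, 1920, 1080 ), ( 2, None, None ), ( 3, 1000, 1000 ) ]
ratio_key = lambda row: -1 if row[1] is None or row[2] is None else row[1] / row[2]

print( sort_then_limit( rows, ratio_key, sort_desc = True, limit = 2 ) ) # [1, 3]

Note the approx-bitrate key above returns a ( duration_bitrate, frame_bitrate ) tuple, so python's ordinary tuple comparison falls back to the per-frame rate when the per-second rates tie.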
def _UpdateDB( self, version ):
self._controller.pub( 'splash_set_status_text', 'updating db to v' + str( version + 1 ) )
@@ -13556,9 +13848,10 @@ class DB( HydrusDB.HydrusDB ):
for ( tag_id, hash_ids ) in pending_mappings_ids:
select_statement = 'SELECT hash_id FROM ' + current_mappings_table_name + ' WHERE tag_id = ' + str( tag_id ) + ' AND hash_id IN {};'
select_statement = 'SELECT hash_id FROM ' + current_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;'
select_args_iterator = ( ( tag_id, hash_id ) for hash_id in hash_ids )
existing_current_hash_ids = self._STS( self._SelectFromList( select_statement, hash_ids ) )
existing_current_hash_ids = self._STS( self._ExecuteManySelect( select_statement, select_args_iterator ) )
valid_hash_ids = set( hash_ids ).difference( existing_current_hash_ids )

View File

@@ -289,7 +289,7 @@ class MenuUpdaterPages( ClientGUIAsync.AsyncQtUpdater ):
gui_session_names_to_backup_timestamps = {}
return gui_session_names_to_backup_timestamps
return ( gui_session_names, gui_session_names_to_backup_timestamps )
def _publishLoading( self ):
@@ -299,9 +299,9 @@ class MenuUpdaterPages( ClientGUIAsync.AsyncQtUpdater ):
def _publishResult( self, result ):
gui_session_names_to_backup_timestamps = result
( gui_session_names, gui_session_names_to_backup_timestamps ) = result
( menu, label ) = self._win.GenerateMenuInfoPages( gui_session_names_to_backup_timestamps )
( menu, label ) = self._win.GenerateMenuInfoPages( gui_session_names, gui_session_names_to_backup_timestamps )
self._win.ReplaceMenu( 'pages', menu, label )
@@ -507,9 +507,16 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
aboutinfo.SetDescription( description )
with open( HC.LICENSE_PATH, 'r', encoding = 'utf-8' ) as f:
if os.path.exists( HC.LICENSE_PATH ):
license = f.read()
with open( HC.LICENSE_PATH, 'r', encoding = 'utf-8' ) as f:
license = f.read()
else:
license = 'no licence file found!'
aboutinfo.SetLicense( license )
@@ -547,7 +554,12 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
message += os.linesep * 2
message += 'A \'full\' analyze will force a run over every index in the database. This can take substantially longer. If you do not have a specific reason to select this, it is probably pointless.'
result = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'Choose how thorough your analyze will be.', yes_label = 'soft', no_label = 'full' )
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'Choose how thorough your analyze will be.', yes_label = 'soft', no_label = 'full', check_for_cancelled = True )
if was_cancelled:
return
if result == QW.QDialog.Accepted:
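Spelled out end to end, the new three-way flow looks like this (do_soft_analyze and do_full_analyze are hypothetical stand-ins for the two jobs); the same pattern recurs below for the clear-orphans, vacuum, and gallery log dialogs, with check_for_cancelled distinguishing the window close button from the red 'no' button:

( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'Choose how thorough your analyze will be.', yes_label = 'soft', no_label = 'full', check_for_cancelled = True )

if was_cancelled:
    
    return # the window was closed: neither answer was chosen, so do nothing
    

if result == QW.QDialog.Accepted:
    
    do_soft_analyze() # the 'yes' label
    
else:
    
    do_full_analyze() # the 'no' label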
@@ -960,7 +972,12 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
client_files_manager = self._controller.client_files_manager
result = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Choose what to do with the orphans.', yes_label = 'move them somewhere', no_label = 'delete them' )
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Choose what to do with the orphans.', yes_label = 'move them somewhere', no_label = 'delete them', check_for_cancelled = True )
if was_cancelled:
return
if result == QW.QDialog.Accepted:
@@ -3498,7 +3515,12 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
text += os.linesep * 2
text += 'A \'full\' vacuum will immediately force a vacuum for the entire database. This can take substantially longer.'
result = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Choose how thorough your vacuum will be.', yes_label = 'soft', no_label = 'full' )
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Choose how thorough your vacuum will be.', yes_label = 'soft', no_label = 'full', check_for_cancelled = True )
if was_cancelled:
return
if result == QW.QDialog.Accepted:
@@ -4648,7 +4670,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
return ( menu, '&file' )
def GenerateMenuInfoPages( self, gui_session_names_to_backup_timestamps ):
def GenerateMenuInfoPages( self, gui_session_names, gui_session_names_to_backup_timestamps ):
menu = QW.QMenu( self )
@@ -4685,7 +4707,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
sessions = QW.QMenu( menu )
gui_session_names = list( gui_session_names_to_backup_timestamps.keys() )
gui_session_names = list( gui_session_names )
gui_session_names.sort()
@@ -4722,6 +4744,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
submenu = QW.QMenu( append_backup )
for timestamp in timestamps:
ClientGUIMenus.AppendMenuItem( submenu, HydrusData.ConvertTimestampToPrettyTime( timestamp ), 'Append this backup session to whatever pages are already open.', self._notebook.AppendGUISessionBackup, name, timestamp )
@@ -5337,7 +5360,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
message = 'Session "{}" already exists! Do you want to overwrite it?'.format( name )
result, closed_by_user = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'Overwrite existing session?', yes_label = 'yes, overwrite', no_label = 'no, choose another name', check_for_cancelled = True )
( result, closed_by_user ) = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'Overwrite existing session?', yes_label = 'yes, overwrite', no_label = 'no, choose another name', check_for_cancelled = True )
if closed_by_user:

View File

@@ -536,7 +536,8 @@ class AutoCompleteDropdown( QW.QWidget ):
self._dropdown_window = QW.QFrame( self )
self._dropdown_window.setFrameStyle( QW.QFrame.Box | QW.QFrame.Plain )
self._dropdown_window.setFrameShape( QW.QFrame.NoFrame )
#self._dropdown_window.setFrameStyle( QW.QFrame.Box | QW.QFrame.Raised )
self._dropdown_notebook = QW.QTabWidget( self._dropdown_window )

View File

@@ -2430,7 +2430,6 @@ class StaticBox( QW.QFrame ):
QW.QFrame.__init__( self, parent )
self.setFrameStyle( QW.QFrame.Box | QW.QFrame.Raised )
self._spacer = QW.QSpacerItem( 0, 0, QW.QSizePolicy.Minimum, QW.QSizePolicy.MinimumExpanding )
self._sizer = QP.VBoxLayout()

View File

@@ -1029,7 +1029,7 @@ class DialogSelectFromURLTree( Dialog ):
item = QW.QTreeWidgetItem()
item.setText( 0, item_name )
item.setCheckState( root.checkState() )
item.setCheckState( 0, root.checkState( 0 ) )
item.setData( 0, QC.Qt.UserRole, data )
root.addChild( item )
@@ -1037,7 +1037,7 @@ class DialogSelectFromURLTree( Dialog ):
subroot = QW.QTreeWidgetItem()
subroot.setText( 0, item_name )
subroot.setCheckState( root.checkState() )
subroot.setCheckState( 0, root.checkState( 0 ) )
root.addChild( subroot )
self._AddDirectory( subroot, data )
@@ -1048,25 +1048,24 @@ class DialogSelectFromURLTree( Dialog ):
def _GetSelectedChildrenData( self, parent_item ):
result = []
child_iterator = QW.QTreeWidgetItemIterator( parent_item )
for child_item in child_iterator:
for i in range( parent_item.childCount() ):
data = child_item.data( 0, QC.Qt.UserRole )
child_item = parent_item.child( i )
if data is None:
if child_item.checkState( 0 ) == QC.Qt.Checked:
result.extend( self._GetSelectedChildrenData( child_item ) )
data = child_item.data( 0, QC.Qt.UserRole )
else:
if child_item.checkState() == QC.Qt.Checked:
if data is not None:
result.append( data )
result.extend( self._GetSelectedChildrenData( child_item ) )
return result
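The corrected walk, as a self-contained sketch with plain python stand-ins for the Qt tree items: visit direct children by index, skip unchecked branches entirely, collect each checked child's payload, and recurse only into checked subtrees (Node and the sample tree are made up for illustration):

class Node:
    
    def __init__( self, data = None, checked = True, children = None ):
        
        self.data = data
        self.checked = checked
        self.children = children if children is not None else []
        
    

def get_selected_children_data( parent_item ):
    
    result = []
    
    for child_item in parent_item.children:
        
        # only checked children count, and only checked branches are descended into
        if child_item.checked:
            
            if child_item.data is not None:
                
                result.append( child_item.data )
                
            
            result.extend( get_selected_children_data( child_item ) )
            
        
    
    return result

tree = Node( children = [ Node( 'a' ), Node( 'b', checked = False ), Node( children = [ Node( 'c' ) ] ) ] )
print( get_selected_children_data( tree ) ) # ['a', 'c']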

View File

@@ -398,7 +398,12 @@ class GallerySeedLogButton( ClientGUICommon.BetterBitmapButton ):
message = 'Of the ' + HydrusData.ToHumanInt( num_urls ) + ' URLs you mean to add, ' + HydrusData.ToHumanInt( num_removed ) + ' are already in the gallery log. Would you like to only add new URLs or add everything (which will force a re-check of the duplicates)?'
result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'only add new urls', no_label = 'add all urls, even duplicates' )
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'only add new urls', no_label = 'add all urls, even duplicates', check_for_cancelled = True )
if was_cancelled:
return
if result == QW.QDialog.Accepted:
@@ -416,9 +421,9 @@ class GallerySeedLogButton( ClientGUICommon.BetterBitmapButton ):
message = 'Would you like these urls to only check for new files, or would you like them to also generate subsequent gallery pages, like a regular search would?'
result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'just check what I am adding', no_label = 'start a potential new search for every url added' )
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'just check what I am adding', no_label = 'start a potential new search for every url added', check_for_cancelled = True )
if result == QW.QDialog.Rejected:
if was_cancelled:
return

View File

@@ -832,7 +832,7 @@ class ListBox( QW.QScrollArea ):
def __init__( self, parent, height_num_chars = 10 ):
QW.QScrollArea.__init__( self, parent )
self.setFrameShape( QW.QFrame.StyledPanel )
self.setFrameStyle( QW.QFrame.Panel | QW.QFrame.Sunken )
self.setHorizontalScrollBarPolicy( QC.Qt.ScrollBarAlwaysOff )
self.setVerticalScrollBarPolicy( QC.Qt.ScrollBarAsNeeded )
self.setWidget( ListBox._InnerWidget( self ) )
@@ -849,18 +849,13 @@ class ListBox( QW.QScrollArea ):
self._last_view_start = None
self._height_num_chars = height_num_chars
self._minimum_height_num_chars = 8
self._num_rows_per_page = 0
self.setFont( QW.QApplication.font() )
text_height = self.fontMetrics().height()
self.verticalScrollBar().setSingleStep( text_height )
( min_width, min_height ) = ClientGUIFunctions.ConvertTextToPixels( self, ( 16, height_num_chars ) )
QP.SetMinClientSize( self, ( min_width, min_height ) )
self._widget_event_filter = QP.WidgetEventFilter( self.widget() )
self._widget_event_filter.EVT_LEFT_DOWN( self.EventMouseSelect )
@@ -878,17 +873,6 @@ class ListBox( QW.QScrollArea ):
return QP.isValid( self )
def sizeHint( self ):
size_hint = QW.QScrollArea.sizeHint( self )
text_height = self.fontMetrics().height()
size_hint.setHeight( 9 * text_height + self.height() - self.viewport().height() )
return size_hint
def _Activate( self ):
pass
@@ -1529,6 +1513,19 @@ class ListBox( QW.QScrollArea ):
return len( self._ordered_terms ) > 0
def minimumSizeHint( self ):
size_hint = QW.QScrollArea.minimumSizeHint( self )
text_height = self.fontMetrics().height()
minimum_height = self._minimum_height_num_chars * text_height + ( self.frameWidth() * 2 )
size_hint.setHeight( minimum_height )
return size_hint
def MoveSelectionDown( self ):
if len( self._ordered_terms ) > 1 and self._last_hit_index is not None:
@@ -1553,6 +1550,24 @@ class ListBox( QW.QScrollArea ):
def SetMinimumHeightNumChars( self, minimum_height_num_chars ):
self._minimum_height_num_chars = minimum_height_num_chars
def sizeHint( self ):
size_hint = QW.QScrollArea.sizeHint( self )
text_height = self.fontMetrics().height()
ideal_height = self._height_num_chars * text_height + ( self.frameWidth() * 2 )
size_hint.setHeight( ideal_height )
return size_hint
class ListBoxTags( ListBox ):
ors_are_under_construction = False
@@ -2921,7 +2936,7 @@ class ListBoxTagsSelection( ListBoxTags ):
def __init__( self, parent, tag_display_type, include_counts = True, show_sibling_description = False ):
ListBoxTags.__init__( self, parent, height_num_chars = 12 )
ListBoxTags.__init__( self, parent, height_num_chars = 24 )
self._sort = HC.options[ 'default_tag_sort' ]
@@ -3235,6 +3250,8 @@ class ListBoxTagsSelectionManagementPanel( ListBoxTagsSelection ):
ListBoxTagsSelection.__init__( self, parent, tag_display_type, include_counts = True )
self._minimum_height_num_chars = 15
self._page_key = page_key
self._get_current_predicates_callable = predicates_callable

View File

@@ -678,8 +678,8 @@ class ManagementPanel( QW.QScrollArea ):
self.setFrameShape( QW.QFrame.NoFrame )
self.setWidget( QW.QWidget() )
self.setWidgetResizable( True )
self.setFrameStyle( QW.QFrame.Panel | QW.QFrame.Sunken )
self.setLineWidth( 2 )
#self.setFrameStyle( QW.QFrame.Panel | QW.QFrame.Sunken )
#self.setLineWidth( 2 )
#self.setHorizontalScrollBarPolicy( QC.Qt.ScrollBarAlwaysOff )
self.setVerticalScrollBarPolicy( QC.Qt.ScrollBarAsNeeded )
@@ -1815,6 +1815,7 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
ClientGUIMenus.AppendMenuItem( menu, 'show all importers\' presented files', 'Gather the presented files for the selected importers and show them in a new page.', self._ShowSelectedImportersFiles, show='presented' )
ClientGUIMenus.AppendMenuItem( menu, 'show all importers\' new files', 'Gather the presented files for the selected importers and show them in a new page.', self._ShowSelectedImportersFiles, show='new' )
ClientGUIMenus.AppendMenuItem( menu, 'show all importers\' files', 'Gather the presented files for the selected importers and show them in a new page.', self._ShowSelectedImportersFiles, show='all' )
ClientGUIMenus.AppendMenuItem( menu, 'show all importers\' files (including trash)', 'Gather the presented files (including trash) for the selected importers and show them in a new page.', self._ShowSelectedImportersFiles, show='all_and_trash' )
if self._CanRetryFailed():
@@ -2061,7 +2062,7 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
gallery_hashes = gallery_import.GetNewHashes()
elif show == 'all':
elif show in ( 'all', 'all_and_trash' ):
gallery_hashes = gallery_import.GetHashes()
@@ -2072,7 +2073,16 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
seen_hashes.update( new_hashes )
hashes = HG.client_controller.Read( 'filter_hashes', hashes, CC.LOCAL_FILE_SERVICE_KEY )
if show == 'all_and_trash':
filter_file_service_key = CC.COMBINED_LOCAL_FILE_SERVICE_KEY
else:
filter_file_service_key = CC.LOCAL_FILE_SERVICE_KEY
hashes = HG.client_controller.Read( 'filter_hashes', hashes, filter_file_service_key )
if len( hashes ) > 0:
@@ -2489,6 +2499,7 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' presented files', 'Gather the presented files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='presented' )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' new files', 'Gather the presented files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='new' )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' files', 'Gather the presented files for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='all' )
ClientGUIMenus.AppendMenuItem( menu, 'show all watchers\' files (including trash)', 'Gather the presented files (including trash) for the selected watchers and show them in a new page.', self._ShowSelectedImportersFiles, show='all_and_trash' )
if self._CanRetryFailed():
@@ -2731,7 +2742,7 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
watcher_hashes = watcher.GetNewHashes()
elif show == 'all':
elif show in ( 'all', 'all_and_trash' ):
watcher_hashes = watcher.GetHashes()
@@ -2742,7 +2753,16 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
seen_hashes.update( new_hashes )
hashes = HG.client_controller.Read( 'filter_hashes', hashes, CC.LOCAL_FILE_SERVICE_KEY )
if show == 'all_and_trash':
filter_file_service_key = CC.COMBINED_LOCAL_FILE_SERVICE_KEY
else:
filter_file_service_key = CC.LOCAL_FILE_SERVICE_KEY
hashes = HG.client_controller.Read( 'filter_hashes', hashes, filter_file_service_key )
if len( hashes ) > 0:
@@ -4326,7 +4346,9 @@ class ManagementPanelQuery( ManagementPanel ):
self._query_job_key = ClientThreading.JobKey()
self._controller.CallToThread( self.THREADDoQuery, self._controller, self._page_key, self._query_job_key, file_search_context )
sort_by = self._media_sort.GetSort()
self._controller.CallToThread( self.THREADDoQuery, self._controller, self._page_key, self._query_job_key, file_search_context, sort_by )
panel = ClientGUIMedia.MediaPanelLoading( self._page, self._page_key, file_service_key )
@@ -4480,7 +4502,7 @@ class ManagementPanelQuery( ManagementPanel ):
def THREADDoQuery( self, controller, page_key, query_job_key, search_context ):
def THREADDoQuery( self, controller, page_key, query_job_key, search_context, sort_by ):
def qt_code():
@@ -4498,7 +4520,7 @@ class ManagementPanelQuery( ManagementPanel ):
HG.client_controller.file_viewing_stats_manager.Flush()
query_hash_ids = controller.Read( 'file_query_ids', search_context, job_key = query_job_key )
query_hash_ids = controller.Read( 'file_query_ids', search_context, job_key = query_job_key, limit_sort_by = sort_by )
if query_job_key.IsCancelled():

View File

@@ -734,7 +734,14 @@ class Page( QW.QSplitter ):
def SetMediaResults( self, media_results ):
file_service_key = self._management_controller.GetKey( 'file_service' )
if self._management_controller.IsImporter():
file_service_key = CC.LOCAL_FILE_SERVICE_KEY
else:
file_service_key = self._management_controller.GetKey( 'file_service' )
media_panel = ClientGUIMedia.MediaPanelThumbnails( self, self._page_key, file_service_key, media_results )

View File

@@ -5709,15 +5709,6 @@ class EditSelectFromListPanel( ClientGUIScrolledPanels.EditPanel ):
self._list = QW.QListWidget( self )
self._list.itemDoubleClicked.connect( self.EventSelect )
max_width = max( ( len( label ) for ( label, value ) in choice_tuples ) )
width_chars = min( 36, max_width )
height_chars = max( 6, len( choice_tuples ) )
l_size = ClientGUIFunctions.ConvertTextToPixels( self._list, ( width_chars, height_chars ) )
QP.SetMinClientSize( self._list, l_size )
#
selected_a_value = False
@@ -5763,6 +5754,24 @@ class EditSelectFromListPanel( ClientGUIScrolledPanels.EditPanel ):
#
max_label_width_chars = max( ( len( label ) for ( label, value ) in choice_tuples ) )
width_chars = min( 64, max_label_width_chars + 2 )
height_chars = min( max( 6, len( choice_tuples ) ), 36 )
( width_px, height_px ) = ClientGUIFunctions.ConvertTextToPixels( self._list, ( width_chars, height_chars ) )
row_height_px = self._list.sizeHintForRow( 0 )
if row_height_px != -1:
height_px = row_height_px * height_chars
# wew lad, but it 'works'
# formalise this and make a 'stretchy qlistwidget' class
self._list.sizeHint = lambda: QC.QSize( width_px, height_px )
vbox = QP.VBoxLayout()
QP.AddToLayout( vbox, self._list, CC.FLAGS_EXPAND_BOTH_WAYS )
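A sketch of the 'stretchy qlistwidget' class the comment above proposes, assuming the qtpy bindings the client uses elsewhere: the per-row height probe moves into sizeHint, so no instance monkeypatching is needed.

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

class StretchyListWidget( QW.QListWidget ):
    
    def __init__( self, parent = None, width_px = 320, height_num_rows = 6 ):
        
        QW.QListWidget.__init__( self, parent )
        
        self._width_px = width_px
        self._height_num_rows = height_num_rows
        
    
    def sizeHint( self ):
        
        # prefer the real row height once items exist; fall back to the font height
        row_height_px = self.sizeHintForRow( 0 )
        
        if row_height_px == -1:
            
            row_height_px = self.fontMetrics().height()
            
        
        return QC.QSize( self._width_px, row_height_px * self._height_num_rows )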

View File

@@ -426,18 +426,25 @@ class Shortcut( HydrusSerialisable.SerialisableBase ):
elif self._shortcut_type == CC.SHORTCUT_TYPE_KEYBOARD_CHARACTER:
if ClientData.OrdIsAlphaUpper( self._shortcut_key ):
try:
components.append( chr( self._shortcut_key + 32 ) ) # + 32 for converting ascii A -> a
if ClientData.OrdIsAlphaUpper( self._shortcut_key ):
components.append( chr( self._shortcut_key + 32 ) ) # + 32 for converting ascii A -> a
else:
components.append( chr( self._shortcut_key ) )
else:
except:
components.append( chr( self._shortcut_key ) )
components.append( 'unknown key: {}'.format( repr( self._shortcut_key ) ) )
else:
components.append( 'unknown key' )
components.append( 'unknown key: {}'.format( repr( self._shortcut_key ) ) )
s = '+'.join( components )
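The hardened conversion above, condensed into a sketch: lowercase an ascii uppercase ordinal, otherwise chr() whatever arrives, and let the except catch the exotic virtual keys that chr() rejects (the function names here are illustrative):

def ord_is_alpha_upper( o ):
    
    return ord( 'A' ) <= o <= ord( 'Z' )
    

def shortcut_key_to_component( shortcut_key ):
    
    try:
        
        if ord_is_alpha_upper( shortcut_key ):
            
            return chr( shortcut_key + 32 ) # ascii 'A' -> 'a'
            
        else:
            
            return chr( shortcut_key )
            
        
    except:
        
        return 'unknown key: {}'.format( repr( shortcut_key ) )
        
    

print( shortcut_key_to_component( 65 ) )       # a
print( shortcut_key_to_component( 0x110000 ) ) # unknown key: 1114112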

View File

@@ -275,6 +275,11 @@ def THREADDownloadURLs( job_key, urls, title ):
presentation_hashes_fast.add( hash )
if len( presentation_hashes ) > 0:
job_key.SetVariable( 'popup_files', ( presentation_hashes, 'downloads' ) )
elif status == CC.STATUS_DELETED:
num_deleted += 1

View File

@@ -1696,22 +1696,22 @@ class MediaList( object ):
if action == HC.CONTENT_UPDATE_DELETE:
local_file_domains = HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_FILE_DOMAIN, ) )
non_trash_local_file_services = list( local_file_domains ) + [ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ]
all_local_file_services = list( non_trash_local_file_services ) + [ CC.TRASH_SERVICE_KEY ]
#
physically_deleted = service_key in ( CC.TRASH_SERVICE_KEY, CC.COMBINED_LOCAL_FILE_SERVICE_KEY )
trashed = service_key in local_file_domains
deleted_from_our_domain = service_key == self._file_service_key
physically_deleted_and_local_view = physically_deleted and self._file_service_key in all_local_file_services
user_says_remove_and_trashed_from_our_local_file_domain = HC.options[ 'remove_trashed_files' ] and trashed and deleted_from_our_domain
user_says_remove_and_trashed_from_non_trash_local_view = HC.options[ 'remove_trashed_files' ] and trashed and deleted_from_our_domain
deleted_from_repo_and_repo_view = service_key not in all_local_file_services and deleted_from_our_domain
if physically_deleted_and_local_view or user_says_remove_and_trashed_from_our_local_file_domain or deleted_from_repo_and_repo_view:
if physically_deleted_and_local_view or user_says_remove_and_trashed_from_non_trash_local_view or deleted_from_repo_and_repo_view:
self._RemoveMediaByHashes( hashes )

View File

@@ -2134,19 +2134,51 @@ class ServiceIPFS( ServiceRemote ):
self._nocopy_abs_path_translations = dictionary[ 'nocopy_abs_path_translations' ]
def _ConvertMultihashToURLTree( self, name, size, multihash ):
def _GetAPIBaseURL( self ):
api_base_url = self._GetAPIBaseURL()
( host, port ) = self._credentials.GetAddress()
api_base_url = 'http://' + host + ':' + str( port ) + '/api/v0/'
return api_base_url
def ConvertMultihashToURLTree( self, name, size, multihash, job_key = None ):
with self._lock:
api_base_url = self._GetAPIBaseURL()
links_url = api_base_url + 'object/links/' + multihash
network_job = ClientNetworkingJobs.NetworkJob( 'GET', links_url )
network_job.OverrideBandwidth()
if job_key is not None:
job_key.SetVariable( 'popup_network_job', network_job )
HG.client_controller.network_engine.AddJob( network_job )
network_job.WaitUntilDone()
try:
network_job.OverrideBandwidth()
HG.client_controller.network_engine.AddJob( network_job )
network_job.WaitUntilDone()
finally:
if job_key is not None:
job_key.SetVariable( 'popup_network_job', None )
if job_key.IsCancelled():
raise HydrusExceptions.CancelledException( 'Multihash parsing cancelled by user.' )
parsing_text = network_job.GetContentText()
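The publish/clear discipline above, condensed into a sketch (wait_on_network_job is a hypothetical helper; job_key and network_job follow the client's own interfaces): the popup exposes the live network control for the duration of the wait, and the finally guarantees it is cleared again even if the wait raises.

def wait_on_network_job( network_job, job_key = None ):
    
    if job_key is not None:
        
        # surface the network control on the popup so the user can watch/cancel it
        job_key.SetVariable( 'popup_network_job', network_job )
        
    
    try:
        
        HG.client_controller.network_engine.AddJob( network_job )
        
        network_job.WaitUntilDone()
        
    finally:
        
        if job_key is not None:
            
            job_key.SetVariable( 'popup_network_job', None )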
@@ -2175,7 +2207,7 @@ class ServiceIPFS( ServiceRemote ):
subsize = link[ 'Size' ]
submultihash = link[ 'Hash' ]
children.append( self._ConvertMultihashToURLTree( subname, subsize, submultihash ) )
children.append( self.ConvertMultihashToURLTree( subname, subsize, submultihash, job_key = job_key ) )
if size is None:
@@ -2193,15 +2225,6 @@ class ServiceIPFS( ServiceRemote ):
def _GetAPIBaseURL( self ):
( host, port ) = self._credentials.GetAddress()
api_base_url = 'http://' + host + ':' + str( port ) + '/api/v0/'
return api_base_url
def EnableNoCopy( self, value ):
with self._lock:
@@ -2305,28 +2328,37 @@ class ServiceIPFS( ServiceRemote ):
def on_qt_select_tree( job_key, url_tree ):
from . import ClientGUIDialogs
with ClientGUIDialogs.DialogSelectFromURLTree( HG.client_controller.gui, url_tree ) as dlg:
try:
urls_good = False
from . import ClientGUIDialogs
if dlg.exec() == QW.QDialog.Accepted:
with ClientGUIDialogs.DialogSelectFromURLTree( HG.client_controller.gui, url_tree ) as dlg:
urls = dlg.GetURLs()
urls_good = False
if len( urls ) > 0:
if dlg.exec() == QW.QDialog.Accepted:
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURLs, job_key, urls, multihash )
urls = dlg.GetURLs()
urls_good = True
if len( urls ) > 0:
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURLs, job_key, urls, multihash )
urls_good = True
if not urls_good:
job_key.Delete()
if not urls_good:
job_key.Delete()
except:
job_key.Delete()
raise
@@ -2341,11 +2373,11 @@ class ServiceIPFS( ServiceRemote ):
HG.client_controller.pub( 'message', job_key )
with self._lock:
try:
try:
url_tree = self._ConvertMultihashToURLTree( multihash, None, multihash )
url_tree = self.ConvertMultihashToURLTree( multihash, None, multihash, job_key = job_key )
except HydrusExceptions.NotFoundException:
@@ -2360,18 +2392,24 @@ class ServiceIPFS( ServiceRemote ):
return
if url_tree[0] == 'file':
if url_tree[0] == 'file':
url = url_tree[3]
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURL, job_key, url, multihash )
else:
job_key.SetVariable( 'popup_text_1', 'Waiting for user selection' )
QP.CallAfter( on_qt_select_tree, job_key, url_tree )
url = url_tree[3]
except:
HG.client_controller.CallToThread( ClientImporting.THREADDownloadURL, job_key, url, multihash )
job_key.Delete()
else:
job_key.SetVariable( 'popup_text_1', 'Waiting for user selection' )
QP.CallAfter( on_qt_select_tree, job_key, url_tree )
raise

View File

@@ -67,7 +67,7 @@ options = {}
# Misc
NETWORK_VERSION = 18
SOFTWARE_VERSION = 377
SOFTWARE_VERSION = 378
CLIENT_API_VERSION = 11
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@@ -277,6 +277,14 @@ class HydrusDB( object ):
def _AnalyzeTempTable( self, temp_table_name ):
# this is useful to do after populating a temp table so the query planner can decide which index to use in a big join that uses it
self._c.execute( 'ANALYZE {};'.format( temp_table_name ) )
self._c.execute( 'ANALYZE mem.sqlite_master;' ) # this reloads the current stats into the query planner, may no longer be needed
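A runnable illustration of the idea with generic sqlite3 (this is not the client's attached-database setup): populate a temp table, ANALYZE it, and the planner gains sqlite_stat1 stats to weigh before joining against it.

import sqlite3

db = sqlite3.connect( ':memory:' )
c = db.cursor()

c.execute( 'CREATE TEMP TABLE temp_hash_ids ( hash_id INTEGER );' )
c.execute( 'CREATE INDEX temp_hash_ids_hash_id_index ON temp_hash_ids ( hash_id );' )

c.executemany( 'INSERT INTO temp_hash_ids VALUES ( ? );', ( ( i, ) for i in range( 1000 ) ) )

c.execute( 'ANALYZE temp_hash_ids;' ) # writes row/selectivity stats for the new table

print( c.execute( 'SELECT * FROM temp.sqlite_stat1;' ).fetchall() )
# e.g. [('temp_hash_ids', 'temp_hash_ids_hash_id_index', '1000 1')]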
def _AttachExternalDatabases( self ):
for ( name, filename ) in list(self._db_filenames.items()):
@@ -390,6 +398,30 @@ class HydrusDB( object ):
HydrusData.DebugPrint( message )
def _ExecuteManySelectSingleParam( self, query, single_param_iterator ):
select_args_iterator = ( ( param, ) for param in single_param_iterator )
return self._ExecuteManySelect( query, select_args_iterator )
def _ExecuteManySelect( self, query, select_args_iterator ):
# back in python 2, we did batches of 256 hash_ids/whatever at a time in big "hash_id IN (?,?,?,?,...)" predicates.
# this was useful to get over some 100,000 x fetchall() call overhead, but it would sometimes throw the SQLite query planner off and do non-optimal queries
# (basically, the "hash_id IN (256 params)" would weight the hash_id index request x 256 vs another when comparing the sqlite_stat1 tables, which could lead to WEWLAD for some indices with a low-median, very-high-mean skewed distribution)
# python 3 is better about call overhead, so we'll go back to what is pure: a simple SELECT, executed once per set of args
for select_args in select_args_iterator:
for result in self._c.execute( query, select_args ):
yield result
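A self-contained usage sketch of the new routine with a toy schema (the table here is illustrative): one short, cacheable statement is prepared once and streamed over many parameter tuples.

import sqlite3

db = sqlite3.connect( ':memory:' )
c = db.cursor()

c.execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER );' )
c.executemany( 'INSERT INTO files_info VALUES ( ?, ? );', [ ( 1, 100 ), ( 2, 250 ), ( 3, 80 ) ] )

def execute_many_select( cursor, query, select_args_iterator ):
    
    # same shape as _ExecuteManySelect: one simple SELECT, executed per args tuple
    for select_args in select_args_iterator:
        
        for result in cursor.execute( query, select_args ):
            
            yield result
            
        
    

hash_ids = [ 1, 3 ]
query = 'SELECT hash_id, size FROM files_info WHERE hash_id = ?;'

print( list( execute_many_select( c, query, ( ( hash_id, ) for hash_id in hash_ids ) ) ) )
# [(1, 100), (3, 80)]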
def _GetRowCount( self ):
row_count = self._c.rowcount
@@ -684,49 +716,6 @@ class HydrusDB( object ):
self._c.execute( 'SAVEPOINT hydrus_savepoint;' )
def _SelectFromList( self, select_statement, xs ):
# issue here is that doing a simple blah_id = ? is real quick and cacheable but doing a lot of fetchone()s is slow
# blah_id IN ( 1, 2, 3 ) is fast to execute but not cacheable and doing the str() list splay takes time so there is initial lag
# doing the temporaryintegertable trick works well for gigantic lists you refer to frequently but it is super laggy when you sometimes are only selecting four things
# blah_id IN ( ?, ?, ? ) is fast and cacheable but there's a small limit (1024 is too many) to the number of params sql can handle
# so let's do the latter, but break it into 256-strong chunks to get a good medium
# this will take a select statement with {} like so:
# SELECT blah_id, blah FROM blahs WHERE blah_id IN {};
MAX_CHUNK_SIZE = 256
# do this just so we aren't always reproducing this long string for gigantic lists
# and also so we aren't overmaking it when this gets spammed with a lot of len() == 1 calls
if len( xs ) >= MAX_CHUNK_SIZE:
max_statement = select_statement.format( '({})'.format( ','.join( '?' * MAX_CHUNK_SIZE ) ) )
for chunk in HydrusData.SplitListIntoChunks( xs, MAX_CHUNK_SIZE ):
if len( chunk ) == MAX_CHUNK_SIZE:
chunk_statement = max_statement
else:
chunk_statement = select_statement.format( '({})'.format( ','.join( '?' * len( chunk ) ) ) )
for row in self._c.execute( chunk_statement, chunk ):
yield row
def _SelectFromListFetchAll( self, select_statement, xs ):
return [ row for row in self._SelectFromList( select_statement, xs ) ]
def _ShrinkMemory( self ):
self._c.execute( 'PRAGMA shrink_memory;' )

View File

@@ -1756,7 +1756,7 @@ class RadioBox( QW.QFrame ):
QW.QFrame.__init__( self, parent )
self.setFrameStyle( QW.QFrame.Box | QW.QFrame.Plain )
self.setFrameStyle( QW.QFrame.Box | QW.QFrame.Raised )
if vertical:
@@ -2148,17 +2148,18 @@ class TreeWidgetWithInheritedCheckState( QW.QTreeWidget ):
QW.QTreeWidget.__init__( self, *args, **kwargs )
self.itemClicked.connect( self._UpdateCheckState )
self.itemClicked.connect( self._HandleItemClickedForCheckStateUpdate )
def _HandleItemClickedForCheckStateUpdate( self, item, column ):
self._UpdateCheckState( item, item.checkState() )
self._UpdateCheckState( item, item.checkState( 0 ) )
def _UpdateCheckState( self, item, check_state ):
item.setCheckState( check_state )
# this is an int, should be a checkstate
item.setCheckState( 0, check_state )
for i in range( item.childCount() ):

View File

@@ -1,3 +1,4 @@
import collections
import hashlib
from . import HydrusConstants as HC
from . import HydrusDB
@@ -790,9 +791,9 @@ class DB( HydrusDB.HydrusDB ):
def _GetHashes( self, master_hash_ids ):
select_statement = 'SELECT hash FROM hashes WHERE master_hash_id IN {};'
select_statement = 'SELECT hash FROM hashes WHERE master_hash_id = ?;'
return [ hash for ( hash, ) in self._SelectFromList( select_statement, master_hash_ids ) ]
return [ hash for ( hash, ) in self._ExecuteManySelectSingleParam( select_statement, master_hash_ids ) ]
def _GetMasterHashId( self, hash ):
@@ -1336,9 +1337,9 @@ class DB( HydrusDB.HydrusDB ):
else:
select_statement = 'SELECT service_hash_id FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id = ?;'
deleted_service_hash_ids = self._STI( self._SelectFromList( select_statement, service_hash_ids ) )
deleted_service_hash_ids = self._STI( self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) )
service_hash_ids = set( service_hash_ids ).difference( deleted_service_hash_ids )
@@ -1582,9 +1583,9 @@ class DB( HydrusDB.HydrusDB ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE service_hash_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;'
valid_service_hash_ids = self._STL( self._SelectFromList( select_statement, service_hash_ids ) )
valid_service_hash_ids = self._STL( self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) )
self._RepositoryRewardFilePetitioners( service_id, valid_service_hash_ids, 1 )
@@ -1598,9 +1599,9 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
select_statement = 'SELECT service_hash_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id = ?;'
valid_service_hash_ids = self._STL( self._SelectFromList( select_statement, service_hash_ids ) )
valid_service_hash_ids = self._STL( self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) )
self._RepositoryRewardMappingPetitioners( service_id, service_tag_id, valid_service_hash_ids, 1 )
@@ -2117,9 +2118,9 @@ class DB( HydrusDB.HydrusDB ):
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )
select_statement = 'SELECT master_hash_id FROM ' + hash_id_map_table_name + ' WHERE service_hash_id IN {};'
select_statement = 'SELECT master_hash_id FROM ' + hash_id_map_table_name + ' WHERE service_hash_id = ?;'
master_hash_ids = [ master_hash_id for ( master_hash_id, ) in self._SelectFromList( select_statement, service_hash_ids ) ]
master_hash_ids = [ master_hash_id for ( master_hash_id, ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) ]
if len( service_hash_ids ) != len( master_hash_ids ):
@@ -2613,9 +2614,9 @@ class DB( HydrusDB.HydrusDB ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE service_hash_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;'
valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._SelectFromList( select_statement, service_hash_ids ) ]
valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) ]
self._c.executemany( 'REPLACE INTO ' + petitioned_files_table_name + ' ( service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ? );', ( ( service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ) )
@@ -2624,9 +2625,9 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
select_statement = 'SELECT service_hash_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id = ?;'
valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._SelectFromList( select_statement, service_hash_ids ) ]
valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) ]
self._c.executemany( 'REPLACE INTO ' + petitioned_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', [ ( service_tag_id, service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ] )
@@ -2968,9 +2969,16 @@ class DB( HydrusDB.HydrusDB ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
select_statement = 'SELECT account_id, COUNT( * ) FROM ' + petitioned_files_table_name + ' WHERE service_hash_id IN {} GROUP BY account_id;'
select_statement = 'SELECT account_id, COUNT( * ) FROM ' + petitioned_files_table_name + ' WHERE service_hash_id = ? GROUP BY account_id;'
scores = [ ( account_id, count * multiplier ) for ( account_id, count ) in self._SelectFromList( select_statement, service_hash_ids ) ]
counter = collections.Counter()
for ( account_id, count ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ):
counter[ account_id ] += count
scores = [ ( account_id, count * multiplier ) for ( account_id, count ) in counter.items() ]
self._RewardAccounts( service_id, HC.SCORE_PETITION, scores )
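Since the per-id SELECT can return the same account_id many times where the old IN-splay GROUP BY returned it once, the rows now have to be merged by hand; a toy check of the tally (the data is made up):

import collections

counter = collections.Counter()

for ( account_id, count ) in [ ( 1, 3 ), ( 2, 1 ), ( 1, 2 ) ]:
    
    counter[ account_id ] += count
    

multiplier = 1

scores = [ ( account_id, count * multiplier ) for ( account_id, count ) in counter.items() ]

print( scores ) # [(1, 5), (2, 1)]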
@@ -2979,9 +2987,16 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
select_statement = 'SELECT account_id, COUNT( * ) FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id IN {} GROUP BY account_id;'
select_statement = 'SELECT account_id, COUNT( * ) FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ' + str( service_tag_id ) + ' AND service_hash_id = ? GROUP BY account_id;'
scores = [ ( account_id, count * multiplier ) for ( account_id, count ) in self._SelectFromList( select_statement, service_hash_ids ) ]
counter = collections.Counter()
for ( account_id, count ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ):
counter[ account_id ] += count
scores = [ ( account_id, count * multiplier ) for ( account_id, count ) in counter.items() ]
self._RewardAccounts( service_id, HC.SCORE_PETITION, scores )
@@ -3094,9 +3109,9 @@ class DB( HydrusDB.HydrusDB ):
( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE account_id IN {};'
select_statement = 'SELECT service_hash_id FROM ' + current_files_table_name + ' WHERE account_id = ?;'
service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._SelectFromList( select_statement, subject_account_ids ) ]
service_hash_ids = self._STL( self._ExecuteManySelectSingleParam( select_statement, subject_account_ids ) )
if len( service_hash_ids ) > 0:
@@ -3105,9 +3120,9 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
select_statement = 'SELECT service_tag_id, service_hash_id FROM ' + current_mappings_table_name + ' WHERE account_id IN {};'
select_statement = 'SELECT service_tag_id, service_hash_id FROM ' + current_mappings_table_name + ' WHERE account_id = ?;'
mappings_dict = HydrusData.BuildKeyToListDict( self._SelectFromList( select_statement, subject_account_ids ) )
mappings_dict = HydrusData.BuildKeyToListDict( self._ExecuteManySelectSingleParam( select_statement, subject_account_ids ) )
if len( mappings_dict ) > 0:
@@ -3119,9 +3134,9 @@ class DB( HydrusDB.HydrusDB ):
( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )
select_statement = 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + current_tag_parents_table_name + ' WHERE account_id IN {};'
select_statement = 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + current_tag_parents_table_name + ' WHERE account_id = ?;'
pairs = self._SelectFromListFetchAll( select_statement, subject_account_ids )
pairs = list( self._ExecuteManySelectSingleParam( select_statement, subject_account_ids ) )
if len( pairs ) > 0:
@@ -3133,9 +3148,9 @@ class DB( HydrusDB.HydrusDB ):
( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )
select_statement = 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + current_tag_siblings_table_name + ' WHERE account_id IN {};'
select_statement = 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + current_tag_siblings_table_name + ' WHERE account_id = ?;'
pairs = self._SelectFromListFetchAll( select_statement, subject_account_ids )
pairs = list( self._ExecuteManySelectSingleParam( select_statement, subject_account_ids ) )
if len( pairs ) > 0: