Version 392

This commit is contained in:
Hydrus Network Developer 2020-04-08 16:10:11 -05:00
parent 73cf390fae
commit 4b29d91b54
35 changed files with 1199 additions and 402 deletions

View File

@ -8,6 +8,48 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 392</h3></li>
<ul>
<li>db-level tag sibling cache:</li>
<li>the hydrus client db now maintains a fast cache of current+pending tag-to-ideal-tag sibling relationships. it works for specific services and 'all known tags'. this is a nice tool and the first step in having a proper hard-baked siblings mappings cache</li>
<li>the new sibling cache can be regenerated under _database->regenerate_. the 'autocomplete cache' entry under that menu is also renamed to the now more appropriate 'tag mappings cache'</li>
<li>the db repair system can regenerate this new cache if any part is missing on boot</li>
<li>the lookup that finds tag sibling matches for autocomplete uses this and is now faster, specific to the searched service, more accurate about status, and now includes pending siblings</li>
<li>wrote a new unified object to manage a collection of tag siblings, it is now in use at the db level</li>
<li>as I continue to develop this new fast tech, the old 'apply all sibs to all services' option, which was always buggy, may sometimes fail to apply. I will ultimately replace it with a fuller per-service choice system that will work quickly and properly and in the same unified way</li>
<li>fixed a bug where only one local tag service's siblings would be matched at the ui level when looking at 'all known tags'</li>
<li>fixed a bug in the file search code where searching for a tag that had an unnamespaced sibling going to it would result in searching for all possible namespaces of that sibling (e.g. searching for 'character:samus aran' when 'samus aran'->'character:samus aran' sibling existed would result in effectively 'anything:samus aran')</li>
<li>when tag services are deleted, they are now better about cleaning up their siblings and parents quickly</li>
<li>optimised some tag and hash id->value database cache population routines to improve performance for large queries (e.g. when fetching all the tag parents/siblings on boot). also these caches are now larger, 100k instead of 25k</li>
<li>all cache regen code now forces an immediate analyze of the new tables to speed up imminent access</li>
<li>.</li>
<li>the rest:</li>
<li>updated the default e621 file page parser to get rating tags again (looks like their markup just changed again)</li>
<li>updated the default sankaku file page parser to get their recently redefined 'genre' tags</li>
<li>in edit subscriptions, the 'overwrite tag import options/check options' actions now initialise their dialogs with the current value for the first subscription, rather than the global program default</li>
<li>in the edit subscription panel, the checker options button is moved down to the file/tag import options</li>
<li>when not in advanced mode, the edit tag import options panel now has some red-text at the top to reinforce to new users that they should generally use the defaults</li>
<li>the tag import options blacklist now secondarily checks against all known siblings of the parsed tags, rather than just the 'collapsed' ideal siblings</li>
<li>subscriptions are now more aggressive about clearing out old urls from their file import caches--instead of clearing the 251st url after it has aged twice the death period (DP), they now use just one DP. also, checkers with static checker timings will use five times that check period as the DP if that is smaller. static checkers, or those that never die, will use a flat value of six months as the DP if that is smaller</li>
<li>moved a bunch of the debug 'data actions' to a new 'memory actions' menu</li>
<li>significantly reduced how often the system tray regenerates its menu, which seems to improve stability</li>
<li>fixed an issue where guis that were maximised before a minimise would restore from a system tray icon click to normal view, rather than maximised</li>
<li>double-clicking the system tray when the ui is hidden should no longer do a fast show/hide</li>
<li>fixed an issue where if the gui was minimised, the main animation timer would not run for other windows (e.g. a separate media viewer)</li>
<li>improved ui shown/hidden tracking logic for the new system tray icon for different OSes</li>
<li>fixed the 'refresh_page_of_pages_pages' shortcut action, which had faulty old wx code in it</li>
<li>fixed a wx->Qt bug where modal popups that cannot be cancelled, and thus pop up a 'sorry, you can't dismiss this' text when you try to close them, were nonetheless still closing afterwards</li>
<li>the hydrus client and server now attempt to have their services listen on both IPv4 and IPv6, failing gracefully if IPv6 is not available</li>
<li>the 'is this a localhost request?' check now understands IPv6 localhost (::1 or ::ffff:127.0.0.1)</li>
<li>may have solved a 100% cpu repaint issue with the a/c dropdown in some qt environments</li>
<li>added info to installing help about Windows N and clean installs</li>
<li>misc media viewer wx->Qt code cleanup</li>
<li>misc code cleanup</li>
<li>.</li>
<li>experimental hellzone, be wary ye scabs:</li>
<li>added an experimental 'sub-gallery url' url content type to the parsing and downloading system. this url is queued into the gallery log even if the primary gallery page found no file/post urls, and is meant for galleries that link to galleries. not yet ready for primetime, but feedback would be appreciated</li>
<li>added an experimental ui-hang relief mode, activated under _help->debug->data actions->db ui-hang relief mode_, which _should_ stop the ui hanging in unusual long-time ui-synchronous db jobs. it may cause other problems, so it is default off. it also prints begin/end statements to log for additional info. users who experience ui hang due to db job processing time are invited to play with this mode and report back results</li>
</ul>
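The IPv6 localhost change above can be sketched with the standard library. This is a hedged illustration, not hydrus's actual check: the helper name `is_localhost` is hypothetical, and the key detail is that an IPv4-mapped IPv6 loopback (::ffff:127.0.0.1) must be unmapped before testing, since Python's `IPv6Address.is_loopback` is only true for ::1.

```python
import ipaddress

def is_localhost(host: str) -> bool:
    # hypothetical sketch of an 'is this a localhost request?' check
    # that understands IPv4 loopback, ::1, and ::ffff:127.0.0.1
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # not a literal address; accept only the name itself
        return host == 'localhost'
    if ip.version == 6 and ip.ipv4_mapped is not None:
        # unwrap ::ffff:a.b.c.d to its IPv4 form before the loopback test
        ip = ip.ipv4_mapped
    return ip.is_loopback
```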
<li><h3>version 391</h3></li>
<ul>
<li>system tray icon:</li>
@ -44,6 +86,7 @@
<li>cleaned some thumbnail selection and rendering code, particularly fixing some edge case 'where that media go?' issues where collect-by calls happen during thumbnail waterfalls and so on</li>
<li>cleaned up some page file domain setting code and misc page management code</li>
<li>improved accuracy of rendered image cache memory footprint calculations</li>
<li>fixed some Qt signal object definitions that were causing errors for some users who run from source</li>
</ul>
<li><h3>version 390</h3></li>
<ul>

View File

@ -17,6 +17,7 @@
<li>If you want the easy solution, download the .exe installer. Run it, hit ok several times.</li>
<li>If you know what you are doing and want a little more control, get the .zip. Don't extract it to Program Files unless you are willing to run it as administrator every time (it stores all its user data inside its own folder). You probably want something like D:\hydrus.</li>
<li><i>Note if you run &lt;Win10, you may need <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145">Visual C++ Redistributable for Visual Studio 2015</a>, if you don't already have it for vidya. If you run Win7, you will need some/all core OS updates released before 2017.</i></li>
<li>If you use Windows 10 N (a version of Windows without some media playback features), you will likely need the 'Media Feature Pack'. There have been several versions of this, so it may be best found by searching for the latest version or hitting Windows Update, but otherwise check <a href="https://support.microsoft.com/en-us/help/3145500/media-feature-pack-list-for-windows-n-editions">here</a>.</li>
</ul>
<p>for macOS:</p>
<ul>
@ -50,6 +51,18 @@
<p>Unless the update specifically disables or reconfigures something, all your files and tags and settings will be remembered after the update.</p>
<p>Releases typically need to update your database to their version. New releases can retroactively perform older database updates, so if the new version is v255 but your database is on v250, you generally only need to get the v255 release, and it'll do all the intervening v250->v251, v251->v252, etc... update steps in order as soon as you boot it. If you need to update from a release more than, say, ten versions older than current, see below. You might also like to skim the release posts or <a href="changelog.html">changelog</a> to see what is new.</p>
<p>Clients and servers of different versions can usually connect to one another, but from time to time, I make a change to the network protocol, and you will get polite error messages if you try to connect to a newer server with an older client or <i>vice versa</i>. There is still no <i>need</i> to update the client--it'll still do local stuff like searching for files completely fine. Read my release posts and judge for yourself what you want to do.</p>
<h3 id="clean_installs">clean installs</h3>
<p><b>This is only relevant if you update and get an odd error about dlls when you try to boot.</b></p>
<p>Very rarely, hydrus needs a clean install. This can be due to a special update, like when we moved from 32-bit to 64-bit, or a need to otherwise 'reset' a custom install situation. The problem is usually that a library file has been renamed in a new version, and hydrus has trouble figuring out whether to use the older one (from a previous version) or the newer.</p>
<p>In any case, if you cannot boot hydrus and instead get a crash log or system-level error popup complaining in a technical way about not being able to load a dll/pyd/so file, you may need a clean install, which essentially means clearing any old files out and reinstalling.</p>
<p>However, you need to be careful not to delete your database! It sounds silly, but at least one user has made a mistake here. The process is simple, do not deviate:</p>
<ul>
<li>Make a backup if you can!</li>
<li>Go to your install directory.</li>
<li>Delete all the files and folders except the 'db' dir (and all of its contents, obviously).</li>
<li>Reinstall/extract hydrus as you normally do.</li>
</ul>
<p>After that, you'll have a 'clean' version of hydrus that only has the latest version's dlls. If hydrus still will not boot, I recommend you roll back to your last working backup and let me, hydrus dev, know what your error is.</p>
<h3 id="big_updates">big updates</h3>
<p>If you have not updated in some time--say twenty versions or more--doing it all in one jump, like v250->v290, is likely not going to work. I am doing a lot of unusual stuff with hydrus, change my code at a fast pace, and do not have a ton of testing in place. Hydrus update code often falls to <a href="https://en.wikipedia.org/wiki/Software_rot">bitrot</a>, and so some underlying truth I assumed for the v255->v256 code may not still apply six months later. If you try to update more than 50 versions at once (i.e. trying to perform more than a year of updates in one go), the client will give you a polite error rather than even try.</p>
<p>As a result, if you get a failure on trying to do a big update, try cutting the distance in half--try v270 first, and then if that works, try v270->v290. If it doesn't, try v260, and so on.</p>

View File

@ -1365,6 +1365,8 @@ class Controller( HydrusController.HydrusController ):
self.app = App( self._pubsub, sys.argv )
self.main_qt_thread = self.app.thread()
HydrusData.Print( 'booting controller\u2026' )
self.frame_icon_pixmap = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'hydrus_32_non-transparent.png' ) )
@ -1540,7 +1542,34 @@ class Controller( HydrusController.HydrusController ):
http_factory = ClientLocalServer.HydrusServiceClientAPI( service, allow_non_local_connections = allow_non_local_connections )
self._service_keys_to_connected_ports[ service_key ] = reactor.listenTCP( port, http_factory )
ipv6_port = None
try:
ipv6_port = reactor.listenTCP( port, http_factory, interface = '::' )
except Exception as e:
HydrusData.Print( 'Could not bind to IPv6:' )
HydrusData.Print( str( e ) )
ipv4_port = None
try:
ipv4_port = reactor.listenTCP( port, http_factory )
except:
if ipv6_port is None:
raise
self._service_keys_to_connected_ports[ service_key ] = ( ipv4_port, ipv6_port )
if not HydrusNetworking.LocalPortInUse( port ):
@ -1563,11 +1592,21 @@ class Controller( HydrusController.HydrusController ):
deferreds = []
for port in self._service_keys_to_connected_ports.values():
for ( ipv4_port, ipv6_port ) in self._service_keys_to_connected_ports.values():
deferred = defer.maybeDeferred( port.stopListening )
if ipv4_port is not None:
deferred = defer.maybeDeferred( ipv4_port.stopListening )
deferreds.append( deferred )
deferreds.append( deferred )
if ipv6_port is not None:
deferred = defer.maybeDeferred( ipv6_port.stopListening )
deferreds.append( deferred )
self._service_keys_to_connected_ports = {}
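The controller's new bind-both-families logic above can be sketched with plain stdlib sockets in place of the Twisted reactor. This is a minimal illustration under that substitution; the function name `bind_dual_stack` is hypothetical, and the shape mirrors the diff: try IPv6, tolerate its absence, then bind IPv4, and only raise if neither family worked.

```python
import socket

def bind_dual_stack(port):
    # hypothetical sketch of dual IPv4/IPv6 listening with graceful
    # IPv6 fallback, echoing the reactor.listenTCP pairing above
    ipv6_sock = None
    try:
        ipv6_sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        # keep this socket IPv6-only; IPv4 gets its own socket below
        ipv6_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        ipv6_sock.bind(('::', port))
    except OSError:
        ipv6_sock = None
    try:
        ipv4_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ipv4_sock.bind(('0.0.0.0', port))
    except OSError:
        ipv4_sock = None
        if ipv6_sock is None:
            raise  # neither family worked: surface the error
    return (ipv4_sock, ipv6_sock)
```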

View File

@ -40,6 +40,7 @@ import stat
import time
import traceback
import typing
from qtpy import QtCore as QC
from qtpy import QtWidgets as QW
from . import QtPorting as QP
@ -184,19 +185,19 @@ def DealWithBrokenJSONDump( db_dir, dump, dump_descriptor ):
def GenerateCombinedFilesMappingsCacheTableName( service_id ):
return 'external_caches.combined_files_ac_cache_' + str( service_id )
return 'external_caches.combined_files_ac_cache_{}'.format( service_id )
def GenerateMappingsTableNames( service_id ):
suffix = str( service_id )
current_mappings_table_name = 'external_mappings.current_mappings_' + suffix
current_mappings_table_name = 'external_mappings.current_mappings_{}'.format( suffix )
deleted_mappings_table_name = 'external_mappings.deleted_mappings_' + suffix
deleted_mappings_table_name = 'external_mappings.deleted_mappings_{}'.format( suffix )
pending_mappings_table_name = 'external_mappings.pending_mappings_' + suffix
pending_mappings_table_name = 'external_mappings.pending_mappings_{}'.format( suffix )
petitioned_mappings_table_name = 'external_mappings.petitioned_mappings_' + suffix
petitioned_mappings_table_name = 'external_mappings.petitioned_mappings_{}'.format( suffix )
return ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name )
@ -204,14 +205,14 @@ def GenerateRepositoryMasterCacheTableNames( service_id ):
suffix = str( service_id )
hash_id_map_table_name = 'external_master.repository_hash_id_map_' + suffix
tag_id_map_table_name = 'external_master.repository_tag_id_map_' + suffix
hash_id_map_table_name = 'external_master.repository_hash_id_map_{}'.format( suffix )
tag_id_map_table_name = 'external_master.repository_tag_id_map_{}'.format( suffix )
return ( hash_id_map_table_name, tag_id_map_table_name )
def GenerateRepositoryRepositoryUpdatesTableName( service_id ):
repository_updates_table_name = 'repository_updates_' + str( service_id )
repository_updates_table_name = 'repository_updates_{}'.format( service_id )
return repository_updates_table_name
@ -219,18 +220,22 @@ def GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ):
suffix = str( file_service_id ) + '_' + str( tag_service_id )
cache_files_table_name = 'external_caches.specific_files_cache_' + suffix
cache_files_table_name = 'external_caches.specific_files_cache_{}'.format( suffix )
cache_current_mappings_table_name = 'external_caches.specific_current_mappings_cache_' + suffix
cache_current_mappings_table_name = 'external_caches.specific_current_mappings_cache_{}'.format( suffix )
cache_deleted_mappings_table_name = 'external_caches.specific_deleted_mappings_cache_' + suffix
cache_deleted_mappings_table_name = 'external_caches.specific_deleted_mappings_cache_{}'.format( suffix )
cache_pending_mappings_table_name = 'external_caches.specific_pending_mappings_cache_' + suffix
cache_pending_mappings_table_name = 'external_caches.specific_pending_mappings_cache_{}'.format( suffix )
ac_cache_table_name = 'external_caches.specific_ac_cache_' + suffix
ac_cache_table_name = 'external_caches.specific_ac_cache_{}'.format( suffix )
return ( cache_files_table_name, cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name, ac_cache_table_name )
def GenerateTagSiblingsLookupCacheTableName( service_id ):
return 'external_caches.tag_siblings_lookup_cache_{}'.format( service_id )
def report_content_speed_to_job_key( job_key, rows_done, total_rows, precise_timestamp, num_rows, row_name ):
it_took = HydrusData.GetNowPrecise() - precise_timestamp
@ -268,6 +273,21 @@ def report_speed_to_log( precise_timestamp, num_rows, row_name ):
HydrusData.Print( summary )
class JobDatabaseClient( HydrusData.JobDatabase ):
def _DoDelayedResultRelief( self ):
if HG.db_ui_hang_relief_mode:
if QC.QThread.currentThread() == HG.client_controller.main_qt_thread:
HydrusData.Print( 'ui-hang event processing: begin' )
QW.QApplication.instance().processEvents()
HydrusData.Print( 'ui-hang event processing: end' )
class DB( HydrusDB.HydrusDB ):
READ_WRITE_ACTIONS = [ 'service_info', 'system_predicates', 'missing_thumbnail_hashes' ]
@ -391,6 +411,13 @@ class DB( HydrusDB.HydrusDB ):
service_id = self._c.lastrowid
if service_type == HC.COMBINED_TAG:
self._combined_tag_service_id = service_id
self._CacheTagSiblingsLookupGenerate( service_id )
if service_type in HC.REPOSITORIES:
repository_updates_table_name = GenerateRepositoryRepositoryUpdatesTableName( service_id )
@ -410,6 +437,8 @@ class DB( HydrusDB.HydrusDB ):
#
self._CacheTagSiblingsLookupGenerate( service_id )
self._CacheCombinedFilesMappingsGenerate( service_id )
file_service_ids = self._GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES )
@ -463,8 +492,11 @@ class DB( HydrusDB.HydrusDB ):
for ( bad_tag_id, good_tag_id ) in pairs:
tag_ids.add( bad_tag_id )
tag_ids.add( good_tag_id )
self._CacheTagSiblingsUpdateChains( service_id, tag_ids )
for tag_id in tag_ids:
self._FillInParents( service_id, tag_id, make_content_updates = make_content_updates )
@ -744,6 +776,8 @@ class DB( HydrusDB.HydrusDB ):
self._AnalyzeTable( ac_cache_table_name )
def _CacheCombinedFilesMappingsGetAutocompleteCounts( self, service_id, tag_ids ):
@ -789,6 +823,8 @@ class DB( HydrusDB.HydrusDB ):
self._CacheLocalTagIdsPotentialAdd( block_of_tag_ids )
self._AnalyzeTable( 'external_caches.local_tags_cache' )
def _CacheLocalTagIdsPotentialAdd( self, tag_ids ):
@ -874,6 +910,9 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'CREATE TABLE ' + hash_id_map_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );' )
self._c.execute( 'CREATE TABLE ' + tag_id_map_table_name + ' ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );' )
self._AnalyzeTable( hash_id_map_table_name )
self._AnalyzeTable( tag_id_map_table_name )
def _CacheSpecificMappingsAddFiles( self, file_service_id, tag_service_id, hash_ids ):
@ -1183,6 +1222,12 @@ class DB( HydrusDB.HydrusDB ):
self._CreateIndex( cache_deleted_mappings_table_name, [ 'tag_id', 'hash_id' ], unique = True )
self._CreateIndex( cache_pending_mappings_table_name, [ 'tag_id', 'hash_id' ], unique = True )
self._AnalyzeTable( cache_files_table_name )
self._AnalyzeTable( cache_current_mappings_table_name )
self._AnalyzeTable( cache_deleted_mappings_table_name )
self._AnalyzeTable( cache_pending_mappings_table_name )
self._AnalyzeTable( ac_cache_table_name )
def _CacheSpecificMappingsGetAutocompleteCounts( self, file_service_id, tag_service_id, tag_ids ):
@ -1242,6 +1287,159 @@ class DB( HydrusDB.HydrusDB ):
def _CacheTagSiblingsLookupDrop( self, tag_service_id ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( tag_service_id )
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_tag_siblings_lookup_table_name ) )
def _CacheTagSiblingsLookupGenerate( self, tag_service_id ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( tag_service_id )
self._c.execute( 'CREATE TABLE {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_tag_siblings_lookup_table_name ) )
#
if tag_service_id == self._combined_tag_service_id:
# until we have the nice system here, we'll do the old-fashioned local, then remote precedence
search_tag_service_ids = self._GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = [ tag_service_id ]
tss = ClientTags.TagSiblingsStructure()
for search_tag_service_id in search_tag_service_ids:
for ( bad_tag_id, good_tag_id ) in self._c.execute( 'SELECT bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? AND status = ?;', ( search_tag_service_id, HC.CONTENT_STATUS_CURRENT ) ):
tss.AddPair( bad_tag_id, good_tag_id )
for search_tag_service_id in search_tag_service_ids:
for ( bad_tag_id, good_tag_id ) in self._c.execute( 'SELECT bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ? AND status = ?;', ( search_tag_service_id, HC.CONTENT_STATUS_PENDING ) ):
tss.AddPair( bad_tag_id, good_tag_id )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
self._CreateIndex( cache_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
self._AnalyzeTable( cache_tag_siblings_lookup_table_name )
def _CacheTagSiblingsLookupGetAdditionalSiblings( self, tag_service_id, tag_ids ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( tag_service_id )
sibling_tag_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
sibling_tag_ids.update( self._STI( self._c.execute( 'SELECT bad_tag_id FROM {}, {} ON ( ideal_tag_id = tag_id );'.format( cache_tag_siblings_lookup_table_name, temp_table_name ) ) ) )
sibling_tag_ids.update( self._STI( self._c.execute( 'SELECT ideal_tag_id FROM {}, {} ON ( bad_tag_id = tag_id );'.format( cache_tag_siblings_lookup_table_name, temp_table_name ) ) ) )
sibling_tag_ids.difference_update( tag_ids )
return sibling_tag_ids
def _CacheTagSiblingsUpdateChains( self, tag_service_id, tag_ids, regenerate_existing_entry = True ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( tag_service_id )
tag_ids = set( tag_ids )
tag_ids.update( self._CacheTagSiblingsLookupGetAdditionalSiblings( tag_service_id, tag_ids ) )
if regenerate_existing_entry:
tag_ids_to_do = tag_ids
else:
tag_ids_to_do = set()
for tag_id in tag_ids:
result = self._c.execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone()
no_entry_yet = result is None
if no_entry_yet:
tag_ids_to_do.add( tag_id )
if len( tag_ids_to_do ) == 0:
return
self._c.executemany( 'DELETE FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_do ) )
if tag_service_id == self._combined_tag_service_id:
# until we have the nice system here, we'll do the old-fashioned local, then remote precedence
search_tag_service_ids = self._GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = [ tag_service_id ]
tss = ClientTags.TagSiblingsStructure()
for search_tag_service_id in search_tag_service_ids:
for tag_id in tag_ids_to_do:
some_pairs = self._c.execute( 'SELECT bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? AND ( bad_tag_id = ? OR good_tag_id = ? ) AND status = ?;', ( search_tag_service_id, tag_id, tag_id, HC.CONTENT_STATUS_CURRENT ) ).fetchall()
for ( bad_tag_id, good_tag_id ) in some_pairs:
tss.AddPair( bad_tag_id, good_tag_id )
for search_tag_service_id in search_tag_service_ids:
for tag_id in tag_ids_to_do:
some_pairs = self._c.execute( 'SELECT bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ? AND ( bad_tag_id = ? OR good_tag_id = ? ) AND status = ?;', ( search_tag_service_id, tag_id, tag_id, HC.CONTENT_STATUS_PENDING ) ).fetchall()
for ( bad_tag_id, good_tag_id ) in some_pairs:
tss.AddPair( bad_tag_id, good_tag_id )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
if tag_service_id != self._combined_tag_service_id:
self._CacheTagSiblingsUpdateChains( self._combined_tag_service_id, tag_ids, regenerate_existing_entry = regenerate_existing_entry )
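The `ClientTags.TagSiblingsStructure` object used above is not included in this diff. A rough stand-in for the collapse it appears to perform, given the insert of `GetBadTagsToIdealTags().items()`, might look like the following: the first pair for a given bad tag wins, a pair that would close a loop is skipped, and chains are then followed to their final ideal tag. Names and tie-breaking rules here are assumptions, not the real implementation.

```python
def collapse_sibling_pairs(pairs):
    # hypothetical stand-in for ClientTags.TagSiblingsStructure
    bad_to_good = {}
    for (bad, good) in pairs:
        if bad == good or bad in bad_to_good:
            continue  # a bad tag keeps its first mapping
        # following the chain from `good` must never lead back to `bad`
        node = good
        creates_loop = False
        while node in bad_to_good:
            node = bad_to_good[node]
            if node == bad:
                creates_loop = True
                break
        if not creates_loop:
            bad_to_good[bad] = good
    # collapse chains so every bad tag points at its final ideal tag
    bad_to_ideal = {}
    for bad in bad_to_good:
        node = bad
        while node in bad_to_good:
            node = bad_to_good[node]
        bad_to_ideal[bad] = node
    return bad_to_ideal
```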
def _CheckDBIntegrity( self ):
prefix_string = 'checking db integrity: '
@ -1588,13 +1786,13 @@ class DB( HydrusDB.HydrusDB ):
init_service_info = []
init_service_info.append( ( CC.COMBINED_TAG_SERVICE_KEY, HC.COMBINED_TAG, 'all known tags' ) )
init_service_info.append( ( CC.COMBINED_FILE_SERVICE_KEY, HC.COMBINED_FILE, 'all known files' ) )
init_service_info.append( ( CC.COMBINED_LOCAL_FILE_SERVICE_KEY, HC.COMBINED_LOCAL_FILE, 'all local files' ) )
init_service_info.append( ( CC.LOCAL_FILE_SERVICE_KEY, HC.LOCAL_FILE_DOMAIN, 'my files' ) )
init_service_info.append( ( CC.TRASH_SERVICE_KEY, HC.LOCAL_FILE_TRASH_DOMAIN, 'trash' ) )
init_service_info.append( ( CC.LOCAL_UPDATE_SERVICE_KEY, HC.LOCAL_FILE_DOMAIN, 'repository updates' ) )
init_service_info.append( ( CC.DEFAULT_LOCAL_TAG_SERVICE_KEY, HC.LOCAL_TAG, 'my tags' ) )
init_service_info.append( ( CC.COMBINED_FILE_SERVICE_KEY, HC.COMBINED_FILE, 'all known files' ) )
init_service_info.append( ( CC.COMBINED_TAG_SERVICE_KEY, HC.COMBINED_TAG, 'all known tags' ) )
init_service_info.append( ( CC.LOCAL_BOORU_SERVICE_KEY, HC.LOCAL_BOORU, 'local booru' ) )
init_service_info.append( ( CC.LOCAL_NOTES_SERVICE_KEY, HC.LOCAL_NOTES, 'local notes' ) )
init_service_info.append( ( CC.CLIENT_API_SERVICE_KEY, HC.CLIENT_API_SERVICE, 'client api' ) )
@ -2009,6 +2207,17 @@ class DB( HydrusDB.HydrusDB ):
#
self._c.execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) )
self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) )
self._c.execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) )
self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) )
self._CacheTagSiblingsLookupDrop( service_id )
self._CacheTagSiblingsLookupDrop( self._combined_tag_service_id )
self._CacheTagSiblingsLookupGenerate( self._combined_tag_service_id )
self._CacheCombinedFilesMappingsDrop( service_id )
file_service_ids = self._GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES )
@ -2071,6 +2280,16 @@ class DB( HydrusDB.HydrusDB ):
self._c.executemany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) )
tag_ids = set()
for ( bad_tag_id, good_tag_id ) in pairs:
tag_ids.add( bad_tag_id )
tag_ids.add( good_tag_id )
self._CacheTagSiblingsUpdateChains( service_id, tag_ids )
def _DeleteYAMLDump( self, dump_type, dump_name = None ):
@ -4024,6 +4243,11 @@ class DB( HydrusDB.HydrusDB ):
return hashes_result
def _GenerateDBJob( self, job_type, synchronous, action, *args, **kwargs ):
return JobDatabaseClient( job_type, synchronous, action, *args, **kwargs )
def _GenerateMappingsTables( self, service_id ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
@ -4266,21 +4490,7 @@ class DB( HydrusDB.HydrusDB ):
sibling_service_id = self._GetServiceId( sibling_service_key )
sibling_tag_ids = set()
for group_of_tag_ids in HydrusData.SplitIteratorIntoChunks( tag_ids, 1024 ):
if job_key is not None and job_key.IsCancelled():
return set()
group_of_sibling_tag_ids = self._GetTagSiblingIds( sibling_service_id, group_of_tag_ids )
sibling_tag_ids.update( group_of_sibling_tag_ids )
tag_ids.update( sibling_tag_ids )
tag_ids.update( self._CacheTagSiblingsLookupGetAdditionalSiblings( sibling_service_id, tag_ids ) )
return tag_ids
@ -6145,11 +6355,11 @@ class DB( HydrusDB.HydrusDB ):
return hash_ids
def _GetHashIdsFromTag( self, file_service_key, tag_service_key, tag, include_current_tags, include_pending_tags, hash_ids_table_name = None ):
def _GetHashIdsFromTag( self, file_service_key, tag_service_key, search_tag, include_current_tags, include_pending_tags, hash_ids_table_name = None ):
siblings_manager = self._controller.tag_siblings_manager
tags = siblings_manager.GetAllSiblings( tag_service_key, tag )
tags = siblings_manager.GetAllSiblings( tag_service_key, search_tag )
predicate_strings = []
@ -6157,7 +6367,12 @@ class DB( HydrusDB.HydrusDB ):
( namespace, subtag ) = HydrusTags.SplitTag( tag )
if namespace != '':
# "tag != search_tag" here because if a sibling is unnamespaced of a namespaced search, we end up getting all namespaced versions of that unnamespaced tag!!
# e.g. search 'character:samus aran'
# 'samus aran' is a sibling
# we hence search anything:samus aran!
if namespace != '' or tag != search_tag:
if not self._TagExists( tag ):
@ -6664,10 +6879,16 @@ class DB( HydrusDB.HydrusDB ):
db_path = os.path.join( self._db_dir, self._db_filenames[ name ] )
if HydrusDB.CanVacuum( db_path, stop_time = stop_time ):
try:
possible_due_names.add( name )
HydrusDB.CheckCanVacuum( db_path, stop_time = stop_time )
except Exception as e:
continue
possible_due_names.add( name )
possible_due_names = list( possible_due_names )
@ -8073,7 +8294,7 @@ class DB( HydrusDB.HydrusDB ):
service_keys_to_statuses_to_pairs = collections.defaultdict( HydrusData.default_dict_set )
for ( service_id, statuses_and_pair_ids ) in list(service_ids_to_statuses_and_pair_ids.items()):
for ( service_id, statuses_and_pair_ids ) in list( service_ids_to_statuses_and_pair_ids.items() ):
try:
@ -8108,47 +8329,6 @@ class DB( HydrusDB.HydrusDB ):
def _GetTagSiblingIds( self, service_id, tag_ids ):
search_tag_ids = set( tag_ids )
searched_tag_ids = set()
sibling_tag_ids = set()
if service_id == self._combined_tag_service_id:
service_predicate = ''
else:
service_predicate = ' AND service_id = ' + str( service_id )
while len( search_tag_ids ) > 0:
with HydrusDB.TemporaryIntegerTable( self._c, search_tag_ids, 'tag_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
good_select = 'SELECT good_tag_id FROM tag_siblings, {} ON ( bad_tag_id = tag_id );'.format( temp_table_name )
bad_select = 'SELECT bad_tag_id FROM tag_siblings, {} ON ( good_tag_id = tag_id );'.format( temp_table_name )
goods = self._STS( self._c.execute( good_select ) )
bads = self._STS( self._c.execute( bad_select ) )
searched_tag_ids.update( search_tag_ids )
# ids are new if we have not seen them before
new_sibling_tag_ids = goods.union( bads ).difference( searched_tag_ids )
search_tag_ids = new_sibling_tag_ids
sibling_tag_ids.update( new_sibling_tag_ids )
return sibling_tag_ids
def _GetText( self, text_id ):
result = self._c.execute( 'SELECT text FROM texts WHERE text_id = ?;', ( text_id, ) ).fetchone()
@ -9899,18 +10079,30 @@ class DB( HydrusDB.HydrusDB ):
def _PopulateHashIdsToHashesCache( self, hash_ids, exception_on_error = False ):
if len( self._hash_ids_to_hashes_cache ) > 25000:
if len( self._hash_ids_to_hashes_cache ) > 100000:
self._hash_ids_to_hashes_cache = {}
uncached_hash_ids = [ hash_id for hash_id in hash_ids if hash_id not in self._hash_ids_to_hashes_cache ]
uncached_hash_ids = { hash_id for hash_id in hash_ids if hash_id not in self._hash_ids_to_hashes_cache }
if len( uncached_hash_ids ) > 0:
pubbed_error = False
uncached_hash_ids_to_hashes = dict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', uncached_hash_ids ) )
if len( uncached_hash_ids ) > 100:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_hash_ids, 'hash_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
uncached_hash_ids_to_hashes = dict( self._c.execute( 'SELECT hash_id, hash FROM hashes NATURAL JOIN {};'.format( temp_table_name ) ) )
else:
uncached_hash_ids_to_hashes = dict( self._ExecuteManySelectSingleParam( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', uncached_hash_ids ) )
if len( uncached_hash_ids_to_hashes ) < len( uncached_hash_ids ):
@ -9945,27 +10137,53 @@ class DB( HydrusDB.HydrusDB ):
def _PopulateTagIdsToTagsCache( self, tag_ids ):
if len( self._tag_ids_to_tags_cache ) > 25000:
if len( self._tag_ids_to_tags_cache ) > 100000:
self._tag_ids_to_tags_cache = {}
uncached_tag_ids = [ tag_id for tag_id in tag_ids if tag_id not in self._tag_ids_to_tags_cache ]
uncached_tag_ids = { tag_id for tag_id in tag_ids if tag_id not in self._tag_ids_to_tags_cache }
if len( uncached_tag_ids ) > 0:
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._ExecuteManySelectSingleParam( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', uncached_tag_ids ) }
if len( uncached_tag_ids ) > 100:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._c.execute( 'SELECT tag_id, tag FROM local_tags_cache NATURAL JOIN {};'.format( temp_table_name ) ) }
else:
local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._ExecuteManySelectSingleParam( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', uncached_tag_ids ) }
self._tag_ids_to_tags_cache.update( local_uncached_tag_ids_to_tags )
uncached_tag_ids = [ tag_id for tag_id in tag_ids if tag_id not in self._tag_ids_to_tags_cache ]
uncached_tag_ids = { tag_id for tag_id in uncached_tag_ids if tag_id not in self._tag_ids_to_tags_cache }
if len( uncached_tag_ids ) > 0:
select_statement = 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;'
uncached_tag_ids_to_tags = { tag_id : HydrusTags.CombineTag( namespace, subtag ) for ( tag_id, namespace, subtag ) in self._ExecuteManySelectSingleParam( select_statement, uncached_tag_ids ) }
if len( uncached_tag_ids ) > 100:
with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
self._AnalyzeTempTable( temp_table_name )
select_statement = 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags NATURAL JOIN {};'.format( temp_table_name )
uncached_tag_ids_to_tags = { tag_id : HydrusTags.CombineTag( namespace, subtag ) for ( tag_id, namespace, subtag ) in self._c.execute( select_statement ) }
else:
select_statement = 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;'
uncached_tag_ids_to_tags = { tag_id : HydrusTags.CombineTag( namespace, subtag ) for ( tag_id, namespace, subtag ) in self._ExecuteManySelectSingleParam( select_statement, uncached_tag_ids ) }
if len( uncached_tag_ids_to_tags ) < len( uncached_tag_ids ):
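Both cache-population methods above now use the same pattern: below a cutoff of roughly 100 uncached ids, run a per-id parameterised SELECT; above it, push the ids into a temporary integer table and do one join. A hedged sqlite3 sketch of that pattern (the threshold, table, and column names here are illustrative, not the client's real schema):

```python
import sqlite3

def fetch_tags(cursor, tag_ids, threshold=100):
    # Small batches: one parameterised SELECT per id is cheapest.
    if len(tag_ids) <= threshold:
        results = {}
        for tag_id in tag_ids:
            row = cursor.execute('SELECT tag_id, tag FROM tags WHERE tag_id = ?;', (tag_id,)).fetchone()
            if row is not None:
                results[row[0]] = row[1]
        return results
    # Large batches: fill a temporary integer table and join against it once.
    cursor.execute('CREATE TEMPORARY TABLE temp_tag_ids ( tag_id INTEGER PRIMARY KEY );')
    try:
        cursor.executemany('INSERT INTO temp_tag_ids ( tag_id ) VALUES ( ? );', ((i,) for i in tag_ids))
        return dict(cursor.execute('SELECT tag_id, tag FROM tags NATURAL JOIN temp_tag_ids;'))
    finally:
        cursor.execute('DROP TABLE temp_tag_ids;')

db = sqlite3.connect(':memory:')
c = db.cursor()
c.execute('CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, tag TEXT );')
c.executemany('INSERT INTO tags VALUES ( ?, ? );', [(i, 'tag_{}'.format(i)) for i in range(500)])
print(fetch_tags(c, {1, 2, 498}))            # small-batch path
print(len(fetch_tags(c, set(range(300)))))   # temp-table path
```

The tradeoff being sketched: per-id lookups avoid table-creation overhead for a handful of ids, while the temp-table join avoids hundreds of round trips for big batches.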
@ -10454,6 +10672,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, bad_tag_id, good_tag_id, reason_id, new_status ) )
self._CacheTagSiblingsUpdateChains( service_id, { bad_tag_id, good_tag_id } )
notify_new_pending = True
elif action in ( HC.CONTENT_UPDATE_RESCIND_PEND, HC.CONTENT_UPDATE_RESCIND_PETITION ):
@ -10480,6 +10700,8 @@ class DB( HydrusDB.HydrusDB ):
self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( service_id, bad_tag_id, deletee_status ) )
self._CacheTagSiblingsUpdateChains( service_id, { bad_tag_id } )
notify_new_pending = True
@ -11033,24 +11255,19 @@ class DB( HydrusDB.HydrusDB ):
return result
def _RegenerateACCache( self ):
def _RegenerateTagMappingsCache( self ):
job_key = ClientThreading.JobKey( cancellable = True )
try:
job_key.SetVariable( 'popup_title', 'regenerating autocomplete cache' )
job_key.SetVariable( 'popup_title', 'regenerating tag mappings cache' )
self._controller.pub( 'modal_message', job_key )
tag_service_ids = self._GetServiceIds( HC.REAL_TAG_SERVICES )
file_service_ids = self._GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES )
message = 'generating local tag cache'
job_key.SetVariable( 'popup_text_1', message )
self._controller.pub( 'splash_set_status_subtext', message )
time.sleep( 0.01 )
for ( file_service_id, tag_service_id ) in itertools.product( file_service_ids, tag_service_ids ):
@ -11091,6 +11308,11 @@ class DB( HydrusDB.HydrusDB ):
self._CacheCombinedFilesMappingsGenerate( tag_service_id )
message = 'generating local tag cache'
job_key.SetVariable( 'popup_text_1', message )
self._controller.pub( 'splash_set_status_subtext', message )
self._CacheLocalTagIdsGenerate()
finally:
@ -11103,6 +11325,53 @@ class DB( HydrusDB.HydrusDB ):
def _RegenerateTagSiblingsCache( self ):
job_key = ClientThreading.JobKey( cancellable = True )
try:
job_key.SetVariable( 'popup_title', 'regenerating tag siblings cache' )
self._controller.pub( 'modal_message', job_key )
tag_service_ids = self._GetServiceIds( HC.REAL_TAG_SERVICES )
time.sleep( 0.01 )
for tag_service_id in tag_service_ids:
if job_key.IsCancelled():
break
message = 'generating specific tag siblings cache {}'.format( tag_service_id )
job_key.SetVariable( 'popup_text_1', message )
self._controller.pub( 'splash_set_status_subtext', message )
time.sleep( 0.01 )
self._CacheTagSiblingsLookupDrop( tag_service_id )
self._CacheTagSiblingsLookupGenerate( tag_service_id )
self._CacheTagSiblingsLookupDrop( self._combined_tag_service_id )
self._CacheTagSiblingsLookupGenerate( self._combined_tag_service_id )
finally:
job_key.SetVariable( 'popup_text_1', 'done!' )
job_key.Finish()
job_key.Delete( 5 )
def _RelocateClientFiles( self, prefix, source, dest ):
full_source = os.path.join( source, prefix )
@ -11214,6 +11483,48 @@ class DB( HydrusDB.HydrusDB ):
self._CreateIndex( 'external_master.local_hashes', [ 'sha512' ] )
( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone()
if version >= 392:
# tag sibling caches
existing_cache_tables = self._STS( self._c.execute( 'SELECT name FROM external_caches.sqlite_master WHERE type = ?;', ( 'table', ) ) )
tag_sibling_cache_service_ids = list( self._GetServiceIds( HC.REAL_TAG_SERVICES ) )
tag_sibling_cache_service_ids.append( self._combined_tag_service_id )
missing_tag_sibling_cache_tables = []
for tag_service_id in tag_sibling_cache_service_ids:
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( tag_service_id )
if cache_tag_siblings_lookup_table_name.split( '.' )[1] not in existing_cache_tables:
self._CacheTagSiblingsLookupGenerate( tag_service_id )
missing_tag_sibling_cache_tables.append( cache_tag_siblings_lookup_table_name )
if len( missing_tag_sibling_cache_tables ) > 0:
missing_tag_sibling_cache_tables.sort()
message = 'On boot, some important tag sibling cache tables were missing! This could be due to the entire \'caches\' database file being missing or some other problem. All of this data can be regenerated. The exact missing tables were:'
message += os.linesep * 2
message += os.linesep.join( missing_tag_sibling_cache_tables )
message += os.linesep * 2
message += 'If you wish, click ok on this message and the client will recreate and repopulate these tables with the correct data. But if you want to solve this problem otherwise, kill the hydrus process now.'
message += os.linesep * 2
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
self._controller.CallBlockingToQt( self._controller.app, QW.QMessageBox.warning, None, 'Warning', message )
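The boot repair check above works by reading table names out of `sqlite_master` and regenerating anything expected that is not actually present. A minimal sqlite3 sketch of that detect-and-recreate idea (the table names and schema here are hypothetical stand-ins for the real sibling lookup cache tables):

```python
import sqlite3

def ensure_tables(db, expected, create_sql_template):
    # Read the names of every table that actually exists in this db file...
    existing = {name for (name,) in db.execute("SELECT name FROM sqlite_master WHERE type = ?;", ('table',))}
    # ...then recreate any expected table that is missing, reporting what was lost.
    missing = sorted(set(expected) - existing)
    for name in missing:
        db.execute(create_sql_template.format(name))
    return missing

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE tag_siblings_lookup_cache_1 ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );')
expected = ['tag_siblings_lookup_cache_1', 'tag_siblings_lookup_cache_2']
create_sql = 'CREATE TABLE {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'
missing = ensure_tables(db, expected, create_sql)
print(missing)  # ['tag_siblings_lookup_cache_2']
```

A second call returns an empty list, which is the healthy-boot case where the warning dialog above never fires.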
# mappings
existing_mapping_tables = self._STS( self._c.execute( 'SELECT name FROM external_mappings.sqlite_master WHERE type = ?;', ( 'table', ) ) )
@ -11315,7 +11626,7 @@ class DB( HydrusDB.HydrusDB ):
self._controller.CallBlockingToQt( self._controller.app, QW.QMessageBox.warning, None, 'Warning', message )
self._RegenerateACCache()
self._RegenerateTagMappingsCache()
#
@ -13960,6 +14271,47 @@ class DB( HydrusDB.HydrusDB ):
if version == 391:
tag_service_ids = self._GetServiceIds( HC.REAL_TAG_SERVICES )
for tag_service_id in tag_service_ids:
self._CacheTagSiblingsLookupDrop( tag_service_id )
self._CacheTagSiblingsLookupGenerate( tag_service_id )
self._CacheTagSiblingsLookupDrop( self._combined_tag_service_id )
self._CacheTagSiblingsLookupGenerate( self._combined_tag_service_id )
try:
domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
domain_manager.Initialise()
#
domain_manager.OverwriteDefaultParsers( [ 'e621 file page parser', 'sankaku file page parser' ] )
#
domain_manager.TryToLinkURLClassesAndParsers()
#
self._SetJSONDump( domain_manager )
except Exception as e:
HydrusData.PrintException( e )
message = 'Trying to update some parsers failed! Please let hydrus dev know!'
self.pub_initial_message( message )
self._controller.pub( 'splash_set_title_text', 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@ -14410,31 +14762,35 @@ class DB( HydrusDB.HydrusDB ):
db_path = os.path.join( self._db_dir, self._db_filenames[ name ] )
if HydrusDB.CanVacuum( db_path, stop_time = stop_time ):
try:
if not job_key_pubbed:
self._controller.pub( 'modal_message', job_key )
job_key_pubbed = True
HydrusDB.CheckCanVacuum( db_path )
self._controller.pub( 'splash_set_status_text', 'vacuuming ' + name )
job_key.SetVariable( 'popup_text_1', 'vacuuming ' + name )
except Exception as e:
started = HydrusData.GetNowPrecise()
HydrusData.Print( 'Cannot vacuum "{}": {}'.format( db_path, e ) )
HydrusDB.VacuumDB( db_path )
continue
time_took = HydrusData.GetNowPrecise() - started
if not job_key_pubbed:
HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
self._controller.pub( 'modal_message', job_key )
else:
HydrusData.Print( 'Could not vacuum ' + db_path + ' (probably due to limited disk space on db or system drive).' )
job_key_pubbed = True
self._controller.pub( 'splash_set_status_text', 'vacuuming ' + name )
job_key.SetVariable( 'popup_text_1', 'vacuuming ' + name )
started = HydrusData.GetNowPrecise()
HydrusDB.VacuumDB( db_path )
time_took = HydrusData.GetNowPrecise() - started
HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
names_done.append( name )
except Exception as e:
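The reworked vacuum loop above checks that a vacuum is possible, runs it, and times it with precise timestamps for the log. A self-contained sketch of the time-and-report part, using a throwaway database file (stdlib `time.perf_counter` standing in for `HydrusData.GetNowPrecise`):

```python
import os
import sqlite3
import tempfile
import time

def vacuum_and_time(db_path):
    # VACUUM rebuilds the database file, reclaiming free pages; time it so
    # the caller can log how long each file took, as the loop above does.
    started = time.perf_counter()
    db = sqlite3.connect(db_path)
    db.execute('VACUUM;')
    db.close()
    return time.perf_counter() - started

with tempfile.TemporaryDirectory() as d:
    db_path = os.path.join(d, 'client.db')
    db = sqlite3.connect(db_path)
    db.execute('CREATE TABLE hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB );')
    db.executemany('INSERT INTO hashes VALUES ( ?, ? );', [(i, b'\x00' * 64) for i in range(1000)])
    db.execute('DELETE FROM hashes WHERE hash_id > 10;')  # leave free pages behind
    db.commit()
    db.close()
    time_took = vacuum_and_time(db_path)
    print('Vacuumed {} in {:.3f}s'.format(db_path, time_took))
```

One detail the real loop has to care about: VACUUM cannot run inside an open transaction, which is why the sketch opens a fresh connection for it.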
@ -14523,8 +14879,9 @@ class DB( HydrusDB.HydrusDB ):
elif action == 'process_repository_content': result = self._ProcessRepositoryContent( *args, **kwargs )
elif action == 'process_repository_definitions': result = self._ProcessRepositoryDefinitions( *args, **kwargs )
elif action == 'push_recent_tags': self._PushRecentTags( *args, **kwargs )
elif action == 'regenerate_ac_cache': self._RegenerateACCache( *args, **kwargs )
elif action == 'regenerate_similar_files': self._PHashesRegenerateTree( *args, **kwargs )
elif action == 'regenerate_tag_mappings_cache': self._RegenerateTagMappingsCache( *args, **kwargs )
elif action == 'regenerate_tag_siblings_cache': self._RegenerateTagSiblingsCache( *args, **kwargs )
elif action == 'repopulate_fts_cache': self._RepopulateAndUpdateFTSTags( *args, **kwargs )
elif action == 'relocate_client_files': self._RelocateClientFiles( *args, **kwargs )
elif action == 'remove_alternates_member': self._DuplicatesRemoveAlternateMemberFromHashes( *args, **kwargs )
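The dispatch above routes write-action names like `regenerate_tag_siblings_cache` to DB methods through a long `elif` chain. The same routing can be sketched as a dict of callables; the action names and handler bodies below are hypothetical stand-ins, not the client's real methods:

```python
# A dict-of-callables version of the action routing above; the handlers are
# hypothetical stand-ins for the real DB methods.

class Dispatcher:
    def __init__(self):
        self._handlers = {
            'regenerate_tag_siblings_cache': self._regenerate_tag_siblings_cache,
            'push_recent_tags': self._push_recent_tags,
        }

    def _regenerate_tag_siblings_cache(self, service_id):
        return 'regenerated siblings for service {}'.format(service_id)

    def _push_recent_tags(self, *tags):
        return list(tags)

    def write(self, action, *args, **kwargs):
        # unknown actions fail loudly rather than silently doing nothing
        if action not in self._handlers:
            raise KeyError('Unknown write action: {}'.format(action))
        return self._handlers[action](*args, **kwargs)

d = Dispatcher()
print(d.write('regenerate_tag_siblings_cache', 5))
```

The `elif` chain keeps every action on one greppable line, while the dict form makes the action-name-to-method mapping data rather than control flow; both are reasonable here.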

View File

@ -373,6 +373,8 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
ClientGUITopLevelWindows.MainFrameThatResizes.__init__( self, None, title, 'main_gui' )
self._currently_minimised_to_system_tray = False
bandwidth_width = ClientGUIFunctions.ConvertTextToPixelWidth( self, 17 )
idle_width = ClientGUIFunctions.ConvertTextToPixelWidth( self, 6 )
hydrus_busy_width = ClientGUIFunctions.ConvertTextToPixelWidth( self, 11 )
@ -453,6 +455,8 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
ClientGUITopLevelWindows.SetInitialTLWSizeAndPosition( self, self._frame_key )
self._was_maximised = self.isMaximized()
self._InitialiseMenubar()
self._RefreshStatusBar()
@ -483,11 +487,13 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
if self._controller.new_options.GetBoolean( 'start_client_in_system_tray' ):
self._currently_minimised_to_system_tray = True
QW.QApplication.instance().setQuitOnLastWindowClosed( False )
self.hide()
self._system_tray_hidden_tlws.append( self )
self._system_tray_hidden_tlws.append( ( self.isMaximized(), self ) )
else:
@ -1120,7 +1126,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
def _CurrentlyMinimisedOrHidden( self ):
return self.isMinimized() or not self.isVisible()
return self.isMinimized() or self._currently_minimised_to_system_tray
def _DebugFetchAURL( self ):
@ -1652,11 +1658,11 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
def _FlipShowHideWholeUI( self ):
if self.isVisible():
if not self._currently_minimised_to_system_tray:
QW.QApplication.instance().setQuitOnLastWindowClosed( False )
visible_tlws = [ tlw for tlw in QW.QApplication.topLevelWidgets() if tlw.isVisible() ]
visible_tlws = [ tlw for tlw in QW.QApplication.topLevelWidgets() if tlw.isVisible() or tlw.isMinimized() ]
visible_dialogs = [ tlw for tlw in visible_tlws if isinstance( tlw, QW.QDialog ) ]
@ -1687,12 +1693,12 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
tlw.hide()
self._system_tray_hidden_tlws.append( tlw )
self._system_tray_hidden_tlws.append( ( tlw.isMaximized(), tlw ) )
else:
for tlw in self._system_tray_hidden_tlws:
for ( was_maximised, tlw ) in self._system_tray_hidden_tlws:
if QP.isValid( tlw ):
@ -1719,14 +1725,13 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
self._system_tray_hidden_tlws = []
if self.isMinimized():
self.showNormal()
self.RestoreOrActivateWindow()
QW.QApplication.instance().setQuitOnLastWindowClosed( True )
self._currently_minimised_to_system_tray = not self._currently_minimised_to_system_tray
self._UpdateSystemTrayIcon()
@ -2975,9 +2980,9 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
self._statusbar.SetStatusText( db_status, 5 )
def _RegenerateACCache( self ):
def _RegenerateTagMappingsCache( self ):
message = 'This will delete and then recreate the entire autocomplete cache. This is useful if miscounting has somehow occurred.'
message = 'This will delete and then recreate the entire tag mappings cache, which is used for tag searching, loading, and autocomplete counts. This is useful if miscounting has somehow occurred.'
message += os.linesep * 2
message += 'If you have a lot of tags and files, it can take a long time, during which the gui may hang.'
message += os.linesep * 2
@ -2987,23 +2992,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
if result == QW.QDialog.Accepted:
self._controller.Write( 'regenerate_ac_cache' )
def _RegenerateSimilarFilesTree( self ):
message = 'This will delete and then recreate the similar files search tree. This is useful if it has somehow become unbalanced and similar files searches are running slow.'
message += os.linesep * 2
message += 'If you have a lot of files, it can take a little while, during which the gui may hang.'
message += os.linesep * 2
message += 'If you do not have a specific reason to run this, it is pointless.'
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'do it', no_label = 'forget it', check_for_cancelled = True )
if result == QW.QDialog.Accepted:
self._controller.Write( 'regenerate_similar_files' )
self._controller.Write( 'regenerate_tag_mappings_cache' )
@ -3033,6 +3022,36 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
def _RegenerateSimilarFilesTree( self ):
message = 'This will delete and then recreate the similar files search tree. This is useful if it has somehow become unbalanced and similar files searches are running slow.'
message += os.linesep * 2
message += 'If you have a lot of files, it can take a little while, during which the gui may hang.'
message += os.linesep * 2
message += 'If you do not have a specific reason to run this, it is pointless.'
( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'do it', no_label = 'forget it', check_for_cancelled = True )
if result == QW.QDialog.Accepted:
self._controller.Write( 'regenerate_similar_files' )
def _RegenerateTagSiblingsCache( self ):
message = 'This will delete and then recreate the tag siblings cache. This is useful if it has become damaged or otherwise desynchronised.'
message += os.linesep * 2
message += 'If you do not have a specific reason to run this, it is pointless.'
result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'do it', no_label = 'forget it' )
if result == QW.QDialog.Accepted:
self._controller.Write( 'regenerate_tag_siblings_cache' )
def _RestoreSplitterPositions( self ):
self._controller.pub( 'set_splitter_positions', HC.options[ 'hpos' ], HC.options[ 'vpos' ] )
@ -3517,6 +3536,10 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
HG.db_profile_mode = not HG.db_profile_mode
elif name == 'db_ui_hang_relief_mode':
HG.db_ui_hang_relief_mode = not HG.db_ui_hang_relief_mode
elif name == 'file_import_report_mode':
HG.file_import_report_mode = not HG.file_import_report_mode
@ -3695,7 +3718,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
need_system_tray = always_show_system_tray_icon
if not self.isVisible():
if self._currently_minimised_to_system_tray:
need_system_tray = True
@ -3728,7 +3751,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
if self._have_system_tray_icon:
self._system_tray_icon.SetShouldAlwaysShow( always_show_system_tray_icon )
self._system_tray_icon.SetUIIsCurrentlyShown( self.isVisible() )
self._system_tray_icon.SetUIIsCurrentlyShown( not self._currently_minimised_to_system_tray )
self._system_tray_icon.SetNetworkTrafficPaused( self._controller.new_options.GetBoolean( 'pause_all_new_network_traffic' ) )
self._system_tray_icon.SetSubscriptionsPaused( HC.options[ 'pause_subs_sync' ] )
@ -3981,11 +4004,13 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
self._notebook.ShowMenuFromScreenPosition( screen_position )
def EventIconize( self, event ):
def EventIconize( self, event: QG.QWindowStateChangeEvent ):
if self.isMinimized():
if self.isVisible() and self._controller.new_options.GetBoolean( 'minimise_client_to_system_tray' ):
self._was_maximised = event.oldState() & QC.Qt.WindowMaximized
if not self._currently_minimised_to_system_tray and self._controller.new_options.GetBoolean( 'minimise_client_to_system_tray' ):
self._FlipShowHideWholeUI()
@ -4033,7 +4058,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
continue
if self._CurrentlyMinimisedOrHidden():
if self._currently_minimised_to_system_tray:
continue
@ -4259,7 +4284,8 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
submenu = QW.QMenu( menu )
ClientGUIMenus.AppendMenuItem( submenu, 'autocomplete cache', 'Delete and recreate the tag autocomplete cache, fixing any miscounts.', self._RegenerateACCache )
ClientGUIMenus.AppendMenuItem( submenu, 'tag mappings cache', 'Delete and recreate the tag mappings cache, fixing any miscounts.', self._RegenerateTagMappingsCache )
ClientGUIMenus.AppendMenuItem( submenu, 'tag siblings cache', 'Delete and recreate the tag siblings cache.', self._RegenerateTagSiblingsCache )
ClientGUIMenus.AppendMenuItem( submenu, 'repopulate and correct tag text search cache', 'Repopulate the cache hydrus uses for fast tag search.', self._RepopulateFTSCache )
ClientGUIMenus.AppendMenuItem( submenu, 'similar files search tree', 'Delete and recreate the similar files search tree.', self._RegenerateSimilarFilesTree )
@ -4609,6 +4635,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
ClientGUIMenus.AppendMenuItem( gui_actions, 'make a popup in five seconds', 'Throw a delayed popup at the message manager, giving you time to minimise or otherwise alter the client before it arrives.', self._controller.CallLater, 5, HydrusData.ShowText, 'This is a delayed popup message.' )
ClientGUIMenus.AppendMenuItem( gui_actions, 'make a modal popup in five seconds', 'Throw up a delayed modal popup to test with. It will stay alive for five seconds.', self._DebugMakeDelayedModalPopup )
ClientGUIMenus.AppendMenuItem( gui_actions, 'make a new page in five seconds', 'Throw a delayed page at the main notebook, giving you time to minimise or otherwise alter the client before it arrives.', self._controller.CallLater, 5, self._controller.pub, 'new_page_query', CC.LOCAL_FILE_SERVICE_KEY )
ClientGUIMenus.AppendMenuItem( gui_actions, 'refresh pages menu in five seconds', 'Delayed refresh the pages menu, giving you time to minimise or otherwise alter the client before it arrives.', self._controller.CallLater, 5, self._menu_updater_pages.update )
ClientGUIMenus.AppendMenuItem( gui_actions, 'make a parentless text ctrl dialog', 'Make a parentless text control in a dialog to test some character event catching.', self._DebugMakeParentlessTextCtrl )
ClientGUIMenus.AppendMenuItem( gui_actions, 'force a main gui layout now', 'Tell the gui to relayout--useful to test some gui bootup layout issues.', self.adjustSize )
ClientGUIMenus.AppendMenuItem( gui_actions, 'save \'last session\' gui session', 'Make an immediate save of the \'last session\' gui session. Mostly for testing crashes, where last session is not saved correctly.', self.ProposeSaveGUISession, 'last session' )
@ -4618,23 +4645,29 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
data_actions = QW.QMenu( debug )
ClientGUIMenus.AppendMenuItem( data_actions, 'run fast memory maintenance', 'Tell all the fast caches to maintain themselves.', self._controller.MaintainMemoryFast )
ClientGUIMenus.AppendMenuItem( data_actions, 'run slow memory maintenance', 'Tell all the slow caches to maintain themselves.', self._controller.MaintainMemorySlow )
ClientGUIMenus.AppendMenuCheckItem( data_actions, 'db ui-hang relief mode', 'Have UI-synchronised database jobs process pending Qt events while they wait.', HG.db_ui_hang_relief_mode, self._SwitchBoolean, 'db_ui_hang_relief_mode' )
ClientGUIMenus.AppendMenuItem( data_actions, 'review threads', 'Show current threads and what they are doing.', self._ReviewThreads )
ClientGUIMenus.AppendMenuItem( data_actions, 'show scheduled jobs', 'Print some information about the currently scheduled jobs log.', self._DebugShowScheduledJobs )
ClientGUIMenus.AppendMenuItem( data_actions, 'subscription manager snapshot', 'Have the subscription system show what it is doing.', self._controller.subscriptions_manager.ShowSnapshot )
ClientGUIMenus.AppendMenuItem( data_actions, 'flush log', 'Command the log to write any buffered contents to hard drive.', HydrusData.DebugPrint, 'Flushing log' )
ClientGUIMenus.AppendMenuItem( data_actions, 'print garbage', 'Print some information about the python garbage to the log.', self._DebugPrintGarbage )
ClientGUIMenus.AppendMenuItem( data_actions, 'take garbage snapshot', 'Capture current garbage object counts.', self._DebugTakeGarbageSnapshot )
ClientGUIMenus.AppendMenuItem( data_actions, 'show garbage snapshot changes', 'Show object count differences from the last snapshot.', self._DebugShowGarbageDifferences )
ClientGUIMenus.AppendMenuItem( data_actions, 'enable truncated image loading', 'Enable the truncated image loading to test out broken jpegs.', self._EnableLoadTruncatedImages )
ClientGUIMenus.AppendMenuItem( data_actions, 'clear image rendering cache', 'Tell the image rendering system to forget all current images. This will often free up a bunch of memory immediately.', self._controller.ClearCaches )
ClientGUIMenus.AppendMenuItem( data_actions, 'clear thumbnail cache', 'Tell the thumbnail cache to forget everything and redraw all current thumbs.', self._controller.pub, 'reset_thumbnail_cache' )
ClientGUIMenus.AppendMenuItem( data_actions, 'clear db service info cache', 'Delete all cached service info like total number of mappings or files, in case it has become desynchronised. Some parts of the gui may be laggy immediately after this as these numbers are recalculated.', self._DeleteServiceInfo )
ClientGUIMenus.AppendMenuItem( data_actions, 'load whole db in disk cache', 'Contiguously read as much of the db as will fit into memory. This will massively speed up any subsequent big job.', self._controller.CallToThread, self._controller.Read, 'load_into_disk_cache' )
ClientGUIMenus.AppendMenu( debug, data_actions, 'data actions' )
memory_actions = QW.QMenu( debug )
ClientGUIMenus.AppendMenuItem( memory_actions, 'run fast memory maintenance', 'Tell all the fast caches to maintain themselves.', self._controller.MaintainMemoryFast )
ClientGUIMenus.AppendMenuItem( memory_actions, 'run slow memory maintenance', 'Tell all the slow caches to maintain themselves.', self._controller.MaintainMemorySlow )
ClientGUIMenus.AppendMenuItem( memory_actions, 'clear image rendering cache', 'Tell the image rendering system to forget all current images. This will often free up a bunch of memory immediately.', self._controller.ClearCaches )
ClientGUIMenus.AppendMenuItem( memory_actions, 'clear thumbnail cache', 'Tell the thumbnail cache to forget everything and redraw all current thumbs.', self._controller.pub, 'reset_thumbnail_cache' )
ClientGUIMenus.AppendMenuItem( memory_actions, 'clear db service info cache', 'Delete all cached service info like total number of mappings or files, in case it has become desynchronised. Some parts of the gui may be laggy immediately after this as these numbers are recalculated.', self._DeleteServiceInfo )
ClientGUIMenus.AppendMenuItem( memory_actions, 'print garbage', 'Print some information about the python garbage to the log.', self._DebugPrintGarbage )
ClientGUIMenus.AppendMenuItem( memory_actions, 'take garbage snapshot', 'Capture current garbage object counts.', self._DebugTakeGarbageSnapshot )
ClientGUIMenus.AppendMenuItem( memory_actions, 'show garbage snapshot changes', 'Show object count differences from the last snapshot.', self._DebugShowGarbageDifferences )
ClientGUIMenus.AppendMenuItem( memory_actions, 'load whole db in disk cache', 'Contiguously read as much of the db as will fit into memory. This will massively speed up any subsequent big job.', self._controller.CallToThread, self._controller.Read, 'load_into_disk_cache' )
ClientGUIMenus.AppendMenu( debug, memory_actions, 'memory actions' )
network_actions = QW.QMenu( debug )
ClientGUIMenus.AppendMenuItem( network_actions, 'fetch a url', 'Fetch a URL using the network engine as per normal.', self._DebugFetchAURL )
@ -5086,7 +5119,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
mpv_widget = self._persistent_mpv_widgets.pop()
if mpv_widget.parent() is self:
if mpv_widget.parentWidget() is self:
mpv_widget.setParent( parent )
@ -5122,7 +5155,13 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def HideToSystemTray( self ):
if self.isVisible() and ClientGUISystemTray.SystemTrayAvailable() and not ( not HC.PLATFORM_WINDOWS and not HG.client_controller.new_options.GetBoolean( 'advanced_mode' ) ):
shown = not self._currently_minimised_to_system_tray
windows_or_advanced_mode = HC.PLATFORM_WINDOWS or HG.client_controller.new_options.GetBoolean( 'advanced_mode' )
good_to_go = ClientGUISystemTray.SystemTrayAvailable() and windows_or_advanced_mode
if shown and good_to_go:
self._FlipShowHideWholeUI()
@ -5391,7 +5430,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
if page is not None:
parent = page.parentWidget()
parent = page.GetParentNotebook()
parent.RefreshAllPages()
@ -5786,7 +5825,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
continue
if self._CurrentlyMinimisedOrHidden():
if tlw == self and self._CurrentlyMinimisedOrHidden():
continue
@ -5909,7 +5948,14 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
if self.isMinimized():
self.showNormal()
if self._was_maximised:
self.showMaximized()
else:
self.showNormal()
else:
@ -5935,7 +5981,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
#
if self.isVisible():
if self._have_shown_once:
if self._new_options.GetBoolean( 'saving_sash_positions_on_exit' ):

View File

@ -1669,6 +1669,8 @@ class CanvasPanel( Canvas ):
self._hidden_page_current_media = None
self._media_container.launchMediaViewer.connect( self.LaunchMediaViewer )
HG.client_controller.sub( self, 'ProcessContentUpdates', 'content_updates_gui' )

View File

@ -51,6 +51,8 @@ def ShouldHaveAnimationBar( media, show_action ):
class Animation( QW.QWidget ):
launchMediaViewer = QC.Signal()
def __init__( self, parent, canvas_type ):
QW.QWidget.__init__( self, parent )
@ -301,7 +303,7 @@ class Animation( QW.QWidget ):
elif action == 'launch_media_viewer' and self._canvas_type == ClientGUICommon.CANVAS_PREVIEW:
self.parent().LaunchMediaViewer()
self.launchMediaViewer.emit()
else:
@ -862,6 +864,8 @@ class MediaContainerDragClickReportingFilter( QC.QObject ):
class MediaContainer( QW.QWidget ):
launchMediaViewer = QC.Signal()
def __init__( self, parent, canvas_type ):
QW.QWidget.__init__( self, parent )
@ -888,7 +892,7 @@ class MediaContainer( QW.QWidget ):
self.setMouseTracking( True )
self._drag_click_reporting_filter = MediaContainerDragClickReportingFilter( self.parent() )
self._drag_click_reporting_filter = MediaContainerDragClickReportingFilter( self.parentWidget() )
self._animation_window = Animation( self, self._canvas_type )
self._animation_bar = AnimationBar( self )
@ -913,12 +917,16 @@ class MediaContainer( QW.QWidget ):
if isinstance( media_window, ( Animation, StaticImage ) ):
media_window.launchMediaViewer.disconnect( self.launchMediaViewer )
media_window.ClearMedia()
media_window.hide()
elif isinstance( media_window, ClientGUIMPV.mpvWidget ):
media_window.launchMediaViewer.disconnect( self.launchMediaViewer )
media_window.ClearMedia()
media_window.hide()
@ -1038,6 +1046,15 @@ class MediaContainer( QW.QWidget ):
old_media_window.removeEventFilter( self._drag_click_reporting_filter )
launch_media_viewer_classes = ( Animation, ClientGUIMPV.mpvWidget, StaticImage )
if isinstance( self._media_window, launch_media_viewer_classes ):
self._media_window.launchMediaViewer.connect( self.launchMediaViewer )
self._media_window
if old_media_window is not None and destroy_old_media_window:
@ -1181,18 +1198,6 @@ class MediaContainer( QW.QWidget ):
def LaunchMediaViewer( self ):
parent = self.parent()
from . import ClientGUICanvas
if isinstance( parent, ClientGUICanvas.CanvasPanel ):
parent.LaunchMediaViewer()
def MouseIsNearAnimationBar( self ):
if self._media is None:
@ -1562,6 +1567,8 @@ class OpenExternallyPanel( QW.QWidget ):
class StaticImage( QW.QWidget ):
launchMediaViewer = QC.Signal()
def __init__( self, parent, canvas_type ):
QW.QWidget.__init__( self, parent )
@ -1700,7 +1707,7 @@ class StaticImage( QW.QWidget ):
elif action == 'launch_media_viewer' and self._canvas_type == ClientGUICommon.CANVAS_PREVIEW:
self.parent().LaunchMediaViewer()
self.launchMediaViewer.emit()
else:

View File

@ -1050,7 +1050,7 @@ class EditStringMatchPanel( ClientGUIScrolledPanels.EditPanel ):
rows.append( ( 'match type: ', self._match_type ) )
rows.append( ( 'match text: ', self._match_value_text_input ) )
rows.append( ( 'match value (character set): ', self._match_value_flexible_input ) )
rows.append( ( 'minumum allowed number of characters: ', self._min_chars ) )
rows.append( ( 'minimum allowed number of characters: ', self._min_chars ) )
rows.append( ( 'maximum allowed number of characters: ', self._max_chars ) )
rows.append( ( 'example string: ', self._example_string ) )


@ -1464,8 +1464,6 @@ class ListBox( QW.QScrollArea ):
def paintEvent( self, event ):
self._parent._SetVirtualSize()
painter = QG.QPainter( self )
self._parent._Redraw( painter )
@ -2336,7 +2334,7 @@ class ListBoxTagsAC( ListBoxTagsPredicates ):
if self._float_mode:
widget = self.window().parent()
widget = self.window().parentWidget()
else:


@ -68,6 +68,8 @@ def GetClientAPIVersionString():
#Here is an example on how to render into a QOpenGLWidget instead: https://gist.github.com/cosven/b313de2acce1b7e15afda263779c0afc
class mpvWidget( QW.QWidget ):
launchMediaViewer = QC.Signal()
def __init__( self, parent ):
QW.QWidget.__init__( self, parent )
@ -342,7 +344,7 @@ class mpvWidget( QW.QWidget ):
elif action == 'launch_media_viewer' and self._canvas_type == ClientGUICommon.CANVAS_PREVIEW:
self.parent().LaunchMediaViewer()
self.launchMediaViewer.emit()
else:


@ -193,7 +193,7 @@ class VolumeControl( QW.QWidget ):
def DoShowHide( self ):
parent = self.parent()
parent = self.parentWidget()
horizontal_offset = ( self.width() - parent.width() ) // 2


@ -404,6 +404,8 @@ class Page( QW.QSplitter ):
QW.QSplitter.__init__( self, parent )
self._parent_notebook = parent
self._controller = controller
self._page_key = self._controller.AcquirePageKey()
@ -659,6 +661,11 @@ class Page( QW.QSplitter ):
return { self._page_key }
def GetParentNotebook( self ):
return self._parent_notebook
def GetSessionAPIInfoDict( self, is_selected = False ):
root = {}
@ -922,6 +929,8 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
QP.TabWidgetWithDnD.__init__( self, parent )
self._parent_notebook = parent
# this is disabled for now because it seems borked in Qt
if controller.new_options.GetBoolean( 'notebook_tabs_on_left' ):
@ -964,12 +973,15 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
self._controller.pub( 'refresh_page_name', page_widget.GetPageKey() )
if hasattr( source_widget.parentWidget(), 'GetPageKey' ):
source_notebook = source_widget.parentWidget()
if hasattr( source_notebook, 'GetPageKey' ):
self._controller.pub( 'refresh_page_name', source_widget.parentWidget().GetPageKey() )
self._controller.pub( 'refresh_page_name', source_notebook.GetPageKey() )
def _UpdatePreviousPageIndex( self ):
self._previous_page_index = self.currentIndex()
@ -1255,25 +1267,9 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return None
def _GetTopNotebook( self ):
top_notebook = self
parent = top_notebook.parentWidget()
while isinstance( parent, PagesNotebook ):
top_notebook = parent
parent = top_notebook.parentWidget()
return top_notebook
def _MovePage( self, page, dest_notebook, insertion_tab_index, follow_dropped_page = False ):
source_notebook = page.parentWidget().parentWidget() # page.parentWidget() is a QStackedWidget
source_notebook = page.GetParentNotebook()
for ( index, p ) in enumerate( source_notebook._GetPages() ):
@ -1297,6 +1293,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
insertion_tab_index = min( insertion_tab_index, dest_notebook.count() )
dest_notebook.insertTab( insertion_tab_index, page, page.GetName() )
if follow_dropped_page: dest_notebook.setCurrentIndex( insertion_tab_index )
if follow_dropped_page:
@ -1313,7 +1310,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
for page in pages:
if page.parentWidget() != dest_notebook:
if page.GetParentNotebook() != dest_notebook:
self._MovePage( page, dest_notebook, insertion_tab_index )
@ -1577,6 +1574,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
ClientGUIMenus.AppendMenuItem( menu, 'send pages to the right to a new page of pages', 'Make a new page of pages and put all the pages to the right into it.', self._SendRightPagesToNewNotebook, tab_index )
if click_over_page_of_pages and page.count() > 0:
ClientGUIMenus.AppendSeparator( menu )
@ -1697,6 +1695,8 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return
HG.client_controller.app.processEvents()
if load_in_a_page_of_pages:
destination = self.NewPagesNotebook( name = name, give_it_a_blank_page = False)
@ -1708,6 +1708,8 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
page_tuples = session.GetPageTuples()
HG.client_controller.app.processEvents()
destination.AppendSessionPageTuples( page_tuples )
job_key.Delete()
@ -2104,6 +2106,11 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return page_keys
def GetParentNotebook( self ):
return self._parent_notebook
def GetSessionAPIInfoDict( self, is_selected = True ):
current_page = self.currentWidget()


@ -1846,8 +1846,9 @@ class EditContentParserPanel( ClientGUIScrolledPanels.EditPanel ):
self._url_type = ClientGUICommon.BetterChoice( self._urls_panel )
self._url_type.addItem( 'url to download/pursue (file/post url)', HC.URL_TYPE_DESIRED )
self._url_type.addItem( 'url to associate (source url)', HC.URL_TYPE_SOURCE )
self._url_type.addItem( 'next gallery page', HC.URL_TYPE_NEXT )
self._url_type.addItem( 'POST parsers only: url to associate (source url)', HC.URL_TYPE_SOURCE )
self._url_type.addItem( 'GALLERY parsers only: next gallery page (not queued if no post/file urls found)', HC.URL_TYPE_NEXT )
self._url_type.addItem( 'EXPERIMENTAL: GALLERY parsers only: sub-gallery page (is queued even if no post/file urls found)', HC.URL_TYPE_SUB_GALLERY )
self._file_priority = QP.MakeQSpinBox( self._urls_panel, min=0, max=100 )
self._file_priority.setValue( 50 )
@ -1961,7 +1962,7 @@ class EditContentParserPanel( ClientGUIScrolledPanels.EditPanel ):
rows = []
rows.append( ( 'url type: ', self._url_type ) )
rows.append( ( 'file url quality precedence (higher is better): ', self._file_priority ) )
rows.append( ( 'url quality precedence (higher is better): ', self._file_priority ) )
gridbox = ClientGUICommon.WrapInGrid( self._urls_panel, rows )


@ -24,7 +24,7 @@ class ResizingEventFilter( QC.QObject ):
if width_larger or height_larger:
QP.CallAfter( self.parent().WidgetJustSized, width_larger, height_larger )
QP.CallAfter( parent.WidgetJustSized, width_larger, height_larger )


@ -3547,8 +3547,6 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
queries_panel.AddMenuButton( 'quality info', menu_items, enabled_only_on_selection = True )
self._checker_options = ClientGUIImport.CheckerOptionsButton( self._query_panel, checker_options, update_callable = self._CheckerOptionsUpdated )
#
self._file_limits_panel = ClientGUICommon.StaticBox( self, 'file limits' )
@ -3632,6 +3630,7 @@ But if 2 is--and is also perhaps accompanied by many 'could not parse' errors--t
show_downloader_options = True
self._checker_options = ClientGUIImport.CheckerOptionsButton( self, checker_options, update_callable = self._CheckerOptionsUpdated )
self._file_import_options = ClientGUIImport.FileImportOptionsButton( self, file_import_options, show_downloader_options )
self._tag_import_options = ClientGUIImport.TagImportOptionsButton( self, tag_import_options, show_downloader_options, allow_default_selection = True )
@ -3660,7 +3659,6 @@ But if 2 is--and is also perhaps accompanied by many 'could not parse' errors--t
self._query_panel.Add( self._gug_key_and_name, CC.FLAGS_EXPAND_PERPENDICULAR )
self._query_panel.Add( queries_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
self._query_panel.Add( self._checker_options, CC.FLAGS_EXPAND_PERPENDICULAR )
#
@ -3709,6 +3707,7 @@ But if 2 is--and is also perhaps accompanied by many 'could not parse' errors--t
QP.AddToLayout( vbox, self._control_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._file_limits_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._file_presentation_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._checker_options, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._file_import_options, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._tag_import_options, CC.FLAGS_EXPAND_PERPENDICULAR )
@ -5241,7 +5240,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
return
checker_options = ClientDefaults.GetDefaultCheckerOptions( 'artist subscription' )
checker_options = subscriptions[0].GetCheckerOptions()
with ClientGUITopLevelWindows.DialogEdit( self, 'edit check timings' ) as dlg:
@ -5272,7 +5271,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
return
tag_import_options = HG.client_controller.network_engine.domain_manager.GetDefaultTagImportOptionsForPosts()
tag_import_options = subscriptions[0].GetTagImportOptions()
show_downloader_options = True
with ClientGUITopLevelWindows.DialogEdit( self, 'edit tag import options' ) as dlg:
@ -5364,6 +5363,15 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
gridbox = ClientGUICommon.WrapInGrid( default_panel, rows )
if not HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
st = ClientGUICommon.BetterStaticText( default_panel, label = 'Most of the time, you want to rely on the default tag import options!' )
st.setObjectName( 'HydrusWarning' )
default_panel.Add( st, CC.FLAGS_EXPAND_PERPENDICULAR )
default_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
default_panel.Add( self._load_default_options, CC.FLAGS_EXPAND_PERPENDICULAR )


@ -407,7 +407,12 @@ def AncestorShortcutsHandlers( widget: QW.QWidget ):
return shortcuts_handlers
widget = widget.parent()
widget = widget.parentWidget()
if widget is None:
return shortcuts_handlers
while True:
@ -420,7 +425,7 @@ def AncestorShortcutsHandlers( widget: QW.QWidget ):
break
widget = widget.parent()
widget = widget.parentWidget()
if widget is None:


@ -27,6 +27,12 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
self._network_traffic_paused = False
self._subscriptions_paused = False
self._show_hide_menu_item = None
self._network_traffic_menu_item = None
self._subscriptions_paused_menu_item = None
self._just_clicked_to_show = False
png_path = os.path.join( HC.STATIC_DIR, 'hydrus_non-transparent.png' )
self.setIcon( QG.QIcon( png_path ) )
@ -43,19 +49,19 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
new_menu = QW.QMenu( parent_widget )
label = 'hide' if self._ui_is_currently_shown else 'show'
self._show_hide_menu_item = ClientGUIMenus.AppendMenuItem( new_menu, 'show/hide', 'Hide or show the hydrus client', self.flip_show_ui.emit )
ClientGUIMenus.AppendMenuItem( new_menu, label, 'Hide or show the hydrus client', self.flip_show_ui.emit )
self._UpdateShowHideMenuItemLabel()
ClientGUIMenus.AppendSeparator( new_menu )
label = 'unpause network traffic' if self._network_traffic_paused else 'pause network traffic'
self._network_traffic_menu_item = ClientGUIMenus.AppendMenuItem( new_menu, 'network traffic', 'Pause/resume network traffic', self.flip_pause_network_jobs.emit )
ClientGUIMenus.AppendMenuItem( new_menu, label, 'Pause/resume network traffic', self.flip_pause_network_jobs.emit )
self._UpdateNetworkTrafficMenuItemLabel()
label = 'unpause subscriptions' if self._subscriptions_paused else 'pause subscriptions'
self._subscriptions_paused_menu_item = ClientGUIMenus.AppendMenuItem( new_menu, 'subscriptions', 'Pause/resume subscriptions', self.flip_pause_subscription_jobs.emit )
ClientGUIMenus.AppendMenuItem( new_menu, label, 'Pause/resume subscriptions', self.flip_pause_subscription_jobs.emit )
self._UpdateSubscriptionsMenuItemLabel()
ClientGUIMenus.AppendSeparator( new_menu )
@ -73,6 +79,20 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
def _UpdateNetworkTrafficMenuItemLabel( self ):
label = 'unpause network traffic' if self._network_traffic_paused else 'pause network traffic'
self._network_traffic_menu_item.setText( label )
def _UpdateShowHideMenuItemLabel( self ):
label = 'hide' if self._ui_is_currently_shown else 'show'
self._show_hide_menu_item.setText( label )
def _UpdateShowSelf( self ) -> bool:
menu_regenerated = False
@ -96,21 +116,40 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
return menu_regenerated
def _UpdateSubscriptionsMenuItemLabel( self ):
if self._subscriptions_paused_menu_item is not None:
label = 'unpause subscriptions' if self._subscriptions_paused else 'pause subscriptions'
self._subscriptions_paused_menu_item.setText( label )
def _WasActivated( self, activation_reason ):
if activation_reason in ( QW.QSystemTrayIcon.Unknown, QW.QSystemTrayIcon.Trigger ):
if self._ui_is_currently_shown:
self._just_clicked_to_show = False
self.highlight.emit()
else:
self._just_clicked_to_show = True
self.flip_show_ui.emit()
elif activation_reason in ( QW.QSystemTrayIcon.DoubleClick, QW.QSystemTrayIcon.MiddleClick ):
if activation_reason == QW.QSystemTrayIcon.DoubleClick and self._just_clicked_to_show:
return
self.flip_show_ui.emit()
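The `_WasActivated` logic above can be flattened into a pure-python sketch. This is illustrative only: the string `reason` values stand in for Qt's `ActivationReason` enum, and the returned string stands in for the signal that would be emitted.

```python
def handle_tray_activation( reason, state ):
    # sketch of _WasActivated: 'state' is a dict with 'ui_shown' and
    # 'just_clicked_to_show'; returns the name of the action to take
    if reason in ( 'unknown', 'trigger' ):  # a single click
        if state[ 'ui_shown' ]:
            state[ 'just_clicked_to_show' ] = False
            return 'highlight'
        else:
            state[ 'just_clicked_to_show' ] = True
            return 'flip_show_ui'
    elif reason in ( 'double_click', 'middle_click' ):
        # the first click of a double-click already flipped the ui to visible,
        # so flipping again here would immediately hide it--swallow the event
        if reason == 'double_click' and state[ 'just_clicked_to_show' ]:
            return None
        return 'flip_show_ui'

state = { 'ui_shown': False, 'just_clicked_to_show': False }
print( handle_tray_activation( 'trigger', state ) )       # flip_show_ui
print( handle_tray_activation( 'double_click', state ) )  # None--swallowed
```

The `_just_clicked_to_show` flag exists purely to absorb the double-click that Qt delivers after the single-click has already shown the window.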
@ -121,7 +160,7 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
self._network_traffic_paused = network_traffic_paused
self._RegenerateMenu()
self._UpdateNetworkTrafficMenuItemLabel()
@ -131,7 +170,7 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
self._subscriptions_paused = subscriptions_paused
self._RegenerateMenu()
self._UpdateSubscriptionsMenuItemLabel()
@ -145,7 +184,12 @@ class ClientSystemTrayIcon( QW.QSystemTrayIcon ):
if not menu_regenerated:
self._RegenerateMenu()
self._UpdateShowHideMenuItemLabel()
if not self._ui_is_currently_shown:
self._just_clicked_to_show = False


@ -154,40 +154,6 @@ class EditCheckerOptions( ClientGUIScrolledPanels.EditPanel ):
self._flat_check_period_checkbox.clicked.connect( self.EventFlatPeriodCheck )
def _UpdateEnabledControls( self ):
if self._flat_check_period_checkbox.isChecked():
self._reactive_check_panel.hide()
self._static_check_panel.show()
else:
self._reactive_check_panel.show()
self._static_check_panel.hide()
def EventFlatPeriodCheck( self ):
self._UpdateEnabledControls()
def SetValue( self, checker_options ):
( intended_files_per_check, never_faster_than, never_slower_than, death_file_velocity ) = checker_options.ToTuple()
self._intended_files_per_check.setValue( intended_files_per_check )
self._never_faster_than.SetValue( never_faster_than )
self._never_slower_than.SetValue( never_slower_than )
self._death_file_velocity.SetValue( death_file_velocity )
self._flat_check_period.SetValue( never_faster_than )
self._flat_check_period_checkbox.setChecked( never_faster_than == never_slower_than )
self._UpdateEnabledControls()
def _ShowHelp( self ):
help = 'The intention of this object is to govern how frequently the watcher or subscription checks for new files--and when it should stop completely.'
@ -209,6 +175,24 @@ class EditCheckerOptions( ClientGUIScrolledPanels.EditPanel ):
QW.QMessageBox.information( self, 'Information', help )
def _UpdateEnabledControls( self ):
if self._flat_check_period_checkbox.isChecked():
self._reactive_check_panel.hide()
self._static_check_panel.show()
else:
self._reactive_check_panel.show()
self._static_check_panel.hide()
def EventFlatPeriodCheck( self ):
self._UpdateEnabledControls()
def GetValue( self ):
death_file_velocity = self._death_file_velocity.GetValue()
@ -229,6 +213,22 @@ class EditCheckerOptions( ClientGUIScrolledPanels.EditPanel ):
return ClientImportOptions.CheckerOptions( intended_files_per_check, never_faster_than, never_slower_than, death_file_velocity )
def SetValue( self, checker_options ):
( intended_files_per_check, never_faster_than, never_slower_than, death_file_velocity ) = checker_options.ToTuple()
self._intended_files_per_check.setValue( intended_files_per_check )
self._never_faster_than.SetValue( never_faster_than )
self._never_slower_than.SetValue( never_slower_than )
self._death_file_velocity.SetValue( death_file_velocity )
self._flat_check_period.SetValue( never_faster_than )
self._flat_check_period_checkbox.setChecked( never_faster_than == never_slower_than )
self._UpdateEnabledControls()
class TimeDeltaButton( QW.QPushButton ):
timeDeltaChanged = QC.Signal()


@ -533,7 +533,11 @@ class NewDialog( QP.Dialog ):
was_ended = self._TryEndModal( QW.QDialog.Rejected )
if not was_ended:
if was_ended:
event.accept()
else:
event.ignore()
@ -583,16 +587,6 @@ class NewDialog( QP.Dialog ):
self._TryEndModal( QW.QDialog.Rejected )
def EventOK( self ):
if not self or not QP.isValid( self ):
return
self._TryEndModal( QW.QDialog.Accepted )
def keyPressEvent( self, event ):
( modifier, key ) = ClientGUIShortcuts.ConvertKeyEventToSimpleTuple( event )
@ -671,7 +665,7 @@ class DialogThatTakesScrollablePanel( DialogThatResizes ):
self._panel = panel
if hasattr( self._panel, 'okSignal'): self._panel.okSignal.connect( self.EventOK )
if hasattr( self._panel, 'okSignal'): self._panel.okSignal.connect( self.DoOK )
buttonbox = self._GetButtonBox()
@ -703,14 +697,12 @@ class DialogNullipotent( DialogThatTakesScrollablePanel ):
def _InitialiseButtons( self ):
self._close = QW.QPushButton( 'close', self )
self._close.clicked.connect( self.EventOK )
self._close.clicked.connect( self.DoOK )
if self._hide_buttons:
self._close.setVisible( False )
self._widget_event_filter.EVT_CLOSE( lambda ev: self.EventOK() ) # the close event no longer goes to the default button, since it is hidden, wew
def _ReadyToClose( self, value ):


@ -356,9 +356,35 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
can_add_more_gallery_urls = num_urls_added > 0 and can_search_for_more_files
if self._can_generate_more_pages and can_add_more_gallery_urls:
flattened_results = list( itertools.chain.from_iterable( all_parse_results ) )
sub_gallery_urls = ClientParsing.GetURLsFromParseResults( flattened_results, ( HC.URL_TYPE_SUB_GALLERY, ), only_get_top_priority = True )
sub_gallery_urls = HydrusData.DedupeList( sub_gallery_urls )
new_sub_gallery_urls = [ sub_gallery_url for sub_gallery_url in sub_gallery_urls if sub_gallery_url not in gallery_urls_seen_before ]
num_new_sub_gallery_urls = len( new_sub_gallery_urls )
if num_new_sub_gallery_urls > 0:
flattened_results = list( itertools.chain.from_iterable( all_parse_results ) )
sub_gallery_seeds = [ GallerySeed( sub_gallery_url ) for sub_gallery_url in new_sub_gallery_urls ]
for sub_gallery_seed in sub_gallery_seeds:
sub_gallery_seed.SetFixedServiceKeysToTags( self._fixed_service_keys_to_tags )
gallery_seed_log.AddGallerySeeds( sub_gallery_seeds )
added_new_gallery_pages = True
gallery_urls_seen_before.update( sub_gallery_urls )
note += ' - {} sub-gallery urls found'.format( HydrusData.ToHumanInt( num_new_sub_gallery_urls ) )
if self._can_generate_more_pages and can_add_more_gallery_urls:
next_page_urls = ClientParsing.GetURLsFromParseResults( flattened_results, ( HC.URL_TYPE_NEXT, ), only_get_top_priority = True )
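The sub-gallery bookkeeping above reduces to an order-preserving dedupe followed by a seen-set filter. A minimal sketch, where the function name is hypothetical and `dict.fromkeys` stands in for `HydrusData.DedupeList` (assumed to be order-preserving):

```python
def filter_new_sub_gallery_urls( sub_gallery_urls, gallery_urls_seen_before ):
    # dedupe while keeping first-seen order, then drop anything
    # this gallery run has already queued
    deduped = list( dict.fromkeys( sub_gallery_urls ) )
    return [ url for url in deduped if url not in gallery_urls_seen_before ]

seen = { 'https://example.com/gallery/1' }
urls = [ 'https://example.com/gallery/2', 'https://example.com/gallery/1', 'https://example.com/gallery/2' ]
print( filter_new_sub_gallery_urls( urls, seen ) )  # ['https://example.com/gallery/2']
```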


@ -116,7 +116,24 @@ class CheckerOptions( HydrusSerialisable.SerialisableBase ):
( death_files_found, death_time_delta ) = self._death_file_velocity
return death_time_delta
death_file_velocity_period = death_time_delta
never_dies = death_files_found == 0
static_check_timing = self._never_faster_than == self._never_slower_than
if static_check_timing:
death_file_velocity_period = min( death_file_velocity_period, self._never_faster_than * 5 )
if never_dies or static_check_timing:
six_months = 6 * 30 * 86400
death_file_velocity_period = min( death_file_velocity_period, six_months )
return death_file_velocity_period
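Flattened out of the interleaved hunk above, the death-period clamping reads as below. This is an illustrative restatement under assumptions: the exact branch order in the shipped code may differ (the two `min` clamps commute anyway), and the constant is named from its intent.

```python
SIX_MONTHS = 6 * 30 * 86400  # seconds

def death_file_velocity_period( death_files_found, death_time_delta, never_faster_than, never_slower_than ):
    period = death_time_delta
    never_dies = death_files_found == 0
    static_check_timing = never_faster_than == never_slower_than
    if static_check_timing:
        # with a flat check period, five empty checks is long enough to judge death
        period = min( period, never_faster_than * 5 )
    if never_dies or static_check_timing:
        # cap the window so compaction can still discard ancient history
        period = min( period, SIX_MONTHS )
    return period

# a daily flat-period checker with a 90-day death window clamps to 5 days
print( death_file_velocity_period( 1, 90 * 86400, 86400, 86400 ) )  # 432000
```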
def GetNextCheckTime( self, file_seed_cache, last_check_time, last_next_check_time ):
@ -244,6 +261,18 @@ class CheckerOptions( HydrusSerialisable.SerialisableBase ):
return timing_statement + os.linesep * 2 + death_statement
def HasStaticCheckTime( self ):
return self._never_faster_than == self._never_slower_than
def NeverDies( self ):
( death_files_found, death_time_delta ) = self._death_file_velocity
return death_files_found == 0
def IsDead( self, file_seed_cache, last_check_time ):
if len( file_seed_cache ) == 0 and last_check_time == 0:
@ -1075,7 +1104,12 @@ class TagImportOptions( HydrusSerialisable.SerialisableBase ):
def CheckBlacklist( self, tags ):
sibling_tags = HG.client_controller.tag_siblings_manager.CollapseTags( CC.COMBINED_TAG_SERVICE_KEY, tags )
sibling_tags = set()
for tag in tags:
sibling_tags.update( HG.client_controller.tag_siblings_manager.GetAllSiblings( CC.COMBINED_TAG_SERVICE_KEY, tag ) )
for test_tags in ( tags, sibling_tags ):
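The `CheckBlacklist` change above moves from collapsing tags to their ideals to testing both the original tags and the union of all their siblings. A standalone sketch, where the `get_all_siblings` callable stands in for `tag_siblings_manager.GetAllSiblings`:

```python
def blacklist_hits( tags, blacklist, get_all_siblings ):
    # a tag is blocked if it, or any tag in its sibling group, is blacklisted
    sibling_tags = set()
    for tag in tags:
        sibling_tags.update( get_all_siblings( tag ) )
    for test_tags in ( set( tags ), sibling_tags ):
        hits = test_tags & blacklist
        if len( hits ) > 0:
            return hits
    return set()

siblings = { 'lotr': { 'lotr', 'series:the lord of the rings' } }
get_all_siblings = lambda tag: siblings.get( tag, { tag } )
print( blacklist_hits( [ 'lotr' ], { 'series:the lord of the rings' }, get_all_siblings ) )
# {'series:the lord of the rings'}
```

Testing against the full sibling group means a blacklist entry catches a tag in any of its sibling spellings, not just the collapsed ideal.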


@ -1147,6 +1147,11 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
return best_next_work_time
def GetCheckerOptions( self ):
return self._checker_options
def GetGUGKeyAndName( self ):
return self._gug_key_and_name
@ -2116,7 +2121,7 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):
self._paused = not self._paused
def RegisterSyncComplete( self, checker_options ):
def RegisterSyncComplete( self, checker_options: ClientImportOptions.CheckerOptions ):
self._last_check_time = HydrusData.GetNow()
@ -2124,7 +2129,7 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):
death_period = checker_options.GetDeathFileVelocityPeriod()
compact_before_this_time = self._last_check_time - ( death_period * 2 )
compact_before_this_time = self._last_check_time - death_period
if self._gallery_seed_log.CanCompact( compact_before_this_time ):
@ -2184,7 +2189,7 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):
self._tag_import_options = tag_import_options
def UpdateNextCheckTime( self, checker_options ):
def UpdateNextCheckTime( self, checker_options: ClientImportOptions.CheckerOptions ):
if self._check_now:


@ -177,10 +177,6 @@ def CollapseTagSiblingPairs( groups_of_pairs ):
return siblings
def DeLoopTagSiblingPairs( groups_of_pairs ):
pass
def LoopInSimpleChildrenToParents( simple_children_to_parents, child, parent ):
potential_loop_paths = { parent }
@ -832,7 +828,7 @@ class TagSiblingsManager( object ):
if service.GetServiceType() == HC.LOCAL_TAG:
local_tags_pairs = set( all_pairs )
local_tags_pairs.update( all_pairs )
else:


@ -59,6 +59,10 @@ def ConvertParseResultToPrettyString( result ):
return 'next page url (priority ' + str( priority ) + '): ' + parsed_text
elif url_type == HC.URL_TYPE_SUB_GALLERY:
return 'sub-gallery url (priority ' + str( priority ) + '): ' + parsed_text
elif content_type == HC.CONTENT_TYPE_MAPPINGS:
@ -142,6 +146,10 @@ def ConvertParsableContentToPrettyString( parsable_content, include_veto = False
pretty_strings.append( 'gallery next page url' )
elif url_type == HC.URL_TYPE_SUB_GALLERY:
pretty_strings.append( 'sub-gallery url' )
elif content_type == HC.CONTENT_TYPE_MAPPINGS:


@ -692,3 +692,90 @@ class TagFilter( HydrusSerialisable.SerialisableBase ):
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_TAG_FILTER ] = TagFilter
class TagSiblingsStructure( object ):
def __init__( self ):
self._bad_tags_to_good_tags = {}
self._bad_tags_to_ideal_tags = {}
self._ideal_tags_to_all_worse_tags = collections.defaultdict( set )
# some sort of structure for 'bad cycles' so we can later raise these to the user to fix
def AddPair( self, bad_tag: object, good_tag: object ):
# disallowed siblings are:
# A -> A
# larger loops
# A -> C when A -> B already exists
if bad_tag == good_tag:
return
if bad_tag in self._bad_tags_to_good_tags:
return
joining_existing_chain = good_tag in self._bad_tags_to_ideal_tags
extending_existing_chain = bad_tag in self._ideal_tags_to_all_worse_tags
if extending_existing_chain and joining_existing_chain:
joined_chain_ideal = self._bad_tags_to_ideal_tags[ good_tag ]
if joined_chain_ideal == bad_tag:
# we found a cycle, as the ideal of the chain we are joining is our bad tag
# basically the chain we are joining and the chain we are extending are the same one
return
# now compute our ideal
ideal_tags_that_need_updating = set()
if joining_existing_chain:
# our ideal will be the end of that chain
ideal_tag = self._bad_tags_to_ideal_tags[ good_tag ]
else:
ideal_tag = good_tag
self._bad_tags_to_good_tags[ bad_tag ] = good_tag
self._bad_tags_to_ideal_tags[ bad_tag ] = ideal_tag
self._ideal_tags_to_all_worse_tags[ ideal_tag ].add( bad_tag )
if extending_existing_chain:
# the existing chain needs its ideal updating
old_ideal_tag = bad_tag
bad_tags_that_need_updating = self._ideal_tags_to_all_worse_tags[ old_ideal_tag ]
for bad_tag_that_needs_updating in bad_tags_that_need_updating:
self._bad_tags_to_ideal_tags[ bad_tag_that_needs_updating ] = ideal_tag
self._ideal_tags_to_all_worse_tags[ ideal_tag ].update( bad_tags_that_need_updating )
del self._ideal_tags_to_all_worse_tags[ old_ideal_tag ]
def GetBadTagsToIdealTags( self ):
return self._bad_tags_to_ideal_tags
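A condensed, standalone restatement of `TagSiblingsStructure` makes the chain rules above easy to exercise. This sketch compacts the update step and the tag names are only examples:

```python
import collections

class TagSiblingsStructureSketch:

    # condensed restatement of the AddPair logic above, for illustration

    def __init__( self ):
        self._bad_tags_to_good_tags = {}
        self._bad_tags_to_ideal_tags = {}
        self._ideal_tags_to_all_worse_tags = collections.defaultdict( set )

    def AddPair( self, bad_tag, good_tag ):
        if bad_tag == good_tag:
            return # A -> A is disallowed
        if bad_tag in self._bad_tags_to_good_tags:
            return # A -> C when A -> B already exists is disallowed
        joining_existing_chain = good_tag in self._bad_tags_to_ideal_tags
        extending_existing_chain = bad_tag in self._ideal_tags_to_all_worse_tags
        if joining_existing_chain and extending_existing_chain and self._bad_tags_to_ideal_tags[ good_tag ] == bad_tag:
            return # cycle: the chain we would join already resolves to our bad tag
        ideal_tag = self._bad_tags_to_ideal_tags[ good_tag ] if joining_existing_chain else good_tag
        self._bad_tags_to_good_tags[ bad_tag ] = good_tag
        self._bad_tags_to_ideal_tags[ bad_tag ] = ideal_tag
        self._ideal_tags_to_all_worse_tags[ ideal_tag ].add( bad_tag )
        if extending_existing_chain:
            # tags that previously resolved to bad_tag now resolve to the new ideal
            worse_tags = self._ideal_tags_to_all_worse_tags.pop( bad_tag )
            for worse_tag in worse_tags:
                self._bad_tags_to_ideal_tags[ worse_tag ] = ideal_tag
            self._ideal_tags_to_all_worse_tags[ ideal_tag ].update( worse_tags )

    def GetBadTagsToIdealTags( self ):
        return dict( self._bad_tags_to_ideal_tags )

s = TagSiblingsStructureSketch()
s.AddPair( 'samus', 'samus aran' )                 # starts a chain
s.AddPair( 'samus aran', 'character:samus aran' )  # extends it; 'samus' is re-pointed
s.AddPair( 'character:samus aran', 'samus' )       # would close a cycle, silently ignored
print( s.GetBadTagsToIdealTags() )
```

Note how extending a chain re-points every worse tag that previously resolved to `bad_tag`, so lookups stay one hop deep.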


@ -70,7 +70,7 @@ options = {}
# Misc
NETWORK_VERSION = 18
SOFTWARE_VERSION = 391
SOFTWARE_VERSION = 392
CLIENT_API_VERSION = 11
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@ -794,6 +794,7 @@ URL_TYPE_UNKNOWN = 5
URL_TYPE_NEXT = 6
URL_TYPE_DESIRED = 7
URL_TYPE_SOURCE = 8
URL_TYPE_SUB_GALLERY = 9
url_type_string_lookup = {}
@ -805,7 +806,8 @@ url_type_string_lookup[ URL_TYPE_WATCHABLE ] = 'watchable url'
url_type_string_lookup[ URL_TYPE_UNKNOWN ] = 'unknown url'
url_type_string_lookup[ URL_TYPE_NEXT ] = 'next page url'
url_type_string_lookup[ URL_TYPE_DESIRED ] = 'downloadable/pursuable url'
url_type_string_lookup[ URL_TYPE_SOURCE ] = 'associable/source url'
url_type_string_lookup[ URL_TYPE_SUB_GALLERY ] = 'sub-gallery url (is queued even if creator found no post/file urls)'
# default options


@ -13,48 +13,35 @@ import time
CONNECTION_REFRESH_TIME = 60 * 30
def CanVacuum( db_path, stop_time = None ):
def CheckCanVacuum( db_path, stop_time = None ):
try:
db = sqlite3.connect( db_path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
c = db.cursor()
( page_size, ) = c.execute( 'PRAGMA page_size;' ).fetchone()
( page_count, ) = c.execute( 'PRAGMA page_count;' ).fetchone()
( freelist_count, ) = c.execute( 'PRAGMA freelist_count;' ).fetchone()
db_size = ( page_count - freelist_count ) * page_size
if stop_time is not None:
db = sqlite3.connect( db_path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
approx_vacuum_speed_mb_per_s = 1048576 * 1
c = db.cursor()
approx_vacuum_duration = db_size // approx_vacuum_speed_mb_per_s
( page_size, ) = c.execute( 'PRAGMA page_size;' ).fetchone()
( page_count, ) = c.execute( 'PRAGMA page_count;' ).fetchone()
( freelist_count, ) = c.execute( 'PRAGMA freelist_count;' ).fetchone()
time_i_will_have_to_start = stop_time - approx_vacuum_duration
db_size = ( page_count - freelist_count ) * page_size
if stop_time is not None:
if HydrusData.TimeHasPassed( time_i_will_have_to_start ):
approx_vacuum_speed_mb_per_s = 1048576 * 1
approx_vacuum_duration = db_size // approx_vacuum_speed_mb_per_s
time_i_will_have_to_start = stop_time - approx_vacuum_duration
if HydrusData.TimeHasPassed( time_i_will_have_to_start ):
return False
raise Exception( 'I believe you need about ' + HydrusData.TimeDeltaToPrettyTimeDelta( approx_vacuum_duration ) + ' to vacuum, but there is not enough time allotted.' )
( db_dir, db_filename ) = os.path.split( db_path )
( has_space, reason ) = HydrusPaths.HasSpaceForDBTransaction( db_dir, db_size )
return has_space
except Exception as e:
HydrusData.Print( 'Could not determine whether to vacuum or not:' )
HydrusData.PrintException( e )
return False
( db_dir, db_filename ) = os.path.split( db_path )
HydrusPaths.CheckHasSpaceForDBTransaction( db_dir, db_size )
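The size and duration estimate inside the `CheckCanVacuum` hunk, extracted as a pure function for illustration; the ~1MB/s vacuum speed is the assumption the code itself makes:

```python
def approx_vacuum_estimate( page_size, page_count, freelist_count ):
    # live database size, ignoring pages already on the freelist
    db_size = ( page_count - freelist_count ) * page_size
    approx_vacuum_speed_mb_per_s = 1048576 * 1
    approx_vacuum_duration = db_size // approx_vacuum_speed_mb_per_s
    return ( db_size, approx_vacuum_duration )

# a 4GiB database with an empty freelist vacuums in roughly 4096s
print( approx_vacuum_estimate( 4096, 1048576, 0 ) )  # (4294967296, 4096)
```

The three inputs come straight from SQLite's `PRAGMA page_size`, `PRAGMA page_count`, and `PRAGMA freelist_count`.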
def ReadLargeIdQueryInSeparateChunks( cursor, select_statement, chunk_size ):
@ -422,6 +409,11 @@ class HydrusDB( object ):
def _GenerateDBJob( self, job_type, synchronous, action, *args, **kwargs ):
return HydrusData.JobDatabase( job_type, synchronous, action, *args, **kwargs )
def _GetRowCount( self ):
row_count = self._c.rowcount
@ -979,7 +971,7 @@ class HydrusDB( object ):
synchronous = True
job = HydrusData.JobDatabase( job_type, synchronous, action, *args, **kwargs )
job = self._GenerateDBJob( job_type, synchronous, action, *args, **kwargs )
if HG.model_shutdown:
@ -1005,7 +997,7 @@ class HydrusDB( object ):
job_type = 'write'
job = HydrusData.JobDatabase( job_type, synchronous, action, *args, **kwargs )
job = self._GenerateDBJob( job_type, synchronous, action, *args, **kwargs )
if HG.model_shutdown:
@ -1030,16 +1022,16 @@ class TemporaryIntegerTable( object ):
def __enter__( self ):
self._cursor.execute( 'CREATE TABLE ' + self._table_name + ' ( ' + self._column_name + ' INTEGER PRIMARY KEY );' )
self._cursor.execute( 'CREATE TABLE {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) )
self._cursor.executemany( 'INSERT INTO ' + self._table_name + ' ( ' + self._column_name + ' ) VALUES ( ? );', ( ( i, ) for i in self._integer_iterable ) )
self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? );'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) )
return self._table_name
def __exit__( self, exc_type, exc_val, exc_tb ):
self._cursor.execute( 'DROP TABLE ' + self._table_name + ';' )
self._cursor.execute( 'DROP TABLE {};'.format( self._table_name ) )
return False
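The `TemporaryIntegerTable` context manager above, restated standalone so it can run against an in-memory database. The fixed table name is a simplification; the real class generates one per instance.

```python
import sqlite3

class TemporaryIntegerTableSketch:

    # standalone sketch of the TemporaryIntegerTable context manager above

    def __init__( self, cursor, integer_iterable ):
        self._cursor = cursor
        self._integer_iterable = integer_iterable
        self._table_name = 'temp_integers'
        self._column_name = 'int_id'

    def __enter__( self ):
        self._cursor.execute( 'CREATE TABLE {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) )
        self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? );'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) )
        return self._table_name

    def __exit__( self, exc_type, exc_val, exc_tb ):
        # always drop, even if the body raised; returning False re-raises
        self._cursor.execute( 'DROP TABLE {};'.format( self._table_name ) )
        return False

db = sqlite3.connect( ':memory:' )
c = db.cursor()

with TemporaryIntegerTableSketch( c, [ 3, 1, 2 ] ) as table_name:
    rows = c.execute( 'SELECT int_id FROM {} ORDER BY 1;'.format( table_name ) ).fetchall()

print( rows )  # [(1,), (2,), (3,)]
```

The `.format()` style used here mirrors the change the diff makes away from string concatenation; table and column names cannot be bound as `?` parameters in SQLite, which is why they are formatted in.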


@ -1630,6 +1630,16 @@ class JobDatabase( object ):
self._result_ready = threading.Event()
def __str__( self ):
return 'DB Job: {}'.format( self.ToString() )
def _DoDelayedResultRelief( self ):
pass
def GetCallableTuple( self ):
return ( self._action, self._args, self._kwargs )
@ -1650,6 +1660,8 @@ class JobDatabase( object ):
raise HydrusExceptions.ShutdownException( 'Application quit before db could serve result!' )
self._DoDelayedResultRelief()
if isinstance( self._result, Exception ):
@ -1682,7 +1694,7 @@ class JobDatabase( object ):
def ToString( self ):
return self._type + ' ' + self._action
return '{} {}'.format( self._type, self._action )
class ServiceUpdate( object ):


@ -16,6 +16,7 @@ db_synchronous_override = None
import_folders_running = False
export_folders_running = False
db_ui_hang_relief_mode = False
callto_report_mode = False
db_report_mode = False
db_profile_mode = False


@ -46,6 +46,61 @@ def AppendPathUntilNoConflicts( path ):
return good_path_absent_ext + ext
def CheckHasSpaceForDBTransaction( db_dir, num_bytes ):
if HG.no_db_temp_files:
space_needed = int( num_bytes * 1.1 )
approx_available_memory = psutil.virtual_memory().available * 4 / 5
if approx_available_memory < num_bytes:
raise Exception( 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' available memory, since you are running in no_db_temp_files mode, but you only seem to have ' + HydrusData.ToHumanBytes( approx_available_memory ) + '.' )
db_disk_free_space = GetFreeSpace( db_dir )
if db_disk_free_space < space_needed:
raise Exception( 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, but you only seem to have ' + HydrusData.ToHumanBytes( db_disk_free_space ) + '.' )
else:
temp_dir = tempfile.gettempdir()
temp_disk_free_space = GetFreeSpace( temp_dir )
temp_and_db_on_same_device = GetDevice( temp_dir ) == GetDevice( db_dir )
if temp_and_db_on_same_device:
space_needed = int( num_bytes * 2.2 )
if temp_disk_free_space < space_needed:
raise Exception( 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, which I think also holds your temporary path, but you only seem to have ' + HydrusData.ToHumanBytes( temp_disk_free_space ) + '.' )
else:
space_needed = int( num_bytes * 1.1 )
if temp_disk_free_space < space_needed:
raise Exception( 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your temporary path\'s partition, which I think is ' + temp_dir + ', but you only seem to have ' + HydrusData.ToHumanBytes( temp_disk_free_space ) + '.' )
db_disk_free_space = GetFreeSpace( db_dir )
if db_disk_free_space < space_needed:
raise Exception( 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, but you only seem to have ' + HydrusData.ToHumanBytes( db_disk_free_space ) + '.' )
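The sizing rule in `CheckHasSpaceForDBTransaction` reduces to a small decision. A sketch of just that rule, with the 1.1x/2.2x multipliers taken from the code above and the function name hypothetical:

```python
def space_needed_for_db_transaction( num_bytes, temp_on_db_device, no_db_temp_files = False ):
    if no_db_temp_files or not temp_on_db_device:
        # only one extra copy of the data lands on any single partition
        return int( num_bytes * 1.1 )
    # the temp copy and the db growth hit the same partition, so budget for both
    return int( num_bytes * 2.2 )

print( space_needed_for_db_transaction( 1000, temp_on_db_device = True ) )   # 2200
print( space_needed_for_db_transaction( 1000, temp_on_db_device = False ) )  # 1100
```

The full function also checks available memory in `no_db_temp_files` mode and verifies free space on both the temp and db partitions; this sketch covers only the multiplier choice.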
def CleanUpTempPath( os_file_handle, temp_path ):
try:
@ -389,63 +444,6 @@ def GetTempPath( suffix = '', dir = None ):
return tempfile.mkstemp( suffix = suffix, prefix = 'hydrus', dir = dir )
def HasSpaceForDBTransaction( db_dir, num_bytes ):
if HG.no_db_temp_files:
space_needed = int( num_bytes * 1.1 )
approx_available_memory = psutil.virtual_memory().available * 4 / 5
if approx_available_memory < num_bytes:
return ( False, 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' available memory, since you are running in no_db_temp_files mode, but you only seem to have ' + HydrusData.ToHumanBytes( approx_available_memory ) + '.' )
db_disk_free_space = GetFreeSpace( db_dir )
if db_disk_free_space < space_needed:
return ( False, 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, but you only seem to have ' + HydrusData.ToHumanBytes( db_disk_free_space ) + '.' )
else:
temp_dir = tempfile.gettempdir()
temp_disk_free_space = GetFreeSpace( temp_dir )
temp_and_db_on_same_device = GetDevice( temp_dir ) == GetDevice( db_dir )
if temp_and_db_on_same_device:
space_needed = int( num_bytes * 2.2 )
if temp_disk_free_space < space_needed:
return ( False, 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, which I think also holds your temporary path, but you only seem to have ' + HydrusData.ToHumanBytes( temp_disk_free_space ) + '.' )
else:
space_needed = int( num_bytes * 1.1 )
if temp_disk_free_space < space_needed:
return ( False, 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your temporary path\'s partition, which I think is ' + temp_dir + ', but you only seem to have ' + HydrusData.ToHumanBytes( temp_disk_free_space ) + '.' )
db_disk_free_space = GetFreeSpace( db_dir )
if db_disk_free_space < space_needed:
return ( False, 'I believe you need about ' + HydrusData.ToHumanBytes( space_needed ) + ' on your db\'s partition, but you only seem to have ' + HydrusData.ToHumanBytes( db_disk_free_space ) + '.' )
return ( True, 'You seem to have enough space!' )
def LaunchDirectory( path ):
def do_it():


@@ -280,7 +280,38 @@ class HydrusDomain( object ):
def CheckValid( self, client_ip ):
if self._local_only and client_ip != '127.0.0.1':
# >::ffff:127.0.0.1
#▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓█▓▓▓▓█████▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
#▒▓▓▓▒▒▒▓▓▓▓▓▓▓▒▒▓▓▓▓▒▒░░░░ ░░▒▒▒▒▒▓▓▓▓▓▓▓▒▒▓▓▓▓▓▓▓▒▒▒▓▓▓
#▓▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▒▒░░ ░░░░░ ░▒▒▓▓▓▓▓▓▓▓▒▓▓▓▓▓▒▓▓▓▒▓▓
#▒▓▓▓▓▓▓▓▓▓▓▓▓▓▓▒▒░▒▒▒▒▓▓▓▒▒▒▒▒▒▒▒▒▓▓▓▓▓▒▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▒▒
#▒▒▓▓▓▓▓▓▓▒▒▓▓▒▒▒░░░░░▒▒▒▒▒▒▓▓▓▓█▓▓▓█▓▓▓▓▓▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▒
#▓▓▓▓▓▓▓▓▓▓▓▒▓▓░ ░░░░▓▓█▓█▓▓▓█▓▓▓▓▓▓▓▓▓▓▓▓▒▓▓▓▓▓▒▓
#▓▓▓▓▒▓▓▓▓▓▓▓▓░ ░ ▒▓██▓█▓▓▓▓▓▓█▓▒▓▓▓▓▓▓▓▓▓▓▓▓▓▓
#▓▓▓▓▓▒▓▓▓▓▓█▒ ░ ░░░ ▓████▓▓▓▓▓█▓▒▒▓▓▓▓▓▓▓▒▒▓▓▓▓
#▓▓▓▓▓▓▓▒▓██▓ ░░░░░░░░░░░░ ░ ▒▓█▓▓▓▓▓▓▓█▓▓▒▓▓▓▓▓▒▓▓▓▒▓▓
#▒▓▓▓▓▓▓▓▓██░ ░▒▒▒░▒░░░░░░░░░░░ ░▓█▓▓▓███▓█▓▓▓▒▓▓▓▓▓▓▓▓▒▒
#▒▒▓▓▓▓▓▓▓█▓ ░▒░▒▒▒▒▒░░░░░░░░░░░░ ▒▓█▓█▓███▓▓▓▓▓▒▓▓▓▓▓▓▓▓▒
#▓▓▓▒▓▓▓▓▓█▓░▒▒▒▒▒▒▒░▒░░░▒░░░░░░░░░▒▓▓▓▓▓█▒ ░▒▓▓▓▓▒▓▓▓▓▓▒▓
#▓▓▓▓▒▓▓▓▓█▓░▒▓▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░▒▓▓█▓░░▒▒▓▓▓▓▓▒▓▓▓▓▓▓
#▓▓▓▓▓▒▓▓▓█▓░▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░ ▒██▒▒▒░░▓▓▓▓▓▓▒▒▓▓▓▓
#▓▓▓▓▓▓▓▒▒█▓░░▒▒▒▒▒▒▒▒▒▒▒ ░░░░░░░▒░░ ░█▓ ░▒▒░▓▓▓▓▓▒▓▓▓▒▓▓
#▒▓▓▓▓▓▓▓▓▓▓▒▒▓▒▒▓▓▓▒▒▒▓▓▓▓▓▒▒▒▒▒▒▒▒░░▒█▓ ▒▒ ▒▓▓▓▓▓▓▓▓▓▓▒▓
#▒▒▓▓▓▓▓▓▓▓▓▒▒▓▓█▓▓▓▓▓▒▒▓▓██▓▓▓▒▒▒▒░░░░░░░░░░░▓▓▓▓▓▓▓▓▓▓▓▒
#▓▓▓▒▓▓▓▓▓░ ▓▒▓▓▓▓▓▓▒▓▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░▒▒ ▒█▓▓▓▓▓▓▓▒▓
#▓▓▓▓▒▓▓▓▓▓▒▓▒▒▒▒▓▓▓▓▒░▒▒░░░▒▒▒░░░░░░░░░░░░░▒ ░▓▓████▓
#▓▓▓▓▓▒▓▓▓▓▓▓▒▒▒▒▒▒▓▓░░░▒▒▒▒▒▒▒░░░░▒▒▒▒░░▒░░░ ▒▒▓
#▓▓▓▓▓▓▓▒▒▓▓▓▓▒▒▒▒▓▓▒░░░░▒▒▒▒▒▓▒▒▒▒▒▒▒▒░░▒▒▒░ ░
#▒▓▓▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▓▓▓▒▒▒▒▒░░▒▓▓▓▒░▒▒▒▒▒▒▒▒▒ ▒░
#▒▒▓▓▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▓▓▓▒▒▒▒▒▒▒▓▓▓▒▒▒▒▒▒▒▒▒▒ ░▒░ ░░░░
#▓▓▓▒▓▓▓▓▓▓▓▓▒▓▓▓▓▓▓▓▓▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▒▓▓▒ ░▓░ ░░░░
#▓▓▓▓▓▒▓▓▓▓▓▓▓▓▓▓█▓▓▓▓▓▓▓▒▒▒▒░▒▒▒▒▒▒▒▒▓▓▒ ░ ░▓▒ ░░░░
#▓▓▓▓▓▒▒▓▓▓▓▓██▓▒░ ▒▓▓▓▓▒▓▒▒▒▒░░▒▒▒▒▓▓▒░░░ ▒▒▓░ ░▒
#▒▓▓▓▓▓▓▓▒▓▓█▓░ ▒▓▓▒▒▒▒░░▒▒▒▒▒▓▓▓▒░░░ ▒▒▒▒░░░░░░░▒▒░
#▒▒▓▓▓▓▓▓█▓▒░ ░ ░ ░▒▓▓▓▒▒▒▒▒▒▓▓█▓▒░░░░ ▒▒▒▒ ░░░▒▒░▒▒░░
#▒▒▒▓▓▓▓█▓░ ░ ░░ ▒░▒▒▒▒▓██████▓░░░░ ▒▒▒▒░ ░ ▒▒░▒▒░░
if self._local_only and client_ip not in ( '127.0.0.1', '::1', '::ffff:127.0.0.1' ):
raise HydrusExceptions.InsufficientCredentialsException( 'Only local access allowed!' )
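The fix above whitelists the three literal loopback spellings. A more general approach, sketched here with the stdlib `ipaddress` module (the helper name is illustrative, not hydrus's code), normalises IPv4-mapped IPv6 addresses such as `::ffff:127.0.0.1` before testing for loopback:

```python
import ipaddress

def is_local_client( client_ip ):
    
    # Normalise so an IPv4-mapped IPv6 loopback ( '::ffff:127.0.0.1' )
    # is recognised just like plain '127.0.0.1' or '::1'.
    address = ipaddress.ip_address( client_ip )
    
    if address.version == 6 and address.ipv4_mapped is not None:
        
        address = address.ipv4_mapped
        
    return address.is_loopback
    
```

This also accepts the rest of `127.0.0.0/8`, which the tuple check does not; whether that is desirable depends on how strictly "local only" should be read.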


@@ -387,7 +387,34 @@ class Controller( HydrusController.HydrusController ):
context_factory = twisted.internet.ssl.DefaultOpenSSLContextFactory( ssl_key_path, ssl_cert_path, sslmethod )
self._service_keys_to_connected_ports[ service_key ] = reactor.listenSSL( port, http_factory, context_factory )
ipv6_port = None
try:
ipv6_port = reactor.listenSSL( port, http_factory, context_factory, interface = '::' )
except Exception as e:
HydrusData.Print( 'Could not bind to IPv6:' )
HydrusData.Print( str( e ) )
ipv4_port = None
try:
ipv4_port = reactor.listenSSL( port, http_factory, context_factory )
except:
if ipv6_port is None:
raise
self._service_keys_to_connected_ports[ service_key ] = ( ipv4_port, ipv6_port )
if not HydrusNetworking.LocalPortInUse( port ):
@@ -409,11 +436,21 @@ class Controller( HydrusController.HydrusController ):
deferreds = []
-        for port in self._service_keys_to_connected_ports.values():
-            
-            deferred = defer.maybeDeferred( port.stopListening )
-            
-            deferreds.append( deferred )
+        for ( ipv4_port, ipv6_port ) in self._service_keys_to_connected_ports.values():
+            
+            if ipv4_port is not None:
+                
+                deferred = defer.maybeDeferred( ipv4_port.stopListening )
+                
+                deferreds.append( deferred )
+                
+            if ipv6_port is not None:
+                
+                deferred = defer.maybeDeferred( ipv6_port.stopListening )
+                
+                deferreds.append( deferred )
self._service_keys_to_connected_ports = {}
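The server changes above bind IPv6 and IPv4 separately and only fail if neither family can listen. The same pattern, sketched with raw stdlib sockets instead of twisted (the function name and structure are illustrative only):

```python
import socket

def bind_dual_stack( port ):
    
    # Try IPv6 and IPv4 separately; keep whichever bound, and only fail
    # outright if neither address family could listen.
    bound = []
    
    try:
        
        s6 = socket.socket( socket.AF_INET6, socket.SOCK_STREAM )
        
        # V6ONLY keeps the two listeners independent, so the IPv4 bind on
        # the same port does not collide with the IPv6 one.
        s6.setsockopt( socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1 )
        s6.bind( ( '::', port ) )
        s6.listen()
        
        bound.append( s6 )
        
    except OSError:
        
        pass
        
    try:
        
        s4 = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
        s4.bind( ( '0.0.0.0', port ) )
        s4.listen()
        
        bound.append( s4 )
        
    except OSError:
        
        if not bound:
            
            raise
            
        
    return bound
    
```

Note that with port 0 each socket receives its own ephemeral port; a real server passes the same fixed port to both, which works because of the `IPV6_V6ONLY` flag.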


@@ -3335,21 +3335,25 @@ class DB( HydrusDB.HydrusDB ):
db_path = os.path.join( self._db_dir, self._db_filenames[ name ] )
-                if HydrusDB.CanVacuum( db_path ):
-                    
-                    started = HydrusData.GetNowPrecise()
-                    
-                    HydrusDB.VacuumDB( db_path )
-                    
-                    time_took = HydrusData.GetNowPrecise() - started
-                    
-                    HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
-                    
-                else:
-                    
-                    HydrusData.Print( 'Could not vacuum ' + db_path + ' (probably due to limited disk space on db or system drive).' )
+                try:
+                    
+                    HydrusDB.CheckCanVacuum( db_path )
+                    
+                except Exception as e:
+                    
+                    HydrusData.Print( 'Cannot vacuum "{}": {}'.format( db_path, e ) )
+                    
+                    continue
+                    
+                started = HydrusData.GetNowPrecise()
+                
+                HydrusDB.VacuumDB( db_path )
+                
+                time_took = HydrusData.GetNowPrecise() - started
+                
+                HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
names_done.append( name )
except Exception as e:
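For context on the vacuum maintenance code: `VACUUM` rebuilds the database into a temporary copy, so roughly the database's own size must be free on disk before it is safe to start. A simplified stand-alone sketch of a check-then-vacuum pair, assuming only the stdlib (the real hydrus check also considers the temp directory and system drive; these helper names are illustrative):

```python
import os
import shutil
import sqlite3

def check_can_vacuum( db_path ):
    
    # VACUUM writes a compacted copy beside the db, so demand about twice
    # the current file size free on the db's partition as a safety margin.
    db_size = os.path.getsize( db_path )
    
    free = shutil.disk_usage( os.path.dirname( os.path.abspath( db_path ) ) ).free
    
    if free < db_size * 2:
        
        raise Exception( 'Not enough free space to vacuum "{}".'.format( db_path ) )
    

def vacuum_db( db_path ):
    
    check_can_vacuum( db_path )
    
    db = sqlite3.connect( db_path )
    
    try:
        
        db.execute( 'VACUUM' )
        
    finally:
        
        db.close()
    
```

Raising from the check, rather than returning a bool, is what lets the caller print the specific reason and `continue` to the next file, as in the new code above.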
