Version 549

@@ -7,6 +7,47 @@ title: Changelog

!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 549](https://github.com/hydrusnetwork/hydrus/releases/tag/v549)

### misc

* optimised taglist sorting code, which is really groaning when it gets to 50k+ unique tags. the counting is more efficient now, but more work can be done
* optimised taglist internal update recalc by updating existing items in place instead of remove/replace and skipping cleanup-sort when no new items are added and/or the sort is not count-based. it should also preserve selection and focus stuff a bit better now
* thanks to a user, we have some new url classes to handle the recent change in sankaku URL format. your sank subscriptions are likely to go slightly crazy for a week as they figure out where they are caught up to, sorry!
* if a file copy into your temp directory fails due to 'Errno 28 No space left on device', the client now pops up more information about this specific problem, which is often a symptom of trying to import a 4GB file into a ramdisk-hosted temp directory and similar. relatedly, many Linux flavours have special rules about the max filesize in the temp directory!

### maintenance and processing

* _advanced users only, don't worry about it too much_
* the _options->maintenance and processing_ page has several advanced new settings. these are all separately hardcoded systems that I have merged into more of the same logic this week. the UI is a tower of spam, but it will serve us well when we want to test and fine-tune clients that are having various sorts of maintenance trouble
* a new section for potential duplicate search now duplicates the 'do search in idle time' setting you see in the duplicates page and has new 'work packet time' and 'rest time percentage' settings
* a new section for repository processing now exposes the similar 'work/rest' timings for 'normal', 'idle', and 'very idle' (after an hour of idle mode). **if I have been working with you on freezes or memory explosions during PTR processing, increase the rest percentages here to 50-2,000, and let's see if that gives your client time to breathe and clean up old work**
* a new section for sibling/parent sync does the same, for 'idle', 'normal', and 'work hard' modes. **same deal here, probably**
* a new section for the deferred database table delete system does the same, for 'idle', 'normal', and 'work hard' modes
* I duplicated the 'do sibling/parent sync in idle/normal time' _tags_ menu settings to this options page. they are synced, so altering one updates the other
* if you change the 'run file maintenance jobs in idle/normal time' settings in the dialog, the _database_ menu now properly updates to reflect any changes
* the way these various systems calculate their rest time now smoothes out extreme bumps. sibling/parent display, in particular, should wait for a good amount of time after a big bump, but won't allow itself to wait for a crazy amount of time
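
all these 'work packet time' and 'rest time percentage' settings drive the same throttle shape, which you can see in this week's code changes: a job works for the packet time, then rests for the given percentage of however long the packet actually took, with the billed time capped at 5x the requested packet so one extreme bump cannot schedule a crazy-long rest. a minimal sketch, with made-up numbers:

```python
# sketch of the shared work/rest throttle described above. the function name
# and the example numbers are illustrative, not hydrus's real API

def get_rest_period( work_period_s: float, time_it_took_s: float, rest_percentage: int ) -> float:
    
    rest_ratio = rest_percentage / 100
    
    # cap the billed time at 5x the requested packet, smoothing out extreme bumps
    reasonable_work_time = min( 5 * work_period_s, time_it_took_s )
    
    return reasonable_work_time * rest_ratio

# a normal packet: asked for 0.5s, took 0.6s, resting 1,000% -> 6s rest
print( get_rest_period( 0.5, 0.6, 1000 ) )

# a big bump: asked for 0.5s, took 60s -> billed as 2.5s -> 25s rest, not 600s
print( get_rest_period( 0.5, 60.0, 1000 ) )
```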

### all deleted files

* fixed the various 'clear deletion record' commands to also remove from the 'all deleted files' service cache, which stores all your deleted files for all known specific file services and is used for various search tech on deleted file domains
* also wrote a command to completely regen this cache from original deletion records. it can be fired under _database->regenerate->all deleted files_. this will happen on update, to fix the above retroactively
* removed the foolish 'deleted from all deleted files' typo-entry from the advanced multiple file domain selector list. the value and use of a deletion record from a virtual combined deletion record is a complicated idea, and the entities that lurk in the shadows of the inverse sphere would strongly prefer that we not consider the matter any more
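
the regen works by walking every stakeholder service's deletion records and keeping, per file, the earliest known deletion timestamp, treating an unknown (None) timestamp as upgradeable by any real one. this mirrors the `_SyncCombinedDeletedFiles` logic in this commit; the data shapes here are illustrative, since the real thing works on hash_ids against the database:

```python
# sketch of the 'earliest deletion timestamp' merge the cache regen performs.
# per_service_records: list of dicts mapping hash -> deletion timestamp (or None if unknown)

def merge_earliest_timestamps( per_service_records ):
    
    earliest = {}
    
    for records in per_service_records:
        
        for ( file_hash, timestamp ) in records.items():
            
            if file_hash in earliest:
                
                if timestamp is not None:
                    
                    existing = earliest[ file_hash ]
                    
                    # an unknown timestamp is always beaten by a real one
                    if existing is None or timestamp < existing:
                        
                        earliest[ file_hash ] = timestamp
                
            else:
                
                earliest[ file_hash ] = timestamp
        
    return earliest

service_a = { 'abcd' : 1690000000, 'feed' : None }
service_b = { 'abcd' : 1680000000, 'feed' : 1695000000 }

# 'abcd' keeps the earlier record; 'feed' upgrades None to a known time
print( merge_earliest_timestamps( [ service_a, service_b ] ) )
```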

### running from source stuff

* **the setup_venv script has slightly different Qt questions this week, so if you have your own custom script that types the letters in for you, double-check what it is going to do before updating this week!**
* there's a new version of PySide6, 6.6.0. the `(t)est` Qt version in the 'setup_venv' now points to this. it seems fine to me on a fairly normal Win 11 machine, but if recent history is any guide, there's going to be a niggle somewhere. if you have been waiting for a fix on the menu position issue or anything else, give it a go! if things go well, I'll roll this into a larger 'future' test release and then we'll integrate it into main
* also, since Qt is the most test-heavy library we have, the 'setup_venv' scripts for all platforms now have an option to `(w)rite` your own version in!
* the program no longer needs `distutils`, and thus should now be compatible (or less incompatible, let's see, ha ha) with python 3.12. thanks for the user report and assistance here
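
for context, python 3.12 removes `distutils` entirely (PEP 632), so any lingering import has to move to a stdlib equivalent. I am not asserting which specific calls were swapped here; these are the two classic shapes of the migration, shown as a hedged illustration:

```python
# hedged illustration of the usual distutils -> stdlib swaps for python 3.12

import shutil

# old: from distutils.spawn import find_executable; find_executable( 'ffmpeg' )
ffmpeg_path = shutil.which( 'ffmpeg' )  # None if not on PATH

# old: from distutils.version import LooseVersion; LooseVersion( a ) < LooseVersion( b )
# for simple dotted-numeric versions, a tuple compare does the job
def version_tuple( version_string: str ):
    
    return tuple( int( part ) for part in version_string.split( '.' ) )

print( version_tuple( '6.6.0' ) > version_tuple( '6.5.2' ) )
```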

### boring stuff

* rejiggered a couple of maintenance flows to spend less time aggressively chilling out doing nothing
* the hydrus timedelta widget can now handle milliseconds
* misc code cleaning
* fixed a typo in the thumbnail 'select->local/not local' descriptions/tooltips

## [Version 548](https://github.com/hydrusnetwork/hydrus/releases/tag/v548)

### user contributions

@@ -349,50 +390,3 @@ title: Changelog

* the `file_metadata` call now has two new fields, `filetype_human`, which looks like 'jpeg' or 'webm', and `filetype_enum`, which uses internal hydrus filetype numbers
* the help and unit tests are updated for this
* client api version is now 50
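
the two new fields ride along with each file's metadata object. the payload here is a trimmed, made-up sketch, not a verbatim Client API response, and the enum values are placeholders; only the field names come from this changelog:

```python
# hedged illustration of reading the new filetype fields from a
# file_metadata-style response dict

def summarise_filetypes( file_metadata_response ):
    
    return { m[ 'file_id' ] : ( m[ 'filetype_human' ], m[ 'filetype_enum' ] ) for m in file_metadata_response[ 'metadata' ] }

example_response = {
    'metadata' : [
        { 'file_id' : 123, 'filetype_human' : 'jpeg', 'filetype_enum' : 1 },
        { 'file_id' : 456, 'filetype_human' : 'webm', 'filetype_enum' : 21 }
    ]
}

print( summarise_filetypes( example_response ) )
```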

## [Version 539](https://github.com/hydrusnetwork/hydrus/releases/tag/v539)

### another new library today

* **if you run from source, I recommend you rebuild your venv today. we've got another library to add full PSD support**

### viewable PSD files

* thanks to a user, hydrus can now view PSD files in the main viewer, just like any other file. it can be a bit slow to load, and you can't show/hide layers or anything, but for simply showing the image as-is, it works great
* because this needs more CPU than normal images, we're starting out with conservative view settings. while PSD files will show as normal in the media viewer, they'll only show the 'open externally' button in the preview viewer. see how it works for you, and remember you can change it under _options->media_. if there isn't any real problem IRL with showing even big PSDs everywhere, I'll change the defaults, so let's see how it goes
* and of course if you have a borked PSD--say one that shows up in all the wrong colours, or has bad transparency--please send it in and I'll have a look. there is apparently a rare class of PSD files that simply won't render at all with our new system, and hydrus is pretty bad at handling that situation, so that'd also be useful, if only to get me to write some better error handling code
* just like last week, if you run from source, please rebuild your venv again today--there's a PSD-handling library that supports all this
* all PSD files will be scheduled for a thumbnail regen on update

### static vs animated gif

* the program now treats static and animated gifs as separate file types for the purposes of searching and selecting display options (previously, static gifs were just animated gifs with no duration). most people won't have many static gifs, so this doesn't matter too much, but it cleans up our image/animation filetype group distinction and makes a bunch of behind the scenes stuff simpler. all your gifs will be set to either camp on update
* if you have an existing file import options or system:filetype that looked for gifs specifically, it will now search for 'animated gifs' only, so watch out if you need 'static gif' too/instead

### parseable rating predicates

* the system predicate parser (including the Client API) now accepts system:rating predicates. type 'system:has rating (service name)' or 'system:rating for (service name) > 4/5' or other reasonable variants and it should pre-fill
* in the UI, the system 'has/no rating' predicate strings are now in the format 'has a rating for (service name)' and 'does not have a rating for (service name)'. (previously it was 'has/no (service name) rating', which is out of step with our usual syntax and generally unhelpfully parsing-ambiguous)
* added a bunch of unit tests for this
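
since the Client API runs these strings through the same parser, you can pass rating predicates to a file search as ordinary tags. this sketch just builds the search URL rather than sending it; the port is the Client API default, and the rating service name 'my stars' is made up:

```python
# builds (but does not send) a Client API file search using rating
# predicates as plain system tags

import json
from urllib.parse import urlencode

tags = [ 'system:has rating my stars', 'system:rating for my stars > 4/5' ]

# the Client API takes the tag list as a JSON-encoded URL parameter
query = urlencode( { 'tags' : json.dumps( tags ) } )

url = 'http://127.0.0.1:45869/get_files/search_files?' + query

print( url )
```

you would still need your access key header on the real request, of course.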

### misc

* fixed the 'network timeout' setting under _options->connection_, which was not saving changes
* the media viewer top hover window now enables/disables the 'show file metadata' button--rather than shows/hides--in order to stop the buttons on the left jumping around so much when you scroll through media
* the duplicate filter's always-on-top right-hand window is fixed in place again. the buttons won't jump around any more. if the notes hover grows to overlap it, it won't show over it as long as your mouse is over the duplicate hover. this should make clicking those duplicate buttons on note-heavy files far less frustrating. sorry for the late fix here!
* the duplicate filter now always presents a statement on the pair's filetypes, even if they are the same (it'll say like 'both are pngs'). this is to help catch the upcoming PSD matches (where you probably do not want to delete either) and other weirdness as we add new filetypes
* just a small thing, but the 'management panel' labels are renamed broadly to 'sidebar' under the _pages_ menu. the panel on the left of pages is now called 'sidebar', and the wx-era 'sash' wording is gone
* there's an issue with the file metadata reparse system right now where, on a filetype change, it will often fail to cleanly rename bigger files (e.g. from x.mkv to x.webm). the result is the file copies and the old one is never deleted, leaving a duplicate that is not properly cleared up later. on update this week, I am scheduling a fresh cleanup for these dupes. if, like me, you have a lot of large AV1-encoded vidya capture mkvs, you may have noticed your hydrus folders suddenly bloated in size this past week--this should be 99% fixed soon. I will fix the underlying issue here next week
* touched up the 'running from source' help and 'setup_venv' texts in general and specifically regarding some new version info stuff. it looks like macOS <= 10.15 can't handle last week's new Qt6 version, and some versions of Linux need `libicu-dev` and `libxcb-cursor-dev` installed via apt or otherwise
* fixed the file query sort-and-clip method when you are set to sort by file hash and also have system:limit, and fixed it for asc/desc too
* for the second time, fixed the 'import QtSVG' error on hydrus server install when the client requirements.txt had not been run. turns out I messed up the 'proper' fix I did for the first time

### boring cleanup

* refactored a bunch of HydrusImageHandling code to HydrusAnimationHandling
* cleaned up several of our enums and enum testing, and cleared out several hardcoded hooks to deal with different kinds of gif
* did some similar enum cleaning and `gif`->`PILAnimation` renaming to encompass the new HEIF sequences
* streamlined the image and animation metadata parsing methods significantly
* a bunch of simple `image`->`animation` renames, like IMAGE_APNG is now ANIMATION_APNG
* cleaned up some other confusing code handles for 'image' vs 'static image', to handle whether we are talking about strictly images or viewable raster image-likes (for now including PSD files) but I think it'll need more work
* deleted some ancient and no longer used imageboard profile code

@@ -34,6 +34,40 @@

<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_549"><a href="#version_549">version 549</a></h2>
<ul>
<li><h3>misc</h3></li>
<li>optimised taglist sorting code, which is really groaning when it gets to 50k+ unique tags. the counting is more efficient now, but more work can be done</li>
<li>optimised taglist internal update recalc by updating existing items in place instead of remove/replace and skipping cleanup-sort when no new items are added and/or the sort is not count-based. it should also preserve selection and focus stuff a bit better now</li>
<li>thanks to a user, we have some new url classes to handle the recent change in sankaku URL format. your sank subscriptions are likely to go slightly crazy for a week as they figure out where they are caught up to, sorry!</li>
<li>if a file copy into your temp directory fails due to 'Errno 28 No space left on device', the client now pops up more information about this specific problem, which is often a symptom of trying to import a 4GB file into a ramdisk-hosted temp directory and similar. relatedly, many Linux flavours have special rules about the max filesize in the temp directory!</li>
<li><h3>maintenance and processing</h3></li>
<li>_advanced users only, don't worry about it too much_</li>
<li>the _options->maintenance and processing_ page has several advanced new settings. these are all separately hardcoded systems that I have merged into more of the same logic this week. the UI is a tower of spam, but it will serve us well when we want to test and fine-tune clients that are having various sorts of maintenance trouble</li>
<li>a new section for potential duplicate search now duplicates the 'do search in idle time' setting you see in the duplicates page and has new 'work packet time' and 'rest time percentage' settings</li>
<li>a new section for repository processing now exposes the similar 'work/rest' timings for 'normal', 'idle', and 'very idle' (after an hour of idle mode). **if I have been working with you on freezes or memory explosions during PTR processing, increase the rest percentages here to 50-2,000, and let's see if that gives your client time to breathe and clean up old work**</li>
<li>a new section for sibling/parent sync does the same, for 'idle', 'normal', and 'work hard' modes. **same deal here, probably**</li>
<li>a new section for the deferred database table delete system does the same, for 'idle', 'normal', and 'work hard' modes</li>
<li>I duplicated the 'do sibling/parent sync in idle/normal time' _tags_ menu settings to this options page. they are synced, so altering one updates the other</li>
<li>if you change the 'run file maintenance jobs in idle/normal time' settings in the dialog, the _database_ menu now properly updates to reflect any changes</li>
<li>the way these various systems calculate their rest time now smoothes out extreme bumps. sibling/parent display, in particular, should wait for a good amount of time after a big bump, but won't allow itself to wait for a crazy amount of time</li>
<li><h3>all deleted files</h3></li>
<li>fixed the various 'clear deletion record' commands to also remove from the 'all deleted files' service cache, which stores all your deleted files for all known specific file services and is used for various search tech on deleted file domains</li>
<li>also wrote a command to completely regen this cache from original deletion records. it can be fired under _database->regenerate->all deleted files_. this will happen on update, to fix the above retroactively</li>
<li>removed the foolish 'deleted from all deleted files' typo-entry from the advanced multiple file domain selector list. the value and use of a deletion record from a virtual combined deletion record is a complicated idea, and the entities that lurk in the shadows of the inverse sphere would strongly prefer that we not consider the matter any more</li>
<li><h3>running from source stuff</h3></li>
<li>**the setup_venv script has slightly different Qt questions this week, so if you have your own custom script that types the letters in for you, double-check what it is going to do before updating this week!**</li>
<li>there's a new version of PySide6, 6.6.0. the `(t)est` Qt version in the 'setup_venv' now points to this. it seems fine to me on a fairly normal Win 11 machine, but if recent history is any guide, there's going to be a niggle somewhere. if you have been waiting for a fix on the menu position issue or anything else, give it a go! if things go well, I'll roll this into a larger 'future' test release and then we'll integrate it into main</li>
<li>also, since Qt is the most test-heavy library we have, the 'setup_venv' scripts for all platforms now have an option to `(w)rite` your own version in!</li>
<li>the program no longer needs `distutils`, and thus should now be compatible (or less incompatible, let's see, ha ha) with python 3.12. thanks for the user report and assistance here</li>
<li><h3>boring stuff</h3></li>
<li>rejiggered a couple of maintenance flows to spend less time aggressively chilling out doing nothing</li>
<li>the hydrus timedelta widget can now handle milliseconds</li>
<li>misc code cleaning</li>
<li>fixed a typo in the thumbnail 'select->local/not local' descriptions/tooltips</li>
</ul>
</li>
<li>
<h2 id="version_548"><a href="#version_548">version 548</a></h2>
<ul>

@@ -2122,13 +2122,6 @@ class Controller( HydrusController.HydrusController ):

                 service.SyncProcessUpdates( maintenance_mode = HC.MAINTENANCE_IDLE )
                 
-                if HydrusThreading.IsThreadShuttingDown():
-                    
-                    return
-                    
-                
-                time.sleep( 1 )
-                

@@ -57,7 +57,7 @@ class DatabaseMaintenanceManager( object ):

             return True
             
         
-    def _GetWaitPeriod( self, still_work_to_do: bool ):
+    def _GetWaitPeriod( self, work_period: float, time_it_took: float, still_work_to_do: bool ):
         
         if not still_work_to_do:

@@ -66,33 +66,35 @@ class DatabaseMaintenanceManager( object ):

         if self._is_working_hard:
             
-            return 0.5
+            rest_ratio = HG.client_controller.new_options.GetInteger( 'deferred_table_delete_rest_percentage_work_hard' ) / 100
             
-        if HG.client_controller.CurrentlyIdle():
+        elif HG.client_controller.CurrentlyIdle():
             
-            return 2.0
+            rest_ratio = HG.client_controller.new_options.GetInteger( 'deferred_table_delete_rest_percentage_idle' ) / 100
             
         else:
             
-            return 2.5
+            rest_ratio = HG.client_controller.new_options.GetInteger( 'deferred_table_delete_rest_percentage_normal' ) / 100
             
         
+        reasonable_work_time = min( 5 * work_period, time_it_took )
+        
+        return reasonable_work_time * rest_ratio
         
     
     def _GetWorkPeriod( self ):
         
         if self._is_working_hard:
             
-            return 5.0
+            return HG.client_controller.new_options.GetInteger( 'deferred_table_delete_work_time_ms_work_hard' ) / 1000
             
-        if HG.client_controller.CurrentlyIdle():
+        elif HG.client_controller.CurrentlyIdle():
             
-            return 20.0
+            return HG.client_controller.new_options.GetInteger( 'deferred_table_delete_work_time_ms_idle' ) / 1000
             
         else:
             
-            return 0.25
+            return HG.client_controller.new_options.GetInteger( 'deferred_table_delete_work_time_ms_normal' ) / 1000
             

@@ -131,10 +133,14 @@ class DatabaseMaintenanceManager( object ):

         work_period = self._GetWorkPeriod()
         
+        time_it_took = 1.0
+        
         if able_to_work:
             
             time_to_stop = HydrusTime.GetNowFloat() + work_period
             
+            start_time = HydrusTime.GetNowFloat()
+            
             try:
                 
                 still_work_to_do = HG.client_controller.WriteSynchronous( 'do_deferred_table_delete_work', time_to_stop )

@@ -156,10 +162,12 @@ class DatabaseMaintenanceManager( object ):

                 self._controller.pub( 'notify_deferred_delete_database_maintenance_work_complete' )
                 
+            time_it_took = HydrusTime.GetNowFloat() - start_time
+            
         with self._lock:
             
-            wait_period = self._GetWaitPeriod( still_work_to_do )
+            wait_period = self._GetWaitPeriod( work_period, time_it_took, still_work_to_do )
             
         self._wake_event.wait( wait_period )

@@ -739,7 +739,11 @@ class DuplicatesManager( object ):

                 start_time = HydrusTime.GetNowPrecise()
                 
-                ( still_work_to_do, num_done ) = HG.client_controller.WriteSynchronous( 'maintain_similar_files_search_for_potential_duplicates', search_distance, maintenance_mode = HC.MAINTENANCE_FORCED, job_key = job_key, work_time_float = 0.5 )
+                work_time_ms = HG.client_controller.new_options.GetInteger( 'potential_duplicates_search_work_time_ms' )
+                
+                work_time = work_time_ms / 1000
+                
+                ( still_work_to_do, num_done ) = HG.client_controller.WriteSynchronous( 'maintain_similar_files_search_for_potential_duplicates', search_distance, maintenance_mode = HC.MAINTENANCE_FORCED, job_key = job_key, work_time_float = work_time )
                 
                 time_it_took = HydrusTime.GetNowPrecise() - start_time

@@ -773,7 +777,11 @@ class DuplicatesManager( object ):

                     break
                     
                 
-                time.sleep( min( 5, time_it_took ) ) # ideally 0.5s, but potentially longer
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'potential_duplicates_search_rest_percentage' ) / 100
+                
+                reasonable_work_time = min( 5 * work_time, time_it_took )
+                
+                time.sleep( reasonable_work_time * rest_ratio )
                 
             job_key.Delete()

@@ -521,6 +521,36 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

         self._dictionary[ 'integers' ][ 'ms_to_wait_between_physical_file_deletes' ] = 250
         
+        self._dictionary[ 'integers' ][ 'potential_duplicates_search_work_time_ms' ] = 500
+        self._dictionary[ 'integers' ][ 'potential_duplicates_search_rest_percentage' ] = 100
+        
+        self._dictionary[ 'integers' ][ 'repository_processing_work_time_ms_very_idle' ] = 30000
+        self._dictionary[ 'integers' ][ 'repository_processing_rest_percentage_very_idle' ] = 3
+        
+        self._dictionary[ 'integers' ][ 'repository_processing_work_time_ms_idle' ] = 10000
+        self._dictionary[ 'integers' ][ 'repository_processing_rest_percentage_idle' ] = 5
+        
+        self._dictionary[ 'integers' ][ 'repository_processing_work_time_ms_normal' ] = 500
+        self._dictionary[ 'integers' ][ 'repository_processing_rest_percentage_normal' ] = 10
+        
+        self._dictionary[ 'integers' ][ 'tag_display_processing_work_time_ms_idle' ] = 15000
+        self._dictionary[ 'integers' ][ 'tag_display_processing_rest_percentage_idle' ] = 3
+        
+        self._dictionary[ 'integers' ][ 'tag_display_processing_work_time_ms_normal' ] = 100
+        self._dictionary[ 'integers' ][ 'tag_display_processing_rest_percentage_normal' ] = 9900
+        
+        self._dictionary[ 'integers' ][ 'tag_display_processing_work_time_ms_work_hard' ] = 5000
+        self._dictionary[ 'integers' ][ 'tag_display_processing_rest_percentage_work_hard' ] = 5
+        
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_work_time_ms_idle' ] = 20000
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_rest_percentage_idle' ] = 10
+        
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_work_time_ms_normal' ] = 250
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_rest_percentage_normal' ] = 1000
+        
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_work_time_ms_work_hard' ] = 5000
+        self._dictionary[ 'integers' ][ 'deferred_table_delete_rest_percentage_work_hard' ] = 10
+        
         #
         
         self._dictionary[ 'keys' ] = {}

@@ -2110,18 +2110,18 @@ class ServiceRepository( ServiceRestricted ):

             if HG.client_controller.CurrentlyVeryIdle():
                 
-                work_time = 30
-                break_percentage = 0.03
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_very_idle' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_very_idle' ) / 100
                 
             elif HG.client_controller.CurrentlyIdle():
                 
-                work_time = 10
-                break_percentage = 0.05
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_idle' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_idle' ) / 100
                 
             else:
                 
-                work_time = 0.5
-                break_percentage = 0.1
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_normal' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_normal' ) / 100
                 
             start_time = HydrusTime.GetNowPrecise()

@@ -2147,7 +2147,9 @@ class ServiceRepository( ServiceRestricted ):

                     return
                     
                 
-                time.sleep( break_percentage * time_it_took )
+                reasonable_work_time = min( 5 * work_time, time_it_took )
+                
+                time.sleep( reasonable_work_time * rest_ratio )
                 
                 self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'definitions' )

@@ -2258,18 +2260,18 @@ class ServiceRepository( ServiceRestricted ):

             if HG.client_controller.CurrentlyVeryIdle():
                 
-                work_time = 30
-                break_percentage = 0.03
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_very_idle' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_very_idle' ) / 100
                 
             elif HG.client_controller.CurrentlyIdle():
                 
-                work_time = 10
-                break_percentage = 0.05
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_idle' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_idle' ) / 100
                 
             else:
                 
-                work_time = 0.5
-                break_percentage = 0.1
+                work_time = HG.client_controller.new_options.GetInteger( 'repository_processing_work_time_ms_normal' ) / 1000
+                rest_ratio = HG.client_controller.new_options.GetInteger( 'repository_processing_rest_percentage_normal' ) / 100
                 
             start_time = HydrusTime.GetNowPrecise()

@@ -2295,7 +2297,9 @@ class ServiceRepository( ServiceRestricted ):

                     return
                     
                 
-                time.sleep( break_percentage * time_it_took )
+                reasonable_work_time = min( 5 * work_time, time_it_took )
+                
+                time.sleep( reasonable_work_time * rest_ratio )
                 
                 self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'content rows' )

@@ -1440,8 +1440,6 @@ class DB( HydrusDB.HydrusDB ):

         if service_type in HC.FILE_SERVICES_COVERED_BY_COMBINED_DELETED_FILE:
             
             now = HydrusTime.GetNow()
             
             rows = [ ( hash_id, now ) for hash_id in existing_hash_ids ]
             
             self._AddFiles( self.modules_services.combined_deleted_file_service_id, rows )

@@ -1451,8 +1449,6 @@ class DB( HydrusDB.HydrusDB ):

         if service_id in local_file_service_ids:
             
             hash_ids_still_in_another_service = set()
             
             other_local_file_service_ids = set( local_file_service_ids )
             other_local_file_service_ids.discard( service_id )

@@ -1496,7 +1492,7 @@ class DB( HydrusDB.HydrusDB ):

-        # push the info updates, notify
+        # push the info updates
         
         self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates )

@@ -5576,12 +5572,16 @@ class DB( HydrusDB.HydrusDB ):

             service_ids_to_nums_cleared = self.modules_files_storage.ClearLocalDeleteRecord()
             
+            self._SyncCombinedDeletedFiles()
+            
         else:
             
             hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes )
             
             service_ids_to_nums_cleared = self.modules_files_storage.ClearLocalDeleteRecord( hash_ids )
             
+            self._SyncCombinedDeletedFiles( hash_ids )
+            
         
         self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ( -num_cleared, clear_service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) for ( clear_service_id, num_cleared ) in service_ids_to_nums_cleared.items() ) )

@ -8257,6 +8257,84 @@ class DB( HydrusDB.HydrusDB ):
|
|||
self._SaveOptions( self._controller.options )
|
||||
|
||||
|
||||
def _SyncCombinedDeletedFiles( self, hash_ids = None, do_full_rebuild = False ):
|
||||
|
||||
combined_files_stakeholder_service_ids = self.modules_services.GetServiceIds( HC.FILE_SERVICES_COVERED_BY_COMBINED_DELETED_FILE )
|
||||
|
||||
hash_ids_that_are_desired = set()
|
||||
|
||||
if hash_ids is None:
|
||||
|
||||
for service_id in combined_files_stakeholder_service_ids:
|
||||
|
||||
                hash_ids_that_are_desired.update( self.modules_files_storage.GetDeletedHashIdsList( service_id ) )
                
            
            existing_hash_ids = set( self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.combined_deleted_file_service_id ) )
            
        else:
            
            for service_id in combined_files_stakeholder_service_ids:
                
                hash_ids_that_are_desired.update( self.modules_files_storage.FilterHashIdsToStatus( service_id, hash_ids, HC.CONTENT_STATUS_DELETED ) )
                
            
            existing_hash_ids = self.modules_files_storage.FilterHashIdsToStatus( self.modules_services.combined_deleted_file_service_id, hash_ids, HC.CONTENT_STATUS_CURRENT )
            
        
        if do_full_rebuild:
            
            # this happens in the full 'regenerate' call from the UI database menu. full wipe and recalculation to get any errant timestamps
            
            hash_ids_to_remove = existing_hash_ids
            hash_ids_to_add = hash_ids_that_are_desired
            
        else:
            
            hash_ids_to_remove = existing_hash_ids.difference( hash_ids_that_are_desired )
            hash_ids_to_add = hash_ids_that_are_desired.difference( existing_hash_ids )
            
        
        if len( hash_ids_to_remove ) > 0:
            
            self._DeleteFiles( self.modules_services.combined_deleted_file_service_id, hash_ids_to_remove, only_if_current = True )
            
        
        if len( hash_ids_to_add ) > 0:
            
            hash_ids_to_earliest_timestamps = {}
            
            for service_id in combined_files_stakeholder_service_ids:
                
                hash_ids_to_both_timestamps = self.modules_files_storage.GetDeletedHashIdsToTimestamps( service_id, hash_ids_to_add )
                
                for ( hash_id, ( timestamp, original_timestamp ) ) in hash_ids_to_both_timestamps.items():
                    
                    if hash_id in hash_ids_to_earliest_timestamps:
                        
                        if timestamp is not None:
                            
                            existing_timestamp = hash_ids_to_earliest_timestamps[ hash_id ]
                            
                            if existing_timestamp is None or timestamp < existing_timestamp:
                                
                                hash_ids_to_earliest_timestamps[ hash_id ] = timestamp
                                
                            
                        
                    else:
                        
                        hash_ids_to_earliest_timestamps[ hash_id ] = timestamp
                        
                    
                
            
            rows = list( hash_ids_to_earliest_timestamps.items() )
            
            self._AddFiles( self.modules_services.combined_deleted_file_service_id, rows )
            
        
    
    def _UndeleteFiles( self, service_id, hash_ids ):
        
        if service_id in ( self.modules_services.combined_local_file_service_id, self.modules_services.combined_local_media_service_id, self.modules_services.trash_service_id ):
            
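The merge loop above keeps, for each hash, the earliest known deletion timestamp across all stakeholder services, preferring any real timestamp over None. A minimal standalone sketch of that reduction (the function name is illustrative, not hydrus API):

```python
def merge_earliest_timestamps( per_service_timestamps ):
    
    # per_service_timestamps: iterable of dicts mapping hash_id -> timestamp, where timestamp may be None
    # returns hash_id -> earliest known timestamp; None survives only if no service knew a real time
    
    earliest = {}
    
    for timestamps in per_service_timestamps:
        
        for ( hash_id, timestamp ) in timestamps.items():
            
            if hash_id in earliest:
                
                existing = earliest[ hash_id ]
                
                # a known timestamp beats None, and an earlier one beats a later one
                if timestamp is not None and ( existing is None or timestamp < existing ):
                    
                    earliest[ hash_id ] = timestamp
                    
                
            else:
                
                earliest[ hash_id ] = timestamp
                
            
        
    
    return earliest
```

Mirroring the real code, a None from one service never overwrites a real timestamp from another, which is why the None check happens before the comparison.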
@@ -9993,6 +10071,67 @@ class DB( HydrusDB.HydrusDB ):

        if version == 548:
            
            self._controller.frame_splash_status.SetSubtext( f'fixing some old delete records' )
            
            try:
                
                self._SyncCombinedDeletedFiles( do_full_rebuild = True )
                
            except Exception as e:
                
                HydrusData.PrintException( e )
                
                message = 'Trying to resynchronise the deleted file cache failed! This is not super important, but hydev would be interested in seeing the error that was printed to the log.'
                
                self.pub_initial_message( message )
                
            
            #
            
            try:
                
                domain_manager = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
                
                domain_manager.Initialise()
                
                #
                
                domain_manager.OverwriteDefaultParsers( [
                    'sankaku file page parser'
                ] )
                
                #
                
                domain_manager.OverwriteDefaultURLClasses( [
                    'sankaku chan file page',
                    'sankaku chan file page (old md5 format)',
                    'sankaku chan file page (old id format)'
                ] )
                
                domain_manager.DeleteURLClasses( [
                    'sankaku chan file page (md5)'
                ] )
                
                #
                
                domain_manager.TryToLinkURLClassesAndParsers()
                
                #
                
                self.modules_serialisable.SetJSONDump( domain_manager )
                
            except Exception as e:
                
                HydrusData.PrintException( e )
                
                message = 'Trying to update some downloaders failed! Please let hydrus dev know!'
                
                self.pub_initial_message( message )
                
            
        
        self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
        
        self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@@ -10507,6 +10646,7 @@ class DB( HydrusDB.HydrusDB ):

        elif action == 'process_repository_content': result = self._ProcessRepositoryContent( *args, **kwargs )
        elif action == 'process_repository_definitions': result = self.modules_repositories.ProcessRepositoryDefinitions( *args, **kwargs )
        elif action == 'push_recent_tags': self.modules_recent_tags.PushRecentTags( *args, **kwargs )
        elif action == 'regenerate_combined_deleted_files': self._SyncCombinedDeletedFiles( *args, **kwargs )
        elif action == 'regenerate_local_hash_cache': self._RegenerateLocalHashCache( *args, **kwargs )
        elif action == 'regenerate_local_tag_cache': self._RegenerateLocalTagCache( *args, **kwargs )
        elif action == 'regenerate_similar_files': self.modules_similar_files.RegenerateTree( *args, **kwargs )
@@ -326,6 +326,8 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):

    def AddFiles( self, service_id, insert_rows ):
        
        # just a note, the timestamp in insert_rows can be None
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
@@ -404,11 +406,15 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):

    def ClearLocalDeleteRecord( self, hash_ids = None ):
        
        # Just as a side note, this guy should be accompanied by calls to SyncCombinedDeletedFiles above; this module can't do proper add/delete with mappings and stuff, so just this guy isn't enough
        
        # we delete from everywhere, but not for files currently in the trash
        
        service_ids_to_nums_cleared = {}
        
-       local_non_trash_service_ids = self.modules_services.GetServiceIds( ( HC.COMBINED_LOCAL_FILE, HC.COMBINED_LOCAL_MEDIA, HC.LOCAL_FILE_DOMAIN ) )
+       local_non_trash_service_types = { HC.COMBINED_LOCAL_FILE, HC.COMBINED_LOCAL_MEDIA, HC.LOCAL_FILE_DOMAIN }
+       
+       local_non_trash_service_ids = self.modules_services.GetServiceIds( local_non_trash_service_types )
        
        if hash_ids is None:
            
@@ -892,6 +898,18 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):

        return hash_ids
        
    
    def GetDeletedHashIdsToTimestamps( self, service_id, hash_ids ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            rows = self._Execute( 'SELECT hash_id, timestamp, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall()
            
        
        return { hash_id : ( timestamp, original_timestamp ) for ( hash_id, timestamp, original_timestamp ) in rows }
        
    
    def GetDeletionStatus( self, service_id, hash_id ):
        
        # can have a value here and just be in trash, so we fetch it whatever the end result
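`GetDeletedHashIdsToTimestamps` uses hydrus's usual temporary-integer-table trick: the candidate hash_ids go into a temp table that is CROSS JOINed against the deleted-files table, which avoids building giant `IN (...)` clauses and silently drops ids that have no delete record. A self-contained sketch of the pattern against an in-memory SQLite database (the table layout mirrors the snippet; the setup wiring is illustrative):

```python
import sqlite3

con = sqlite3.connect( ':memory:' )

# a stand-in for one service's deleted-files table
con.execute( 'CREATE TABLE deleted_files ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );' )
con.executemany( 'INSERT INTO deleted_files VALUES ( ?, ?, ? );', [ ( 1, 100, 90 ), ( 2, None, 80 ), ( 3, 300, 250 ) ] )

hash_ids = [ 1, 2, 999 ] # 999 has no delete record, so it simply drops out of the join

# the temporary integer table stands in for _MakeTemporaryIntegerTable
con.execute( 'CREATE TEMPORARY TABLE temp_hash_ids ( hash_id INTEGER PRIMARY KEY );' )
con.executemany( 'INSERT INTO temp_hash_ids VALUES ( ? );', ( ( h, ) for h in hash_ids ) )

rows = con.execute( 'SELECT hash_id, timestamp, original_timestamp FROM temp_hash_ids CROSS JOIN deleted_files USING ( hash_id );' ).fetchall()

result = { hash_id : ( timestamp, original_timestamp ) for ( hash_id, timestamp, original_timestamp ) in rows }
```

CROSS JOIN here pins SQLite's join order so the (small) temp table drives the lookup into the (large) deleted-files table.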
@@ -2206,12 +2206,13 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        self._menubar.setNativeMenuBar( False )
        
+       self._menu_updater_database = self._InitialiseMenubarGetMenuUpdaterDatabase()
        self._menu_updater_file = self._InitialiseMenubarGetMenuUpdaterFile()
-       self._menu_updater_database = self._InitialiseMenubarGetMenuUpdaterDatabase()
        self._menu_updater_network = self._InitialiseMenubarGetMenuUpdaterNetwork()
        self._menu_updater_pages = self._InitialiseMenubarGetMenuUpdaterPages()
        self._menu_updater_pending = self._InitialiseMenubarGetMenuUpdaterPending()
        self._menu_updater_services = self._InitialiseMenubarGetMenuUpdaterServices()
        self._menu_updater_tags = self._InitialiseMenubarGetMenuUpdaterTags()
        self._menu_updater_undo = self._InitialiseMenubarGetMenuUpdaterUndo()
        
        self._menu_updater_pages_count = ClientGUIAsync.FastThreadToGUIUpdater( self, self._UpdateMenuPagesCount )
@@ -2285,6 +2286,8 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

            self.ReplaceMenu( name, menu, label )
            
            self._menu_updater_tags.update()
            
        elif name == 'undo':
            
            ( self._menubar_undo_submenu, label ) = self._InitialiseMenuInfoUndo()
@@ -2394,6 +2397,9 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

            self._menubar_database_multiple_location_label.setVisible( not all_locations_are_default )
            
            self._menubar_database_file_maintenance_during_idle.setChecked( HG.client_controller.new_options.GetBoolean( 'file_maintenance_during_idle' ) )
            self._menubar_database_file_maintenance_during_active.setChecked( HG.client_controller.new_options.GetBoolean( 'file_maintenance_during_active' ) )
            
        
        return ClientGUIAsync.AsyncQtUpdater( self, loading_callable, work_callable, publish_callable )
@@ -2924,6 +2930,27 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        return ClientGUIAsync.AsyncQtUpdater( self, loading_callable, work_callable, publish_callable )
        
    
    def _InitialiseMenubarGetMenuUpdaterTags( self ):
        
        def loading_callable():
            
            pass
            
        
        def work_callable( args ):
            
            return 1
            
        
        def publish_callable( result ):
            
            self._menubar_tags_tag_display_maintenance_during_idle.setChecked( HG.client_controller.new_options.GetBoolean( 'tag_display_maintenance_during_idle' ) )
            self._menubar_tags_tag_display_maintenance_during_active.setChecked( HG.client_controller.new_options.GetBoolean( 'tag_display_maintenance_during_active' ) )
            
        
        return ClientGUIAsync.AsyncQtUpdater( self, loading_callable, work_callable, publish_callable )
        
    
    def _InitialiseMenubarGetMenuUpdaterUndo( self ):
        
        def loading_callable():
            
@@ -3038,14 +3065,14 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        current_value = check_manager.GetCurrentValue()
        func = check_manager.Invert
        
-       ClientGUIMenus.AppendMenuCheckItem( file_maintenance_menu, 'work file jobs during idle time', 'Control whether file maintenance can work during idle time.', current_value, func )
+       self._menubar_database_file_maintenance_during_idle = ClientGUIMenus.AppendMenuCheckItem( file_maintenance_menu, 'work file jobs during idle time', 'Control whether file maintenance can work during idle time.', current_value, func )
        
        check_manager = ClientGUICommon.CheckboxManagerOptions( 'file_maintenance_during_active' )
        
        current_value = check_manager.GetCurrentValue()
        func = check_manager.Invert
        
-       ClientGUIMenus.AppendMenuCheckItem( file_maintenance_menu, 'work file jobs during normal time', 'Control whether file maintenance can work during normal time.', current_value, func )
+       self._menubar_database_file_maintenance_during_active = ClientGUIMenus.AppendMenuCheckItem( file_maintenance_menu, 'work file jobs during normal time', 'Control whether file maintenance can work during normal time.', current_value, func )
        
        ClientGUIMenus.AppendSeparator( file_maintenance_menu )
@@ -3118,6 +3145,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        ClientGUIMenus.AppendSeparator( regen_submenu )
        
        ClientGUIMenus.AppendMenuItem( regen_submenu, 'all deleted files', 'Resynchronise the store of all known deleted files.', self._RegenerateCombinedDeletedFiles )
        ClientGUIMenus.AppendMenuItem( regen_submenu, 'local hash cache', 'Repopulate the cache hydrus uses for fast hash lookup for local files.', self._RegenerateLocalHashCache )
        ClientGUIMenus.AppendMenuItem( regen_submenu, 'local tag cache', 'Repopulate the cache hydrus uses for fast tag lookup for local files.', self._RegenerateLocalTagCache )
@@ -3676,14 +3704,14 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        current_value = check_manager.GetCurrentValue()
        func = check_manager.Invert
        
-       ClientGUIMenus.AppendMenuCheckItem( tag_display_maintenance_menu, 'sync tag display during idle time', 'Control whether tag display processing can work during idle time.', current_value, func )
+       self._menubar_tags_tag_display_maintenance_during_idle = ClientGUIMenus.AppendMenuCheckItem( tag_display_maintenance_menu, 'sync tag display during idle time', 'Control whether tag display processing can work during idle time.', current_value, func )
        
        check_manager = ClientGUICommon.CheckboxManagerOptions( 'tag_display_maintenance_during_active' )
        
        current_value = check_manager.GetCurrentValue()
        func = check_manager.Invert
        
-       ClientGUIMenus.AppendMenuCheckItem( tag_display_maintenance_menu, 'sync tag display during normal time', 'Control whether tag display processing can work during normal time.', current_value, func )
+       self._menubar_tags_tag_display_maintenance_during_active = ClientGUIMenus.AppendMenuCheckItem( tag_display_maintenance_menu, 'sync tag display during normal time', 'Control whether tag display processing can work during normal time.', current_value, func )
        
        ClientGUIMenus.AppendMenu( menu, tag_display_maintenance_menu, 'sibling/parent sync' )
@@ -5081,6 +5109,22 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

        self._statusbar.SetStatusText( db_status, 5, tooltip = db_tooltip )
        
    
    def _RegenerateCombinedDeletedFiles( self ):
        
        message = 'This will resynchronise the "all deleted files" cache to the actual records in the database, ensuring that various tag searches over the deleted files domain give correct counts and file results. It isn\'t super important, but this routine fixes it if it is desynchronised.'
        message += os.linesep * 2
        message += 'It should not take all that long, but if you have a lot of deleted files, it can take a little while, during which the gui may hang.'
        message += os.linesep * 2
        message += 'If you do not have a specific reason to run this, it is pointless.'
        
        result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = 'do it', no_label = 'forget it' )
        
        if result == QW.QDialog.Accepted:
            
            self._controller.Write( 'regenerate_combined_deleted_files', do_full_rebuild = True )
            
        
    
    def _RegenerateTagCache( self ):
        
        message = 'This will delete and then recreate the fast search cache for one or all tag services.'
@@ -7563,6 +7607,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

        self._menu_updater_database.update()
        self._menu_updater_services.update()
        self._menu_updater_tags.update()
        
    
    def NotifyNewPages( self ):
        
@@ -1812,13 +1812,11 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._new_options = HG.client_controller.new_options
        
        self._jobs_panel = ClientGUICommon.StaticBox( self, 'when to run high cpu jobs' )
-       self._file_maintenance_panel = ClientGUICommon.StaticBox( self, 'file maintenance' )
-       
-       self._idle_panel = ClientGUICommon.StaticBox( self._jobs_panel, 'idle' )
-       self._shutdown_panel = ClientGUICommon.StaticBox( self._jobs_panel, 'shutdown' )
        
        #
        
+       self._idle_panel = ClientGUICommon.StaticBox( self._jobs_panel, 'idle' )
+       
        self._idle_normal = QW.QCheckBox( self._idle_panel )
        self._idle_normal.clicked.connect( self._EnableDisableIdleNormal )
@@ -1830,6 +1828,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        #
        
        self._shutdown_panel = ClientGUICommon.StaticBox( self._jobs_panel, 'shutdown' )
        
        self._idle_shutdown = ClientGUICommon.BetterChoice( self._shutdown_panel )
        
        for idle_id in ( CC.IDLE_NOT_ON_SHUTDOWN, CC.IDLE_ON_SHUTDOWN, CC.IDLE_ON_SHUTDOWN_ASK_FIRST ):
            
@@ -1844,6 +1844,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        #
        
        self._file_maintenance_panel = ClientGUICommon.StaticBox( self, 'file maintenance' )
        
        min_unit_value = 1
        max_unit_value = 1000
        min_time_delta = 1
@@ -1865,6 +1867,109 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        #
        
        self._repository_processing_panel = ClientGUICommon.StaticBox( self, 'repository processing' )
        
        self._repository_processing_work_time_very_idle = ClientGUITime.TimeDeltaCtrl( self._repository_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. Very Idle is after an hour of idle mode.'
        self._repository_processing_work_time_very_idle.setToolTip( tt )
        
        self._repository_processing_rest_percentage_very_idle = ClientGUICommon.BetterSpinBox( self._repository_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. Very Idle is after an hour of idle mode.'
        self._repository_processing_rest_percentage_very_idle.setToolTip( tt )
        
        self._repository_processing_work_time_idle = ClientGUITime.TimeDeltaCtrl( self._repository_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for idle mode.'
        self._repository_processing_work_time_idle.setToolTip( tt )
        
        self._repository_processing_rest_percentage_idle = ClientGUICommon.BetterSpinBox( self._repository_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for idle mode.'
        self._repository_processing_rest_percentage_idle.setToolTip( tt )
        
        self._repository_processing_work_time_normal = ClientGUITime.TimeDeltaCtrl( self._repository_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for when you force-start work from review services.'
        self._repository_processing_work_time_normal.setToolTip( tt )
        
        self._repository_processing_rest_percentage_normal = ClientGUICommon.BetterSpinBox( self._repository_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Repository processing operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for when you force-start work from review services.'
        self._repository_processing_rest_percentage_normal.setToolTip( tt )
        
        #
        
        self._tag_display_processing_panel = ClientGUICommon.StaticBox( self, 'sibling/parent sync processing' )
        
        self._tag_display_maintenance_during_idle = QW.QCheckBox( self._tag_display_processing_panel )
        self._tag_display_maintenance_during_active = QW.QCheckBox( self._tag_display_processing_panel )
        tt = 'This can be a real killer. If you are catching up with the PTR and notice a lot of lag bumps, sometimes several seconds long, try turning this off.'
        self._tag_display_maintenance_during_active.setToolTip( tt )
        
        self._tag_display_processing_work_time_idle = ClientGUITime.TimeDeltaCtrl( self._tag_display_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for idle mode.'
        self._tag_display_processing_work_time_idle.setToolTip( tt )
        
        self._tag_display_processing_rest_percentage_idle = ClientGUICommon.BetterSpinBox( self._tag_display_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for idle mode.'
        self._tag_display_processing_rest_percentage_idle.setToolTip( tt )
        
        self._tag_display_processing_work_time_normal = ClientGUITime.TimeDeltaCtrl( self._tag_display_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for when you force-start work from review services.'
        self._tag_display_processing_work_time_normal.setToolTip( tt )
        
        self._tag_display_processing_rest_percentage_normal = ClientGUICommon.BetterSpinBox( self._tag_display_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for when you force-start work from review services.'
        self._tag_display_processing_rest_percentage_normal.setToolTip( tt )
        
        self._tag_display_processing_work_time_work_hard = ClientGUITime.TimeDeltaCtrl( self._tag_display_processing_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for when you force it to work hard through the dialog.'
        self._tag_display_processing_work_time_work_hard.setToolTip( tt )
        
        self._tag_display_processing_rest_percentage_work_hard = ClientGUICommon.BetterSpinBox( self._tag_display_processing_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Sibling/parent sync operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for when you force it to work hard through the dialog.'
        self._tag_display_processing_rest_percentage_work_hard.setToolTip( tt )
        
        #
        
        self._duplicates_panel = ClientGUICommon.StaticBox( self, 'potential duplicates search' )
        
        self._maintain_similar_files_duplicate_pairs_during_idle = QW.QCheckBox( self._duplicates_panel )
        
        self._potential_duplicates_search_work_time = ClientGUITime.TimeDeltaCtrl( self._duplicates_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Potential search operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this, and on large databases the minimum work time may be upwards of several seconds.'
        self._potential_duplicates_search_work_time.setToolTip( tt )
        
        self._potential_duplicates_search_rest_percentage = ClientGUICommon.BetterSpinBox( self._duplicates_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Potential search operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, as a percentage of the last work time.'
        self._potential_duplicates_search_rest_percentage.setToolTip( tt )
        
        #
        
        self._deferred_table_delete_panel = ClientGUICommon.StaticBox( self, 'deferred table delete' )
        
        self._deferred_table_delete_work_time_idle = ClientGUITime.TimeDeltaCtrl( self._deferred_table_delete_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for idle mode.'
        self._deferred_table_delete_work_time_idle.setToolTip( tt )
        
        self._deferred_table_delete_rest_percentage_idle = ClientGUICommon.BetterSpinBox( self._deferred_table_delete_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for idle mode.'
        self._deferred_table_delete_rest_percentage_idle.setToolTip( tt )
        
        self._deferred_table_delete_work_time_normal = ClientGUITime.TimeDeltaCtrl( self._deferred_table_delete_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for when you force-start work from review services.'
        self._deferred_table_delete_work_time_normal.setToolTip( tt )
        
        self._deferred_table_delete_rest_percentage_normal = ClientGUICommon.BetterSpinBox( self._deferred_table_delete_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for when you force-start work from review services.'
        self._deferred_table_delete_rest_percentage_normal.setToolTip( tt )
        
        self._deferred_table_delete_work_time_work_hard = ClientGUITime.TimeDeltaCtrl( self._deferred_table_delete_panel, min = 0.1, seconds = True, milliseconds = True )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should work for in each work packet. Actual work time will normally be a little larger than this. This is for when you force it to work hard through the dialog.'
        self._deferred_table_delete_work_time_work_hard.setToolTip( tt )
        
        self._deferred_table_delete_rest_percentage_work_hard = ClientGUICommon.BetterSpinBox( self._deferred_table_delete_panel, min = 0, max = 100000 )
        tt = 'DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING. Deferred table delete operates on a work-rest cycle. This setting determines how long it should wait before starting a new work packet, in multiples of the last work time. This is for when you force it to work hard through the dialog.'
        self._deferred_table_delete_rest_percentage_work_hard.setToolTip( tt )
        
        #
        
        self._idle_normal.setChecked( HC.options[ 'idle_normal' ] )
        self._idle_period.SetValue( HC.options[ 'idle_period' ] )
        self._idle_mouse_period.SetValue( HC.options[ 'idle_mouse_period' ] )
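Every 'work packet time' / 'rest percentage' pair above drives the same throttle: after each packet, the job rests for the last packet's actual duration scaled by the percentage. A sketch of the arithmetic (the real scheduler layers more conditions on top; the function name is illustrative):

```python
def rest_time_seconds( actual_work_time_seconds, rest_percentage ):
    
    # rest_percentage is in percent of the last work time, so 100 rests exactly
    # as long as it worked and 2,000 rests twenty times as long as it worked
    return actual_work_time_seconds * ( rest_percentage / 100 )
```

This is why raising the rest percentages to 50-2,000, as the changelog suggests for struggling clients, stretches the gaps between PTR work packets and gives the client time to clean up.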
@@ -1894,6 +1999,40 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

        self._file_maintenance_active_throttle_velocity.SetValue( file_maintenance_active_throttle_velocity )
        
        self._repository_processing_work_time_very_idle.SetValue( self._new_options.GetInteger( 'repository_processing_work_time_ms_very_idle' ) / 1000 )
        self._repository_processing_rest_percentage_very_idle.setValue( self._new_options.GetInteger( 'repository_processing_rest_percentage_very_idle' ) )
        
        self._repository_processing_work_time_idle.SetValue( self._new_options.GetInteger( 'repository_processing_work_time_ms_idle' ) / 1000 )
        self._repository_processing_rest_percentage_idle.setValue( self._new_options.GetInteger( 'repository_processing_rest_percentage_idle' ) )
        
        self._repository_processing_work_time_normal.SetValue( self._new_options.GetInteger( 'repository_processing_work_time_ms_normal' ) / 1000 )
        self._repository_processing_rest_percentage_normal.setValue( self._new_options.GetInteger( 'repository_processing_rest_percentage_normal' ) )
        
        self._tag_display_maintenance_during_idle.setChecked( self._new_options.GetBoolean( 'tag_display_maintenance_during_idle' ) )
        self._tag_display_maintenance_during_active.setChecked( self._new_options.GetBoolean( 'tag_display_maintenance_during_active' ) )
        
        self._tag_display_processing_work_time_idle.SetValue( self._new_options.GetInteger( 'tag_display_processing_work_time_ms_idle' ) / 1000 )
        self._tag_display_processing_rest_percentage_idle.setValue( self._new_options.GetInteger( 'tag_display_processing_rest_percentage_idle' ) )
        
        self._tag_display_processing_work_time_normal.SetValue( self._new_options.GetInteger( 'tag_display_processing_work_time_ms_normal' ) / 1000 )
        self._tag_display_processing_rest_percentage_normal.setValue( self._new_options.GetInteger( 'tag_display_processing_rest_percentage_normal' ) )
        
        self._tag_display_processing_work_time_work_hard.SetValue( self._new_options.GetInteger( 'tag_display_processing_work_time_ms_work_hard' ) / 1000 )
        self._tag_display_processing_rest_percentage_work_hard.setValue( self._new_options.GetInteger( 'tag_display_processing_rest_percentage_work_hard' ) )
        
        self._maintain_similar_files_duplicate_pairs_during_idle.setChecked( self._new_options.GetBoolean( 'maintain_similar_files_duplicate_pairs_during_idle' ) )
        self._potential_duplicates_search_work_time.SetValue( self._new_options.GetInteger( 'potential_duplicates_search_work_time_ms' ) / 1000 )
        self._potential_duplicates_search_rest_percentage.setValue( self._new_options.GetInteger( 'potential_duplicates_search_rest_percentage' ) )
        
        self._deferred_table_delete_work_time_idle.SetValue( self._new_options.GetInteger( 'deferred_table_delete_work_time_ms_idle' ) / 1000 )
        self._deferred_table_delete_rest_percentage_idle.setValue( self._new_options.GetInteger( 'deferred_table_delete_rest_percentage_idle' ) )
        
        self._deferred_table_delete_work_time_normal.SetValue( self._new_options.GetInteger( 'deferred_table_delete_work_time_ms_normal' ) / 1000 )
        self._deferred_table_delete_rest_percentage_normal.setValue( self._new_options.GetInteger( 'deferred_table_delete_rest_percentage_normal' ) )
        
        self._deferred_table_delete_work_time_work_hard.SetValue( self._new_options.GetInteger( 'deferred_table_delete_work_time_ms_work_hard' ) / 1000 )
        self._deferred_table_delete_rest_percentage_work_hard.setValue( self._new_options.GetInteger( 'deferred_table_delete_rest_percentage_work_hard' ) )
        
        #
        
        rows = []
@ -1975,10 +2114,89 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
#

message = 'Repository processing takes a lot of CPU and works best when it can rip for long periods in idle time.'

self._repository_processing_panel.Add( ClientGUICommon.BetterStaticText( self._repository_processing_panel, label = message ), CC.FLAGS_EXPAND_PERPENDICULAR )

rows = []

rows.append( ( '"Very idle" ideal work packet time: ', self._repository_processing_work_time_very_idle ) )
rows.append( ( '"Very idle" rest time percentage: ', self._repository_processing_rest_percentage_very_idle ) )
rows.append( ( '"Idle" ideal work packet time: ', self._repository_processing_work_time_idle ) )
rows.append( ( '"Idle" rest time percentage: ', self._repository_processing_rest_percentage_idle ) )
rows.append( ( '"Normal" ideal work packet time: ', self._repository_processing_work_time_normal ) )
rows.append( ( '"Normal" rest time percentage: ', self._repository_processing_rest_percentage_normal ) )

gridbox = ClientGUICommon.WrapInGrid( self._repository_processing_panel, rows )

self._repository_processing_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )

#

message = 'The database compiles sibling and parent implication calculations in the background. This can use a LOT of CPU in big bumps.'

self._tag_display_processing_panel.Add( ClientGUICommon.BetterStaticText( self._tag_display_processing_panel, label = message ), CC.FLAGS_EXPAND_PERPENDICULAR )

rows = []

rows.append( ( 'Do work in "idle" time: ', self._tag_display_maintenance_during_idle ) )
rows.append( ( '"Idle" ideal work packet time: ', self._tag_display_processing_work_time_idle ) )
rows.append( ( '"Idle" rest time percentage: ', self._tag_display_processing_rest_percentage_idle ) )
rows.append( ( 'Do work in "normal" time: ', self._tag_display_maintenance_during_active ) )
rows.append( ( '"Normal" ideal work packet time: ', self._tag_display_processing_work_time_normal ) )
rows.append( ( '"Normal" rest time percentage: ', self._tag_display_processing_rest_percentage_normal ) )
rows.append( ( '"Work hard" ideal work packet time: ', self._tag_display_processing_work_time_work_hard ) )
rows.append( ( '"Work hard" rest time percentage: ', self._tag_display_processing_rest_percentage_work_hard ) )

gridbox = ClientGUICommon.WrapInGrid( self._tag_display_processing_panel, rows )

self._tag_display_processing_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )

#

message = 'The search for potential duplicate file pairs (as on the duplicates page) can keep up to date automatically in idle time and shutdown.'

self._duplicates_panel.Add( ClientGUICommon.BetterStaticText( self._duplicates_panel, label = message ), CC.FLAGS_EXPAND_PERPENDICULAR )

rows = []

rows.append( ( 'Search for potential duplicates in idle time/shutdown: ', self._maintain_similar_files_duplicate_pairs_during_idle ) )
rows.append( ( '"Idle" ideal work packet time: ', self._potential_duplicates_search_work_time ) )
rows.append( ( '"Idle" rest time percentage: ', self._potential_duplicates_search_rest_percentage ) )

gridbox = ClientGUICommon.WrapInGrid( self._duplicates_panel, rows )

self._duplicates_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )

#

message = 'The database deletes old data in the background.'

self._deferred_table_delete_panel.Add( ClientGUICommon.BetterStaticText( self._deferred_table_delete_panel, label = message ), CC.FLAGS_EXPAND_PERPENDICULAR )

rows = []

rows.append( ( '"Idle" ideal work packet time: ', self._deferred_table_delete_work_time_idle ) )
rows.append( ( '"Idle" rest time percentage: ', self._deferred_table_delete_rest_percentage_idle ) )
rows.append( ( '"Normal" ideal work packet time: ', self._deferred_table_delete_work_time_normal ) )
rows.append( ( '"Normal" rest time percentage: ', self._deferred_table_delete_rest_percentage_normal ) )
rows.append( ( '"Work hard" ideal work packet time: ', self._deferred_table_delete_work_time_work_hard ) )
rows.append( ( '"Work hard" rest time percentage: ', self._deferred_table_delete_rest_percentage_work_hard ) )

gridbox = ClientGUICommon.WrapInGrid( self._deferred_table_delete_panel, rows )

self._deferred_table_delete_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )

#

vbox = QP.VBoxLayout()

QP.AddToLayout( vbox, self._jobs_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._file_maintenance_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._repository_processing_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._tag_display_processing_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._duplicates_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._deferred_table_delete_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
vbox.addStretch( 1 )

self.setLayout( vbox )
@ -2050,6 +2268,40 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._new_options.SetInteger( 'file_maintenance_active_throttle_files', file_maintenance_active_throttle_files )
self._new_options.SetInteger( 'file_maintenance_active_throttle_time_delta', file_maintenance_active_throttle_time_delta )

self._new_options.SetInteger( 'repository_processing_work_time_ms_very_idle', int( self._repository_processing_work_time_very_idle.GetValue() * 1000 ) )
self._new_options.SetInteger( 'repository_processing_rest_percentage_very_idle', self._repository_processing_rest_percentage_very_idle.value() )

self._new_options.SetInteger( 'repository_processing_work_time_ms_idle', int( self._repository_processing_work_time_idle.GetValue() * 1000 ) )
self._new_options.SetInteger( 'repository_processing_rest_percentage_idle', self._repository_processing_rest_percentage_idle.value() )

self._new_options.SetInteger( 'repository_processing_work_time_ms_normal', int( self._repository_processing_work_time_normal.GetValue() * 1000 ) )
self._new_options.SetInteger( 'repository_processing_rest_percentage_normal', self._repository_processing_rest_percentage_normal.value() )

self._new_options.SetBoolean( 'tag_display_maintenance_during_idle', self._tag_display_maintenance_during_idle.isChecked() )
self._new_options.SetBoolean( 'tag_display_maintenance_during_active', self._tag_display_maintenance_during_active.isChecked() )

self._new_options.SetInteger( 'tag_display_processing_work_time_ms_idle', int( self._tag_display_processing_work_time_idle.GetValue() * 1000 ) )
self._new_options.SetInteger( 'tag_display_processing_rest_percentage_idle', self._tag_display_processing_rest_percentage_idle.value() )

self._new_options.SetInteger( 'tag_display_processing_work_time_ms_normal', int( self._tag_display_processing_work_time_normal.GetValue() * 1000 ) )
self._new_options.SetInteger( 'tag_display_processing_rest_percentage_normal', self._tag_display_processing_rest_percentage_normal.value() )

self._new_options.SetInteger( 'tag_display_processing_work_time_ms_work_hard', int( self._tag_display_processing_work_time_work_hard.GetValue() * 1000 ) )
self._new_options.SetInteger( 'tag_display_processing_rest_percentage_work_hard', self._tag_display_processing_rest_percentage_work_hard.value() )

self._new_options.SetBoolean( 'maintain_similar_files_duplicate_pairs_during_idle', self._maintain_similar_files_duplicate_pairs_during_idle.isChecked() )
self._new_options.SetInteger( 'potential_duplicates_search_work_time_ms', int( self._potential_duplicates_search_work_time.GetValue() * 1000 ) )
self._new_options.SetInteger( 'potential_duplicates_search_rest_percentage', self._potential_duplicates_search_rest_percentage.value() )

self._new_options.SetInteger( 'deferred_table_delete_work_time_ms_idle', int( self._deferred_table_delete_work_time_idle.GetValue() * 1000 ) )
self._new_options.SetInteger( 'deferred_table_delete_rest_percentage_idle', self._deferred_table_delete_rest_percentage_idle.value() )

self._new_options.SetInteger( 'deferred_table_delete_work_time_ms_normal', int( self._deferred_table_delete_work_time_normal.GetValue() * 1000 ) )
self._new_options.SetInteger( 'deferred_table_delete_rest_percentage_normal', self._deferred_table_delete_rest_percentage_normal.value() )

self._new_options.SetInteger( 'deferred_table_delete_work_time_ms_work_hard', int( self._deferred_table_delete_work_time_work_hard.GetValue() * 1000 ) )
self._new_options.SetInteger( 'deferred_table_delete_rest_percentage_work_hard', self._deferred_table_delete_rest_percentage_work_hard.value() )


class _MediaPanel( QW.QWidget ):
@ -698,7 +698,7 @@ class TimeDeltaCtrl( QW.QWidget ):
timeDeltaChanged = QC.Signal()

-def __init__( self, parent, min = 1, days = False, hours = False, minutes = False, seconds = False, monthly_allowed = False, monthly_label = 'monthly' ):
+def __init__( self, parent, min = 1, days = False, hours = False, minutes = False, seconds = False, milliseconds = False, monthly_allowed = False, monthly_label = 'monthly' ):

QW.QWidget.__init__( self, parent )
@ -707,6 +707,7 @@ class TimeDeltaCtrl( QW.QWidget ):
self._show_hours = hours
self._show_minutes = minutes
self._show_seconds = seconds
self._show_milliseconds = milliseconds
self._monthly_allowed = monthly_allowed

hbox = QP.HBoxLayout( margin = 0 )
@ -747,6 +748,15 @@ class TimeDeltaCtrl( QW.QWidget ):
QP.AddToLayout( hbox, ClientGUICommon.BetterStaticText(self,'seconds'), CC.FLAGS_CENTER_PERPENDICULAR )

if self._show_milliseconds:

    self._milliseconds = ClientGUICommon.BetterSpinBox( self, min=0, max=999, width = 65 )
    self._milliseconds.valueChanged.connect( self.EventChange )

    QP.AddToLayout( hbox, self._milliseconds, CC.FLAGS_CENTER_PERPENDICULAR )
    QP.AddToLayout( hbox, ClientGUICommon.BetterStaticText(self,'ms'), CC.FLAGS_CENTER_PERPENDICULAR )

if self._monthly_allowed:

    self._monthly = QW.QCheckBox( self )
@ -763,49 +773,29 @@ class TimeDeltaCtrl( QW.QWidget ):
value = self.GetValue()

-if value is None:
-    if self._show_days:
-        self._days.setEnabled( False )
-    if self._show_hours:
-        self._hours.setEnabled( False )
-    if self._show_minutes:
-        self._minutes.setEnabled( False )
-    if self._show_seconds:
-        self._seconds.setEnabled( False )
-else:
-    if self._show_days:
-        self._days.setEnabled( True )
-    if self._show_hours:
-        self._hours.setEnabled( True )
-    if self._show_minutes:
-        self._minutes.setEnabled( True )
-    if self._show_seconds:
-        self._seconds.setEnabled( True )
+if self._show_days:
+    self._days.setEnabled( value is not None )
+if self._show_hours:
+    self._hours.setEnabled( value is not None )
+if self._show_minutes:
+    self._minutes.setEnabled( value is not None )
+if self._show_seconds:
+    self._seconds.setEnabled( value is not None )
+if self._show_milliseconds:
+    self._milliseconds.setEnabled( value is not None )
@ -852,6 +842,11 @@ class TimeDeltaCtrl( QW.QWidget ):
value += self._seconds.value()

+if self._show_milliseconds:
+    value += self._milliseconds.value() / 1000

return value
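With the new milliseconds spinbox, the control's value round-trips as fractional seconds: `GetValue` folds the ms spinbox into the float, and `SetValue` splits it back out. A standalone sketch of just the conversion, as plain functions rather than the Qt control itself:

```python
def compose_seconds( seconds, milliseconds ):
    
    # GetValue(): milliseconds ride along as the fractional part
    return seconds + milliseconds / 1000

def split_seconds( value ):
    
    # SetValue(): whole seconds go to the seconds spinbox, the remainder to the ms one
    whole_seconds = int( value )
    ms = int( ( value % 1 ) * 1000 )
    
    return ( whole_seconds, ms )
```

This is why the work-time options are stored as `..._work_time_ms` integers but divided by 1000 when loaded into the control.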
@ -899,7 +894,14 @@ class TimeDeltaCtrl( QW.QWidget ):
if self._show_seconds:

-    self._seconds.setValue( value )
+    self._seconds.setValue( int( value ) )
+    value %= 1

+if self._show_milliseconds and value > 0:
+    self._milliseconds.setValue( int( value * 1000 ) )
@ -3889,12 +3889,12 @@ class ListBoxTagsMedia( ListBoxTagsDisplayCapable ):
def _UpdateTerms( self, limit_to_these_tags = None ):

-# TODO: optimisation here--instead of remove/add, edit the terms _in place_ with new counts, removing now empty, and then just resort
+previous_selected_terms = set( self._selected_terms )

if limit_to_these_tags is None:

self._Clear()

nonzero_tags = set()

if self._show_current: nonzero_tags.update( ( tag for ( tag, count ) in self._current_tags_to_count.items() if count > 0 ) )
@ -3902,17 +3902,20 @@ class ListBoxTagsMedia( ListBoxTagsDisplayCapable ):
if self._show_pending: nonzero_tags.update( ( tag for ( tag, count ) in self._pending_tags_to_count.items() if count > 0 ) )
if self._show_petitioned: nonzero_tags.update( ( tag for ( tag, count ) in self._petitioned_tags_to_count.items() if count > 0 ) )

zero_tags = { tag for tag in ( term.GetTag() for term in self._ordered_terms ) if tag not in nonzero_tags }

else:

if len( limit_to_these_tags ) == 0:

return

if not isinstance( limit_to_these_tags, set ):

limit_to_these_tags = set( limit_to_these_tags )

clear_terms = [ self._GenerateTermFromTag( tag ) for tag in limit_to_these_tags ]

self._RemoveTerms( clear_terms )

nonzero_tags = set()

if self._show_current: nonzero_tags.update( ( tag for ( tag, count ) in self._current_tags_to_count.items() if count > 0 and tag in limit_to_these_tags ) )
@ -3920,10 +3923,59 @@ class ListBoxTagsMedia( ListBoxTagsDisplayCapable ):
if self._show_pending: nonzero_tags.update( ( tag for ( tag, count ) in self._pending_tags_to_count.items() if count > 0 and tag in limit_to_these_tags ) )
if self._show_petitioned: nonzero_tags.update( ( tag for ( tag, count ) in self._petitioned_tags_to_count.items() if count > 0 and tag in limit_to_these_tags ) )

zero_tags = { tag for tag in limit_to_these_tags if tag not in nonzero_tags }

if len( zero_tags ) + len( nonzero_tags ) == 0:

return

removee_terms = [ self._GenerateTermFromTag( tag ) for tag in zero_tags ]

nonzero_terms = [ self._GenerateTermFromTag( tag ) for tag in nonzero_tags ]

-self._AppendTerms( nonzero_terms )

if len( removee_terms ) > len( self._ordered_terms ) / 2:

self._Clear()

altered_terms = []
new_terms = nonzero_terms

else:

if len( removee_terms ) > 0:

self._RemoveTerms( removee_terms )

exists_tuple = [ ( term in self._terms_to_logical_indices, term ) for term in nonzero_terms ]

altered_terms = [ term for ( exists, term ) in exists_tuple if exists ]

new_terms = [ term for ( exists, term ) in exists_tuple if not exists ]

if len( altered_terms ) > 0:

for term in altered_terms:

actual_term = self._positional_indices_to_terms[ self._terms_to_positional_indices[ term ] ]

actual_term.UpdateFromOtherTerm( term )

sort_needed = self._tag_sort.AffectedByCount()

if len( new_terms ) > 0:

# TODO: if not sort_needed at this stage, it would be ideal to call an auto-sorting '_InsertTerms', if that is reasonably doable

self._AppendTerms( new_terms )

sort_needed = True

for term in previous_selected_terms:
@ -3933,11 +3985,17 @@ class ListBoxTagsMedia( ListBoxTagsDisplayCapable ):
-self._Sort()
+if sort_needed:
+    self._Sort()

def _Sort( self ):

# TODO: hey, rejigger this to cleverly and neatly not need to count tags if the sort doesn't care about counts at all!
# probably means converting .SortTags later into cleaner subcalls and then calling them directly here or something

# I do this weird terms to count instead of tags to count because of tag vs ideal tag gubbins later on in sort

terms_to_count = collections.Counter()
@ -3949,15 +4007,34 @@ class ListBoxTagsMedia( ListBoxTagsDisplayCapable ):
( self._show_petitioned, self._petitioned_tags_to_count )
]

-counts_to_include = [ c for ( show, c ) in jobs if show ]
+counts_to_include = [ c for ( show, c ) in jobs if show and len( c ) > 0 ]

-for term in self._ordered_terms:
-    tag = term.GetTag()
-    count = sum( ( c[ tag ] for c in counts_to_include if tag in c ) )
-    terms_to_count[ term ] = count
+# this is a CPU sensitive area, so let's compress and hardcode the faster branches
+if len( counts_to_include ) == 1:
+    
+    ( c, ) = counts_to_include
+    
+    terms_to_count = collections.Counter(
+        { term : c[ term.GetTag() ] for term in self._ordered_terms }
+    )
+    
+elif len( counts_to_include ) == 2:
+    
+    ( c1, c2 ) = counts_to_include
+    
+    tt_iter = ( ( term, term.GetTag() ) for term in self._ordered_terms )
+    
+    terms_to_count = collections.Counter(
+        { term : c1[ tag ] + c2[ tag ] for ( term, tag ) in tt_iter }
+    )
+    
+else:
+    
+    tt_iter = ( ( term, term.GetTag() ) for term in self._ordered_terms )
+    
+    terms_to_count = collections.Counter(
+        { term : sum( ( c[ tag ] for c in counts_to_include ) ) for ( term, tag ) in tt_iter }
+    )

item_to_tag_key_wrapper = lambda term: term.GetTag()
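The hardcoded branches above replace a per-term Python loop with one dict comprehension per case, which is the taglist sorting optimisation the changelog mentions. A standalone sketch of the same counting idea, using plain tag strings instead of hydrus term objects (the tag names are made up for illustration):

```python
import collections

current = collections.Counter( { 'samus aran': 3, 'metroid': 5 } )
pending = collections.Counter( { 'samus aran': 1 } )

# mirror the 'skip empty counters' filter
counts_to_include = [ c for c in ( current, pending ) if len( c ) > 0 ]

tags = [ 'samus aran', 'metroid', 'zero suit' ]

if len( counts_to_include ) == 1:
    
    ( c, ) = counts_to_include
    
    # Counter lookup on a missing key returns 0, so no 'if tag in c' test is needed
    tags_to_count = collections.Counter( { tag : c[ tag ] for tag in tags } )
    
elif len( counts_to_include ) == 2:
    
    # the two-counter fast path: one addition per tag, no generator overhead
    ( c1, c2 ) = counts_to_include
    
    tags_to_count = collections.Counter( { tag : c1[ tag ] + c2[ tag ] for tag in tags } )
    
else:
    
    tags_to_count = collections.Counter( { tag : sum( c[ tag ] for c in counts_to_include ) for tag in tags } )
```

At 50k+ unique tags, moving the summing work from an interpreted loop into comprehension internals is where the saving comes from.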
@ -70,6 +70,11 @@ class ListBoxItem( object ):
raise NotImplementedError()

def UpdateFromOtherTerm( self, term: "ListBoxItem" ):

pass

class ListBoxItemTagSlice( ListBoxItem ):

def __init__( self, tag_slice: str ):
@ -451,6 +456,15 @@ class ListBoxItemTextTagWithCounts( ListBoxItemTextTag ):
return rows_of_texts_with_namespaces

def UpdateFromOtherTerm( self, term: "ListBoxItemTextTagWithCounts" ):

self._current_count = term._current_count
self._deleted_count = term._deleted_count
self._pending_count = term._pending_count
self._petitioned_count = term._petitioned_count
self._include_actual_counts = term._include_actual_counts

class ListBoxItemPredicate( ListBoxItem ):

def __init__( self, predicate: ClientSearch.Predicate ):
@ -475,7 +475,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
check_manager = ClientGUICommon.CheckboxManagerOptions( 'maintain_similar_files_duplicate_pairs_during_idle' )

-menu_items.append( ( 'check', 'search for duplicate pairs at the current distance during normal db maintenance', 'Tell the client to find duplicate pairs in its normal db maintenance cycles, whether you have that set to idle or shutdown time.', check_manager ) )
+menu_items.append( ( 'check', 'search for potential duplicates at the current distance during idle time/shutdown', 'Tell the client to find duplicate pairs in its normal db maintenance cycles, whether you have that set to idle or shutdown time.', check_manager ) )

self._cog_button = ClientGUIMenuButton.MenuBitmapButton( self._main_left_panel, CC.global_pixmaps().cog, menu_items )
@ -4783,6 +4783,7 @@ def AddRemoveMenu( win: MediaPanel, menu, filter_counts, all_specific_file_domai
ClientGUIMenus.AppendMenu( menu, remove_menu, 'remove' )


def AddSelectMenu( win: MediaPanel, menu, filter_counts, all_specific_file_domains, has_local_and_remote ):

file_filter_all = ClientMediaFileFilter.FileFilter( ClientMediaFileFilter.FILE_FILTER_ALL )
@ -4842,8 +4843,8 @@ def AddSelectMenu( win: MediaPanel, menu, filter_counts, all_specific_file_domai
ClientGUIMenus.AppendSeparator( select_menu )

-ClientGUIMenus.AppendMenuItem( select_menu, file_filter_local.ToStringWithCount( win, filter_counts ), 'Remove all the files that are in this client.', win._Select, file_filter_local )
-ClientGUIMenus.AppendMenuItem( select_menu, file_filter_remote.ToStringWithCount( win, filter_counts ), 'Remove all the files that are not in this client.', win._Select, file_filter_remote )
+ClientGUIMenus.AppendMenuItem( select_menu, file_filter_local.ToStringWithCount( win, filter_counts ), 'Select all the files that are in this client.', win._Select, file_filter_local )
+ClientGUIMenus.AppendMenuItem( select_menu, file_filter_remote.ToStringWithCount( win, filter_counts ), 'Select all the files that are not in this client.', win._Select, file_filter_remote )

file_filter_selected = ClientMediaFileFilter.FileFilter( ClientMediaFileFilter.FILE_FILTER_SELECTED )
@ -47,7 +47,7 @@ class EditMultipleLocationContextPanel( ClientGUIScrolledPanels.EditPanel ):
name = service.GetName()
service_key = service.GetServiceKey()

-if service_key in ( CC.COMBINED_FILE_SERVICE_KEY, CC.TRASH_SERVICE_KEY ):
+if service.GetServiceType() in HC.FILE_SERVICES_WITH_NO_DELETE_RECORD:

continue
@ -62,6 +62,11 @@ class TagSort( HydrusSerialisable.SerialisableBase ):
( self.sort_type, self.sort_order, self.use_siblings, self.group_by ) = serialisable_info

def AffectedByCount( self ):

return self.sort_type == SORT_BY_COUNT

def ToString( self ):

return '{} {}{}'.format(
@ -373,7 +373,9 @@ class TagDisplayMaintenanceManager( object ):
self._controller.sub( self, 'NotifyNewDisplayData', 'notify_new_tag_display_application' )

-def _GetAfterWorkWaitTime( self, service_key, expected_work_time, actual_work_time ):
+def _GetRestTime( self, service_key, expected_work_time, actual_work_time ):

rest_ratio = None

with self._lock:
@ -384,28 +386,37 @@ class TagDisplayMaintenanceManager( object ):
self._go_faster.discard( service_key )

return 0.1
rest_ratio = HG.client_controller.new_options.GetInteger( 'tag_display_processing_rest_percentage_work_hard' ) / 100

if self._controller.CurrentlyIdle():
if rest_ratio is None:

return 0.5

else:

if actual_work_time > expected_work_time * 10:
if self._controller.CurrentlyIdle():

# if suddenly a job blats the user for ten seconds or _ten minutes_ during normal time, we are going to take a big break
work_rest_ratio = 30
rest_ratio = HG.client_controller.new_options.GetInteger( 'tag_display_processing_rest_percentage_idle' ) / 100

else:

work_rest_ratio = 9
rest_ratio = HG.client_controller.new_options.GetInteger( 'tag_display_processing_rest_percentage_normal' ) / 100

if actual_work_time > expected_work_time * 10:

# if suddenly a job blats the user for ten seconds or _ten minutes_ during normal time, we are going to take a big break
rest_ratio *= 5

return max( actual_work_time, expected_work_time ) * work_rest_ratio

if actual_work_time > expected_work_time * 10:

# if suddenly a job blats the user for ten seconds or _ten minutes_ during normal time, we are going to take a big break
rest_ratio *= 30

reasonable_work_time = min( 5 * expected_work_time, actual_work_time )

return reasonable_work_time * rest_ratio

def _GetServiceKeyToWorkOn( self ):
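The new rest logic boils down to: rest for (capped actual work time) × (user-set rest percentage), with a big multiplier when a job massively overran its packet. A standalone sketch of that shape (the function name and example percentage are illustrative, not hydrus's exact API):

```python
def get_rest_time( expected_work_time, actual_work_time, rest_percentage ):
    
    # the user-facing 'rest time percentage' option, e.g. 50 means rest half as long as you worked
    rest_ratio = rest_percentage / 100
    
    if actual_work_time > expected_work_time * 10:
        
        # a job that blatted the user for way longer than asked earns a much bigger break
        rest_ratio *= 30
    
    # cap the work time used in the calculation so one huge job cannot demand an hours-long rest
    reasonable_work_time = min( 5 * expected_work_time, actual_work_time )
    
    return reasonable_work_time * rest_ratio
```

This is what the changelog's advice about raising rest percentages to 50-2,000 plugs into: a bigger percentage means proportionally longer breathing room between work packets.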
@ -433,25 +444,24 @@ class TagDisplayMaintenanceManager( object ):
with self._lock:

# TODO: reinstitute accelerating time here with a new object
# ok, this previously did a thing where it remembered the last last work time and accelerated the work time up to 30s
# we should convert all this work/rest ratio thing, which we use all over the place, into an object we get a context around or something similar and unify it all
# and then we can do clever stuff like accelerating code in one location!

if service_key in self._go_faster:

-ideally = 30

-base = max( 0.5, self._last_loop_work_time )

-accelerating_time = min( base * 1.2, ideally )

-return accelerating_time
+return HG.client_controller.new_options.GetInteger( 'tag_display_processing_work_time_ms_work_hard' ) / 1000

if self._controller.CurrentlyIdle():

-return 15
+return HG.client_controller.new_options.GetInteger( 'tag_display_processing_work_time_ms_idle' ) / 1000

else:

-return 0.1
+return HG.client_controller.new_options.GetInteger( 'tag_display_processing_work_time_ms_normal' ) / 1000
@ -589,7 +599,7 @@ class TagDisplayMaintenanceManager( object ):
self._service_keys_to_needs_work[ service_key ] = still_needs_work

-wait_time = self._GetAfterWorkWaitTime( service_key, work_time, total_time_took )
+wait_time = self._GetRestTime( service_key, work_time, total_time_took )

self._last_loop_work_time = work_time
@ -103,7 +103,7 @@ options = {}
# Misc

NETWORK_VERSION = 20
-SOFTWARE_VERSION = 548
+SOFTWARE_VERSION = 549
CLIENT_API_VERSION = 54

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@ -469,9 +469,6 @@ ADDREMOVABLE_SERVICES = ( LOCAL_TAG, LOCAL_FILE_DOMAIN, LOCAL_RATING_LIKE, LOCAL
MUST_HAVE_AT_LEAST_ONE_SERVICES = ( LOCAL_TAG, LOCAL_FILE_DOMAIN )
MUST_BE_EMPTY_OF_FILES_SERVICES = ( LOCAL_FILE_DOMAIN, )

-FILE_SERVICES_WITH_NO_DELETE_RECORD = ( LOCAL_FILE_TRASH_DOMAIN, COMBINED_DELETED_FILE )
-FILE_SERVICES_WITH_DELETE_RECORD = tuple( ( t for t in FILE_SERVICES if t not in FILE_SERVICES_WITH_NO_DELETE_RECORD ) )

FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES = SPECIFIC_LOCAL_FILE_SERVICES + ( COMBINED_LOCAL_FILE, COMBINED_LOCAL_MEDIA, COMBINED_DELETED_FILE ) + REMOTE_FILE_SERVICES
FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES = ( COMBINED_LOCAL_FILE, COMBINED_DELETED_FILE, FILE_REPOSITORY, IPFS )
@ -483,6 +480,9 @@ ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG,
ALL_TAG_SERVICES = REAL_TAG_SERVICES + ( COMBINED_TAG, )
ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, )

+FILE_SERVICES_WITH_NO_DELETE_RECORD = ( COMBINED_FILE, LOCAL_FILE_TRASH_DOMAIN, COMBINED_DELETED_FILE )
+FILE_SERVICES_WITH_DELETE_RECORD = tuple( ( t for t in FILE_SERVICES if t not in FILE_SERVICES_WITH_NO_DELETE_RECORD ) )

SERVICES_WITH_THUMBNAILS = [ FILE_REPOSITORY, LOCAL_FILE_DOMAIN ]

SERVICE_TYPES_TO_CONTENT_TYPES = {
@ -1,5 +1,4 @@
import collections
-import distutils.version
import os
import queue
import sqlite3
@ -99,7 +98,7 @@ def VacuumDB( db_path ):
c = db.cursor()

-fast_big_transaction_wal = not distutils.version.LooseVersion( sqlite3.sqlite_version ) < distutils.version.LooseVersion( '3.11.0' )
+fast_big_transaction_wal = not sqlite3.sqlite_version_info < ( 3, 11, 0 )

if HG.db_journal_mode == 'WAL' and not fast_big_transaction_wal:
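The deprecated `distutils.version.LooseVersion` comparison is swapped for `sqlite3.sqlite_version_info`, which is already a tuple of ints; Python compares tuples element-wise, left to right, so no parsing library is needed:

```python
import sqlite3

# tuples compare element-wise, so ( 3, 39, 2 ) is correctly 'newer' than ( 3, 11, 0 ),
# where a naive string comparison of '3.39.2' vs '3.11.0' would also happen to work
# but '3.9.0' vs '3.11.0' would not
fast_big_transaction_wal = not sqlite3.sqlite_version_info < ( 3, 11, 0 )
```

This is the usual pattern for dropping `distutils` (removed in Python 3.12) when the library already exposes a version tuple.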
@ -17,7 +17,6 @@ from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusThreading
from hydrus.core import HydrusTime

mimes_to_default_thumbnail_paths = collections.defaultdict( lambda: os.path.join( HC.STATIC_DIR, 'hydrus.png' ) )
@ -553,8 +552,6 @@ def FileModifiedTimeIsOk( mtime: int ):
def safe_copy2( source, dest ):

-copy_metadata = True

mtime = os.path.getmtime( source )

if FileModifiedTimeIsOk( mtime ):
@ -567,6 +564,7 @@ def safe_copy2( source, dest ):
shutil.copy( source, dest )


def MergeFile( source, dest ):

# this can merge a file, but if it is given a dir it will just straight up overwrite not merge
@ -702,6 +700,24 @@ def MirrorFile( source, dest ):
HydrusData.ShowText( 'Trying to copy ' + source + ' to ' + dest + ' caused the following problem:' )

from hydrus.core import HydrusTemp

if isinstance( e, OSError ) and 'Errno 28' in str( e ) and dest.startswith( HydrusTemp.GetCurrentTempDir() ):

message = 'It looks like I failed to copy a file into your temporary folder because I ran out of disk space!'
message += '\n' * 2
message += 'This folder is on your system drive, so either free up space on that or use the "--temp_dir" launch command to tell hydrus to use a different location for the temporary folder. (Check the advanced help for more info!)'
message += '\n' * 2
message += 'If your system drive appears to have space but your temp folder still maxed out, then there are probably special rules about how big a file we are allowed to put in there. Use --temp_dir.'

if HC.PLATFORM_LINUX:

message += ' You are also on Linux, where these temp dir rules are not uncommon!'

HydrusData.ShowText( message )

HydrusData.ShowException( e )

return False
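The condition above string-matches 'Errno 28' and checks the destination is inside the temp dir before showing the friendlier message. A standalone sketch of the same test, using the `errno.ENOSPC` constant rather than the string match (the function name and paths are made up for illustration):

```python
import errno

def copy_failed_for_disk_space( e, dest, temp_dir ):
    
    # Errno 28 is ENOSPC, 'No space left on device'; only special-case it when the
    # failed copy was headed into the temporary folder, e.g. a ramdisk-hosted tempdir
    return isinstance( e, OSError ) and e.errno == errno.ENOSPC and dest.startswith( temp_dir )
```

Matching on `e.errno` is the more robust form; the production code matches the string because `ShowException` paths already work with stringified errors.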
|
@ -76,17 +76,34 @@ ECHO:
|
|||
ECHO Qt - User Interface
|
||||
ECHO:
|
||||
ECHO Most people want "6".
|
||||
ECHO If you are on Windows ^<=8.1, choose "5".
|
||||
ECHO If you have multi-monitor menu position bugs with the normal Qt6, try "o" on Python ^<=3.10 or "m" on Python ^>=3.11.
|
||||
SET /P qt="Do you want Qt(5), Qt(6), Qt6 (o)lder, Qt6 (m)iddle or (t)est? "
|
||||
ECHO If you are on Windows ^<=8.1, choose "5". If you want a specific version, choose "a".
|
||||
SET /P qt="Do you want Qt(5), Qt(6), or (a)dvanced? "
|
||||
|
||||
IF "%qt%" == "5" goto :question_mpv
|
||||
IF "%qt%" == "6" goto :question_mpv
|
||||
IF "%qt%" == "a" goto :question_qt_advanced
|
||||
goto :parse_fail
|
||||
|
||||
:question_qt_advanced
|
||||
|
||||
ECHO:
|
||||
ECHO If you have multi-monitor menu position bugs with the normal Qt6, try "o" on Python ^<=3.10 or "m" on Python ^>=3.11.
|
||||
SET /P qt="Do you want Qt6 (o)lder, Qt6 (m)iddle, Qt6 (t)est, or (w)rite your own? "
|
||||
|
||||
IF "%qt%" == "o" goto :question_mpv
|
||||
IF "%qt%" == "m" goto :question_mpv
|
||||
IF "%qt%" == "t" goto :question_mpv
|
||||
IF "%qt%" == "w" goto :question_qt_custom
|
||||
goto :parse_fail
|
||||
|
||||
:question_qt_custom
|
||||
|
||||
ECHO:
|
||||
SET /P qt_custom_pyside6="Enter the exact PySide6 version you want, e.g. '6.6.0': "
|
||||
SET /P qt_custom_qtpy="Enter the exact qtpy version you want (probably '2.4.1'): "
|
||||
|
||||
goto :question_mpv
|
||||
|
||||
:question_mpv
|
||||
|
||||
ECHO --------
|
||||
|
@ -176,6 +193,33 @@ IF "%install_type%" == "d" (
IF "%install_type%" == "a" (

    IF "%qt%" == "w" (

        python -m pip install QtPy==%qt_custom_qtpy%

        IF ERRORLEVEL 1 (

            SET /P gumpf="It looks like we could not find that qtpy version!"

            popd

            EXIT /B 1
        )

        python -m pip install PySide6==%qt_custom_pyside6%

        IF ERRORLEVEL 1 (

            SET /P gumpf="It looks like we could not find that PySide6 version!"

            popd

            EXIT /B 1
        )
    )

    python -m pip install -r static\requirements\advanced\requirements_core.txt
    python -m pip install -r static\requirements\advanced\requirements_windows.txt
|
@ -62,26 +62,47 @@ elif [ "$install_type" = "a" ]; then
echo
echo "Qt - User Interface"
echo "Most people want \"6\"."
echo "If you are <= 10.13 (High Sierra), choose \"5\"."
echo "If you are <=10.15 (Catalina) or otherwise have trouble with the normal Qt6, try \"o\" on Python <=3.10 or \"m\" on Python >=3.11."
echo "Do you want Qt(5), Qt(6), Qt6 (o)lder, Qt6 (m)iddle or (t)est? "
echo "If you are <= 10.13 (High Sierra), choose \"5\". If you want a specific version, choose \"a\"."
echo "Do you want Qt(5), Qt(6), or (a)dvanced? "
read -r qt
if [ "$qt" = "5" ]; then
	:
elif [ "$qt" = "6" ]; then
	:
elif [ "$qt" = "o" ]; then
	:
elif [ "$qt" = "m" ]; then
	:
elif [ "$qt" = "t" ]; then
elif [ "$qt" = "a" ]; then
	:
else
	echo "Sorry, did not understand that input!"
	popd || exit 1
	exit 1
fi

if [ "$qt" = "a" ]; then
	echo
	echo "If you are <=10.15 (Catalina) or otherwise have trouble with the normal Qt6, try \"o\" on Python <=3.10 or \"m\" on Python >=3.11."
	echo "Do you want Qt6 (o)lder, Qt6 (m)iddle, Qt6 (t)est, or (w)rite your own? "
	read -r qt
	if [ "$qt" = "o" ]; then
		:
	elif [ "$qt" = "m" ]; then
		:
	elif [ "$qt" = "t" ]; then
		:
	elif [ "$qt" = "w" ]; then
		:
	else
		echo "Sorry, did not understand that input!"
		exit 1
	fi
fi

if [ "$qt" = "w" ]; then
	echo
	echo "Enter the exact PySide6 version you want, e.g. '6.6.0': "
	read -r qt_custom_pyside6
	echo "Enter the exact qtpy version you want (probably '2.4.1'): "
	read -r qt_custom_qtpy
fi

echo "--------"
echo "mpv - audio and video playback"
echo

@ -158,8 +179,30 @@ python -m pip install --upgrade pip
python -m pip install --upgrade wheel

if [ "$install_type" = "s" ]; then

	python -m pip install -r requirements.txt

elif [ "$install_type" = "a" ]; then

	if [ "$qt" = "w" ]; then

		python -m pip install qtpy=="$qt_custom_qtpy"

		if [ $? -ne 0 ]; then
			echo "It looks like we could not find that qtpy version!"
			popd || exit 1
			exit 1
		fi

		python -m pip install PySide6=="$qt_custom_pyside6"

		if [ $? -ne 0 ]; then
			echo "It looks like we could not find that PySide6 version!"
			popd || exit 1
			exit 1
		fi
	fi

	python -m pip install -r static/requirements/advanced/requirements_core.txt

	if [ "$qt" = "5" ]; then

@ -62,25 +62,47 @@ elif [ "$install_type" = "a" ]; then
echo
echo "Qt - User Interface"
echo "Most people want \"6\"."
echo "If you are <=Ubuntu 18.04 or equivalent, choose \"5\"."
echo "If you cannot boot with the normal Qt6, try \"o\" on Python <=3.10 or \"m\" on Python >=3.11."
echo "Do you want Qt(5), Qt(6), Qt6 (o)lder, Qt6 (m)iddle or (t)est? "
echo "If you are <=Ubuntu 18.04 or equivalent, choose \"5\". If you want a specific version, choose \"a\"."
echo "Do you want Qt(5), Qt(6), or (a)dvanced? "
read -r qt
if [ "$qt" = "5" ]; then
	:
elif [ "$qt" = "6" ]; then
	:
elif [ "$qt" = "o" ]; then
	:
elif [ "$qt" = "m" ]; then
	:
elif [ "$qt" = "t" ]; then
elif [ "$qt" = "a" ]; then
	:
else
	echo "Sorry, did not understand that input!"
	exit 1
fi

if [ "$qt" = "a" ]; then
	echo
	echo "If you cannot boot with the normal Qt6, try \"o\" on Python <=3.10 or \"m\" on Python >=3.11."
	echo "Do you want Qt6 (o)lder, Qt6 (m)iddle, Qt6 (t)est, or (w)rite your own? "
	read -r qt
	if [ "$qt" = "o" ]; then
		:
	elif [ "$qt" = "m" ]; then
		:
	elif [ "$qt" = "t" ]; then
		:
	elif [ "$qt" = "w" ]; then
		:
	else
		echo "Sorry, did not understand that input!"
		exit 1
	fi
fi

if [ "$qt" = "w" ]; then
	echo
	echo "Enter the exact PySide6 version you want, e.g. '6.6.0': "
	read -r qt_custom_pyside6
	echo "Enter the exact qtpy version you want (probably '2.4.1'): "
	read -r qt_custom_qtpy
fi

echo "--------"
echo "mpv - audio and video playback"
echo

@ -157,8 +179,30 @@ python -m pip install --upgrade pip
python -m pip install --upgrade wheel

if [ "$install_type" = "s" ]; then

	python -m pip install -r requirements.txt

elif [ "$install_type" = "a" ]; then

	if [ "$qt" = "w" ]; then

		python -m pip install qtpy=="$qt_custom_qtpy"

		if [ $? -ne 0 ]; then
			echo "It looks like we could not find that qtpy version!"
			popd || exit 1
			exit 1
		fi

		python -m pip install PySide6=="$qt_custom_pyside6"

		if [ $? -ne 0 ]; then
			echo "It looks like we could not find that PySide6 version!"
			popd || exit 1
			exit 1
		fi
	fi

	python -m pip install -r static/requirements/advanced/requirements_core.txt

	if [ "$qt" = "5" ]; then

@ -1,2 +1,2 @@
QtPy==2.3.1
PySide6==6.5.2
QtPy==2.4.1
PySide6==6.6.0