Version 488
This commit is contained in:
parent
a39462c4bc
commit
4e4ef92cad
@ -17,7 +17,7 @@ Next, we have two classes of problem: files in the database but not in storage,

These are the cause of the missing file errors. They could have been deleted or damaged, or they might have been imported after you made your file storage backup. They are not on disk any more, but the database doesn't know that, so any time it tries to access where they _were_, it is surprised and throws the error. We need to let it fix all those records.

Hit _database->file maintenance->manage scheduled jobs_, and then hit the 'add new jobs' tab. Click the 'all media files' button, or just run a search for 'system:everything', and then queue up the job that says 'if file is missing and has URL, try to download, else remove record'. It should be the default job. There are other jobs--feel free to read their descriptions--but you want to deal with the 'missing' suite for now.

Queue up that job for everything and then on the first tab you can hurry all that work along. It may take an hour or more to check all your files. Any of the files with URLs (i.e. anything you once downloaded) will be queued up in a new download page that may get pretty large, so you might like to pause file maintenance under the same _database->file maintenance_ menu for a bit if it gets too big.
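The default job's decision tree can be sketched in a few lines. This is a minimal illustration of the rule described above, not hydrus's actual maintenance code:

```python
# illustrative sketch of 'if file is missing and has URL, try to download,
# else remove record'; function and return values are hypothetical
def handle_missing_file(path_exists, known_urls):

    if path_exists:

        return 'ok'  # nothing to fix

    if known_urls:

        return 'queue_redownload'  # send the URLs to a new downloader page

    return 'remove_record'  # nothing to recover from, so forget the file

print(handle_missing_file(False, ['https://example.com/post/123']))  # queue_redownload
```

The other jobs in the 'missing' suite are variations on which of these branches they take.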
@ -5,6 +5,22 @@

## [Version 488](https://github.com/hydrusnetwork/hydrus/releases/tag/v488)

### all misc this week

* the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!
* added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but makes no other action
* the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them
* simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash
* if you paste a URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL, maybe it needs an option, maybe this feels natural when it comes up
* the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules
* now, like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click
* did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now we have multiple local file services, and in planning for future maintenance in this area
* all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly
* may have fixed an issue with a very slow to boot client trying to politely wait on the thumbnail cache before it instantiates
* misc UI text rewording and layout flag fixes
* fixed some jank formatting on database migration help

## [Version 487](https://github.com/hydrusnetwork/hydrus/releases/tag/v487)

### misc

* updated the duplicate filter 'show next pair' logic again, mostly simplification and merging of decision making. it _should_ be even more resistant to weird problems at the end of batches, particularly if you have deleted files manually
* a new button on the duplicate filter right hover window now appends the current pair to the parent duplicate media page (for if you want to do more processing to them later)
@ -60,9 +60,9 @@ A straight call to the client executable will look for a database in _install_di

So, pass it a -d or --db_dir command line argument, like so:

* `client -d="D:\media\my_hydrus_database"`
* _--or--_
* `client --db_dir="G:\misc documents\New Folder (3)\DO NOT ENTER"`
* _--or, for macOS--_
* `open -n -a "Hydrus Network.app" --args -d="/path/to/db"`
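The argument handling above boils down to a standard optional-argument parse. A minimal sketch with Python's argparse; the parser setup here is illustrative, not hydrus's real boot code:

```python
import argparse

# hypothetical mirror of the -d/--db_dir flag described above
parser = argparse.ArgumentParser(prog='client')
parser.add_argument('-d', '--db_dir', default=None, help='set an external database directory')

# both short and long forms populate the same db_dir attribute
args = parser.parse_args(['-d', r'D:\media\my_hydrus_database'])
print(args.db_dir)  # D:\media\my_hydrus_database
```

When the flag is omitted, `db_dir` stays `None` and a default location would be used instead.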
@ -111,4 +111,4 @@ You should now have _something_ like this:

## p.s. running multiple clients { id="multiple_clients" }

Since you now know how to tell the software about an external database, you can, if you like, run multiple clients from the same install (and if you previously had multiple install folders, you can now just use the one). Just make multiple shortcuts to the same client executable but with different database directories. They can run at the same time. You'll save yourself a little memory and update-hassle. I do this on my laptop client to run a regular client for my media and a separate 'admin' client to do PTR petitions and so on.
@ -33,6 +33,21 @@

<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_488"><a href="#version_488">version 488</a></h3></li>
<ul>
<li>the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!</li>
<li>added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but makes no other action</li>
<li>the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them</li>
<li>simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash</li>
<li>if you paste a URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL, maybe it needs an option, maybe this feels natural when it comes up</li>
<li>the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules</li>
<li>now, like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click</li>
<li>did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now we have multiple local file services, and in planning for future maintenance in this area</li>
<li>all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly</li>
<li>may have fixed an issue with a very slow to boot client trying to politely wait on the thumbnail cache before it instantiates</li>
<li>misc UI text rewording and layout flag fixes</li>
<li>fixed some jank formatting on database migration help</li>
</ul>
<li><h3 id="version_487"><a href="#version_487">version 487</a></h3></li>
<ul>
<li>misc:</li>
@ -1006,6 +1006,14 @@ class Controller( HydrusController.HydrusController ):

self.frame_splash_status.SetText( 'initialising managers' )

self.frame_splash_status.SetSubtext( 'image caches' )

# careful: outside of qt since they don't need qt for init, seems ok _for now_

self._caches[ 'images' ] = ClientCaches.ImageRendererCache( self )
self._caches[ 'image_tiles' ] = ClientCaches.ImageTileCache( self )
self._caches[ 'thumbnail' ] = ClientCaches.ThumbnailCache( self )
self.bitmap_manager = ClientManagers.BitmapManager( self )

self.frame_splash_status.SetSubtext( 'services' )

self.services_manager = ClientServices.ServicesManager( self )
@ -1192,14 +1200,6 @@ class Controller( HydrusController.HydrusController ):

self._managers[ 'undo' ] = ClientManagers.UndoManager( self )

self.sub( self, 'ToClipboard', 'clipboard' )
@ -2281,14 +2281,20 @@ class Controller( HydrusController.HydrusController ):

def WaitUntilThumbnailsFree( self ):

    if 'thumbnail' in self._caches:

        self._caches[ 'thumbnail' ].WaitUntilFree()


def Write( self, action, *args, **kwargs ):

    if action == 'content_updates':

        if 'undo' in self._managers:

            self._managers[ 'undo' ].AddCommand( 'content_updates', *args, **kwargs )

    return HydrusController.HydrusController.Write( self, action, *args, **kwargs )
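The change above is a defensive-lookup pattern: during a slow boot, `self._caches` may not yet contain `'thumbnail'` (and `self._managers` may lack `'undo'`), so membership is checked before use. A minimal sketch, with illustrative names rather than hydrus's real Controller:

```python
# sketch of the boot-order guard added in this commit; class and method
# names are hypothetical stand-ins for the real Controller
class Controller:

    def __init__(self):

        self._caches = {}  # 'thumbnail' etc. are registered later in boot

    def wait_until_thumbnails_free(self):

        # without this guard, an early caller would raise KeyError mid-boot
        if 'thumbnail' in self._caches:

            self._caches['thumbnail'].wait_until_free()

controller = Controller()
controller.wait_until_thumbnails_free()  # safe even before the cache exists
```

The call quietly becomes a no-op until the subsystem exists, which matches the 'very slow to boot client' fix in the changelog.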
@ -526,9 +526,8 @@ def SetDefaultBandwidthManagerRules( bandwidth_manager ):

rules = HydrusNetworking.BandwidthRules()

rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 5 ) # stop accidental spam

rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 16 * GB ) # check your inbox lad

bandwidth_manager.SetRules( ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT, rules )
@ -538,7 +537,7 @@ def SetDefaultBandwidthManagerRules( bandwidth_manager ):

rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 1, 1 ) # don't ever hammer a domain

rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 8 * GB ) # don't go nuts on a site in a single day

bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOMAIN ), rules )
@ -554,7 +553,7 @@ def SetDefaultBandwidthManagerRules( bandwidth_manager ):

rules = HydrusNetworking.BandwidthRules()

rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 300, 1024 * MB ) # just a careful stopgap

bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_DOWNLOADER_PAGE ), rules )
@ -563,9 +562,9 @@ def SetDefaultBandwidthManagerRules( bandwidth_manager ):

rules = HydrusNetworking.BandwidthRules()

# most gallery downloaders need two rqs per file (page and file), remember
rules.AddRule( HC.BANDWIDTH_TYPE_REQUESTS, 86400, 1000 ) # catch up on a swell of many new things in chunks every day

rules.AddRule( HC.BANDWIDTH_TYPE_DATA, 86400, 1024 * MB ) # catch up on a stonking bump in chunks every day

bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION ), rules )
@ -573,6 +572,8 @@ def SetDefaultBandwidthManagerRules( bandwidth_manager ):

rules = HydrusNetworking.BandwidthRules()

# watchers have time pressure, so no additional rules beyond global and domain limits

bandwidth_manager.SetRules( ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_WATCHER_PAGE ), rules )

#
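The rules above are all time-windowed allowances, e.g. 16 GB per 86400 seconds globally. A toy sketch of how such rules can gate a request; this is illustrative only, not the real HydrusNetworking.BandwidthRules:

```python
import time

# toy model of time-windowed bandwidth rules like the defaults above
GB = 1024 ** 3

class BandwidthRules:

    def __init__(self):

        self._rules = []  # list of (window_seconds, max_bytes)

    def add_rule(self, window_seconds, max_bytes):

        self._rules.append((window_seconds, max_bytes))

    def can_start(self, usage):

        # usage is a list of (timestamp, num_bytes) records
        now = time.time()

        for window_seconds, max_bytes in self._rules:

            used = sum(b for t, b in usage if now - t <= window_seconds)

            if used >= max_bytes:

                return False

        return True

rules = BandwidthRules()
rules.add_rule(86400, 16 * GB)  # mirrors the new global default: 16GB a day
print(rules.can_start([]))  # True
```

Every network context (global, domain, downloader page, subscription, watcher) gets its own rules object, and a request has to pass all of them.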
@ -22,6 +22,7 @@ from hydrus.client import ClientImageHandling

from hydrus.client import ClientPaths
from hydrus.client import ClientThreading
from hydrus.client.gui import QtPorting as QP
from hydrus.client.metadata import ClientTags

REGENERATE_FILE_DATA_JOB_FILE_METADATA = 0
REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL = 1
@ -41,6 +42,7 @@ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD = 14

REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD = 15
REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE = 16
REGENERATE_FILE_DATA_JOB_PIXEL_HASH = 17
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY = 18

regen_file_enum_to_str_lookup = {
    REGENERATE_FILE_DATA_JOB_FILE_METADATA : 'regenerate file metadata',
@ -51,6 +53,7 @@ regen_file_enum_to_str_lookup = {

    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : 'if file is missing, remove record',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : 'if file is missing, then if has URL try to redownload',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : 'if file is missing, then if has URL try to redownload, else remove record',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY : 'if file is missing, note it in log',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD : 'if file is missing/incorrect, move file out and remove record',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL : 'if file is missing/incorrect, then move file out, and if has URL try to redownload',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD : 'if file is missing/incorrect, then move file out, and if has URL try to redownload, else remove record',
@ -69,12 +72,13 @@ regen_file_enum_to_description_lookup = {

    REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL : 'This looks for the existing thumbnail, and if it is not the correct resolution or is missing, will regenerate a new one for the source file.',
    REGENERATE_FILE_DATA_JOB_OTHER_HASHES : 'This regenerates hydrus\'s store of md5, sha1, and sha512 supplementary hashes, which it can use for various external (usually website) lookups.',
    REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES : 'Sometimes, a file metadata regeneration will mean a new filetype and thus a new file extension. If the existing, incorrectly named file is in use, it must be copied rather than renamed, and so there is a spare duplicate left over after the operation. This job cleans up the duplicate at a later time.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : 'This checks to see if the file is present in the file system as expected. If it is not, the internal file record in the database is removed, just as if the file were deleted. Use this if you have manually deleted or otherwise lost a number of files from your file structure and need hydrus to re-sync with what it actually has. Missing files will have their hashes, tags, and URLs exported to your database directory.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : 'This checks to see if the file is present in the file system as expected. If it is not, and it has known post/file URLs, the URLs will be automatically added to a new URL downloader. Missing files will also have their hashes, tags, and URLs exported to your database directory.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : 'THIS IS THE EASY AND QUICK ONE-SHOT WAY TO FIX A DATABASE WITH MISSING FILES. This checks to see if the file is present in the file system as expected. If it is not, then if it has known post/file URLs, the URLs will be automatically added to a new URL downloader. If it has no URLs, then the internal file record in the database is removed, just as if the file were deleted. Missing files will also have their hashes, tags, and URLs exported to your database directory.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY : 'This checks to see if the file is present in the file system as expected. If it is not, it records the file\'s hash, tags, and URLs to your database directory, just like the other "missing file" jobs, but makes no other action.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD : 'This does the same check as the \'file is missing\' job, and if the file is where it is expected, it ensures its file content, byte-for-byte, is correct. This is a heavy job, so be wary. If the file is incorrect, it will be exported to your database directory along with its tags and URLs, and the file record deleted.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL : 'This does the same check as the \'file is missing\' job, and if the file is where it is expected, it ensures its file content, byte-for-byte, is correct. This is a heavy job, so be wary. If the file is incorrect _and_ it has known post/file URLs, the URLs will be automatically added to a new URL downloader. Incorrect files will also have their hashes, tags, and URLs exported to your database directory.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD : 'This does the same check as the \'file is missing\' job, and if the file is where it is expected, it ensures its file content, byte-for-byte, is correct. This is a heavy job, so be wary. If the file is incorrect _and_ it has known post/file URLs, the URLs will be automatically added to a new URL downloader. If it has no URLs, then the internal file record in the database is removed, just as if the file were deleted. Incorrect files will also have their hashes, tags, and URLs exported to your database directory.',
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE : 'If the file is where it is expected, this ensures its file content, byte-for-byte, is correct. This is a heavy job, so be wary. If the file is incorrect, it will be exported to your database directory along with its known URLs. The client\'s file record will not be deleted. This is useful if you have a valid backup and need to clear out invalid files from your live db so you can fill in gaps from your backup with a program like FreeFileSync.',
    REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS : 'This ensures that files in the file system are readable and writeable. For Linux/macOS users, it specifically sets 644. If you wish to run this job on Linux/macOS, ensure you are first the file owner of all your files.',
    REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP : 'This checks to see if files should be in the similar files system, and if they are falsely in or falsely out, it will remove their record or queue them up for a search as appropriate. It is useful to repair database damage.',
@@ -93,17 +97,18 @@ regen_file_enum_to_job_weight_lookup = {
 REGENERATE_FILE_DATA_JOB_OTHER_HASHES : 100,
 REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES : 25,
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : 5,
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : 50,
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : 55,
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : 25,
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : 30,
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY : 5,
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD : 100,
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL : 100,
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD : 100,
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE : 100,
 REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS : 25,
-REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP : 50,
+REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP : 20,
 REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA : 100,
 REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP : 10,
-REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE : 100,
+REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE : 25,
 REGENERATE_FILE_DATA_JOB_PIXEL_HASH : 100
 }
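The weight table in the hunk above assigns a relative cost to each maintenance job type. A minimal sketch of how such weights can budget a maintenance tick, assuming a simple FIFO scheduler; the short job names and the budget value are illustrative stand-ins, not hydrus's actual scheduler:

```python
# Illustrative weights, echoing the shape of regen_file_enum_to_job_weight_lookup.
JOB_WEIGHTS = {
    'presence_log_only': 5,
    'presence_try_url': 25,
    'check_similar_files_membership': 20,
    'pixel_hash': 100,
}

def take_jobs_for_tick(queue, budget=100):
    # Pop jobs in FIFO order while their summed weight stays within the budget.
    taken = []
    spent = 0
    while queue and spent + JOB_WEIGHTS[queue[0]] <= budget:
        job = queue.pop(0)
        taken.append(job)
        spent += JOB_WEIGHTS[job]
    return taken

print(take_jobs_for_tick(['presence_log_only', 'presence_try_url', 'pixel_hash']))
# the cheap presence checks fit in one tick; the heavy pixel_hash waits
```

Under this reading, lowering a job's weight (as the hunk does for the presence checks and the new LOG_ONLY job) lets more of them run per unit of work.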
@@ -113,12 +118,13 @@ regen_file_enum_to_overruled_jobs = {
 REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL : [],
 REGENERATE_FILE_DATA_JOB_OTHER_HASHES : [],
 REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES : [],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : [],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : [],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD ],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD ],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL ],
-REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY : [],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL ],
+REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD : [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD ],
 REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE : [],
 REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS : [],
 REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP : [],
@@ -128,7 +134,7 @@ regen_file_enum_to_overruled_jobs = {
 REGENERATE_FILE_DATA_JOB_PIXEL_HASH : []
 }
 
-ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE, REGENERATE_FILE_DATA_JOB_FILE_METADATA, REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL, REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL, REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP, REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS, REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP, REGENERATE_FILE_DATA_JOB_OTHER_HASHES, REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE, REGENERATE_FILE_DATA_JOB_PIXEL_HASH, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES ]
+ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [ REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY, REGENERATE_FILE_DATA_JOB_FILE_METADATA, REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL, REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL, REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA, REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP, REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS, REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP, REGENERATE_FILE_DATA_JOB_OTHER_HASHES, REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE, REGENERATE_FILE_DATA_JOB_PIXEL_HASH, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES ]
 
 def GetAllFilePaths( raw_paths, do_human_sort = True ):
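A rough sketch of what the 'overruled' mapping above is for: when a stronger job is queued for a file, the weaker jobs it supersedes can be dropped before work starts. Short names here stand in for the REGENERATE_FILE_DATA_JOB_* constants; the dedupe helper is illustrative, not hydrus's actual queue logic.

```python
# Map each job to the weaker jobs it supersedes (mirrors
# regen_file_enum_to_overruled_jobs in shape, with short illustrative names).
OVERRULED = {
    'presence_try_url_else_remove_record': {'presence_log_only', 'presence_try_url', 'presence_remove_record'},
    'presence_try_url': {'presence_log_only'},
    'presence_remove_record': {'presence_log_only'},
}

def dedupe_jobs(queued):
    # Drop any queued job that a stronger queued job already covers.
    queued = set(queued)
    covered = set()
    for job in queued:
        covered |= OVERRULED.get(job, set())
    return queued - covered

print(sorted(dedupe_jobs({'presence_try_url_else_remove_record', 'presence_log_only', 'pixel_hash'})))
```

This also shows why the hunk adds LOG_ONLY to nearly every overrule list: any job that actually acts on a missing file makes a log-only pass redundant.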
@@ -1518,8 +1524,6 @@ class FilesMaintenanceManager( object ):
 hash = media_result.GetHash()
 mime = media_result.GetMime()
 
-error_dir = os.path.join( self._controller.GetDBDir(), 'missing_and_invalid_files' )
-
 file_is_missing = False
 file_is_invalid = False
@@ -1550,28 +1554,71 @@ class FilesMaintenanceManager( object ):
 if file_was_bad:
     
+    error_dir = os.path.join( self._controller.GetDBDir(), 'missing_and_invalid_files' )
+    
+    HydrusPaths.MakeSureDirectoryExists( error_dir )
+    
+    pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( self._controller.GetBootTime() ) )
+    
+    missing_hashes_filename = '{} missing hashes.txt'.format( pretty_timestamp )
+    
+    missing_hashes_path = os.path.join( error_dir, missing_hashes_filename )
+    
+    with open( missing_hashes_path, 'a', encoding = 'utf-8' ) as f:
+        
+        f.write( hash.hex() )
+        f.write( '\n' )
+    
+    tags = media_result.GetTagsManager().GetCurrentAndPending( CC.COMBINED_TAG_SERVICE_KEY, ClientTags.TAG_DISPLAY_STORAGE )
+    
+    if len( tags ) > 0:
+        
+        try:
+            
+            with open( os.path.join( error_dir, '{}.tags.txt'.format( hash.hex() ) ), 'w', encoding = 'utf-8' ) as f:
+                
+                for tag in sorted( tags ):
+                    
+                    f.write( tag )
+                    f.write( '\n' )
+            
+        except Exception as e:
+            
+            HydrusData.Print( 'Tried to export tags for missing file {}, but encountered this error:'.format( hash.hex() ) )
+            HydrusData.PrintException( e, do_wait = False )
+    
     urls = media_result.GetLocationsManager().GetURLs()
     
     if len( urls ) > 0:
         
-        HydrusPaths.MakeSureDirectoryExists( error_dir )
-        
-        with open( os.path.join( error_dir, '{}.urls.txt'.format( hash.hex() ) ), 'w', encoding = 'utf-8' ) as f:
-            
-            for url in urls:
-                
-                f.write( url )
-                f.write( '\n' )
-        
-        with open( os.path.join( error_dir, 'all_urls.txt' ), 'a', encoding = 'utf-8' ) as f:
-            
-            for url in urls:
-                
-                f.write( url )
-                f.write( '\n' )
+        try:
+            
+            with open( os.path.join( error_dir, '{}.urls.txt'.format( hash.hex() ) ), 'w', encoding = 'utf-8' ) as f:
+                
+                for url in urls:
+                    
+                    f.write( url )
+                    f.write( '\n' )
+            
+            with open( os.path.join( error_dir, 'all_urls.txt' ), 'a', encoding = 'utf-8' ) as f:
+                
+                for url in urls:
+                    
+                    f.write( url )
+                    f.write( '\n' )
+            
+        except Exception as e:
+            
+            HydrusData.Print( 'Tried to export URLs for missing file {}, but encountered this error:'.format( hash.hex() ) )
+            HydrusData.PrintException( e, do_wait = False )
@@ -1608,7 +1655,12 @@ class FilesMaintenanceManager( object ):
-if job_type in ( REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD ):
+if job_type == REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY:
+    
+    try_redownload = False
+    delete_record = False
+    
+elif job_type in ( REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD ):
     
     try_redownload = len( useful_urls ) > 0
     delete_record = not try_redownload
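The branch above can be condensed into a small decision table: each integrity job type maps to a pair of (try_redownload, delete_record) flags for a bad file. Short names here stand in for the REGENERATE_FILE_DATA_JOB_* constants; this is a sketch of the logic, not the actual method.

```python
def plan_for_missing_file(job_type, useful_urls):
    # Returns (try_redownload, delete_record) for a file found to be bad.
    if job_type == 'log_only':
        return (False, False)  # record the problem, change nothing
    if job_type == 'try_url':
        return (len(useful_urls) > 0, False)
    if job_type == 'remove_record':
        return (False, True)
    if job_type == 'try_url_else_remove_record':
        try_redownload = len(useful_urls) > 0
        return (try_redownload, not try_redownload)
    raise ValueError('unknown job type: {}'.format(job_type))
```

The new LOG_ONLY case is the one row that touches nothing: it neither queues a redownload nor deletes the record.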
@@ -1621,23 +1673,6 @@ class FilesMaintenanceManager( object ):
 do_export = file_is_invalid and ( job_type in ( REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE ) or ( job_type == REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL and try_redownload ) )
 
-if do_export or delete_record:
-    
-    HydrusPaths.MakeSureDirectoryExists( error_dir )
-    
-    pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( self._controller.GetBootTime() ) )
-    
-    missing_hashes_filename = '{} missing hashes.txt'.format( pretty_timestamp )
-    
-    missing_hashes_path = os.path.join( error_dir, missing_hashes_filename )
-    
-    with open( missing_hashes_path, 'a', encoding = 'utf-8' ) as f:
-        
-        f.write( hash.hex() )
-        f.write( '\n' )
-
 if do_export:
     
     HydrusPaths.MakeSureDirectoryExists( error_dir )
@@ -2091,7 +2126,7 @@ class FilesMaintenanceManager( object ):
 self._FixFilePermissions( media_result )
 
-elif job_type in ( REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE ):
+elif job_type in ( REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE, REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY ):
     
     if not job_key.HasVariable( 'num_bad_files' ):
@@ -1899,7 +1899,7 @@ class ServiceRepository( ServiceRestricted ):
 HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD )
 
-raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
+raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
 
 with open( update_path, 'rb' ) as f:
@@ -1915,14 +1915,14 @@ class ServiceRepository( ServiceRestricted ):
 HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD )
 
-raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
+raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
 
 if not isinstance( definition_update, HydrusNetwork.DefinitionsUpdate ):
     
     HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
     
-    raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
+    raise Exception( 'An unusual error has occured during repository processing: a definition update file ({}) has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( definition_hash.hex() ) )
 
 rows_in_this_update = definition_update.GetNumRows()
@@ -2026,7 +2026,7 @@ class ServiceRepository( ServiceRestricted ):
 HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD )
 
-raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
+raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
 
 with open( update_path, 'rb' ) as f:
@@ -2042,14 +2042,14 @@ class ServiceRepository( ServiceRestricted ):
 HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD )
 
-raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
+raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
 
 if not isinstance( content_update, HydrusNetwork.ContentUpdate ):
     
     HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
     
-    raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance under _database->file maintenance->review_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
+    raise Exception( 'An unusual error has occured during repository processing: a content update file ({}) has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance under _database->file maintenance->manage scheduled jobs_ to finish its new work, which should fix this, before unpausing your repository.'.format( content_hash.hex() ) )
 
 rows_in_this_update = content_update.GetNumRows( content_types )
@@ -347,13 +347,33 @@ class DB( HydrusDB.HydrusDB ):
 service_id = self.modules_services.AddService( service_key, service_type, name, dictionary )
 
-self._AddServiceCreateFiles( service_id, service_type )
+self._AddServiceCreateFilesTables( service_id, service_type )
 
 if service_type in HC.REPOSITORIES:
     
     self.modules_repositories.GenerateRepositoryTables( service_id )
 
+self._AddServiceCreateMappingsTables( service_id, service_type )
+
+
+def _AddServiceCreateFilesTables( self, service_id, service_type ):
+    
+    if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
+        
+        self.modules_files_storage.GenerateFilesTables( service_id )
+        
+        tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
+        
+        for tag_service_id in tag_service_ids:
+            
+            self.modules_mappings_cache_specific_storage.Generate( service_id, tag_service_id )
+
+
+def _AddServiceCreateMappingsTables( self, service_id, service_type ):
+    
 if service_type in HC.REAL_TAG_SERVICES:
     
     self.modules_tag_search.Generate( self.modules_services.combined_file_service_id, service_id )
@@ -368,39 +388,6 @@ class DB( HydrusDB.HydrusDB ):
 self.modules_tag_parents.Generate( service_id )
 self.modules_tag_siblings.Generate( service_id )
 
-self._AddServiceCreateMappings( service_id, service_type )
-
-if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES:
-    
-    tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
-    
-    for tag_service_id in tag_service_ids:
-        
-        self.modules_tag_search.Generate( service_id, tag_service_id )
-
-
-def _AddServiceCreateFiles( self, service_id, service_type ):
-    
-    if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
-        
-        self.modules_files_storage.GenerateFilesTables( service_id )
-        
-        tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
-        
-        for tag_service_id in tag_service_ids:
-            
-            self.modules_mappings_cache_specific_storage.Generate( service_id, tag_service_id )
-
-
-def _AddServiceCreateMappings( self, service_id, service_type ):
-    
-    if service_type in HC.REAL_TAG_SERVICES:
-        
 self.modules_mappings_storage.GenerateMappingsTables( service_id )
 self.modules_mappings_cache_combined_files_storage.Generate( service_id )
@@ -413,6 +400,16 @@ class DB( HydrusDB.HydrusDB ):
+if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_TAG_LOOKUP_CACHES:
+    
+    tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
+    
+    for tag_service_id in tag_service_ids:
+        
+        self.modules_tag_search.Generate( service_id, tag_service_id )
+
+
 def _ArchiveFiles( self, hash_ids ):
@@ -1631,18 +1628,25 @@ class DB( HydrusDB.HydrusDB ):
+# if we are deleting from repo updates, do a physical delete now
+
+if service_id == self.modules_services.local_update_service_id:
+    
+    self._DeleteFiles( self.modules_services.combined_local_file_service_id, existing_hash_ids )
+
 # if the files are being fully deleted, then physically delete them
 
 if service_id == self.modules_services.combined_local_file_service_id:
     
-    self._ArchiveFiles( hash_ids )
+    self._ArchiveFiles( existing_hash_ids )
     
-    for hash_id in hash_ids:
+    for hash_id in existing_hash_ids:
         
         self.modules_similar_files.StopSearchingFile( hash_id )
     
-    self.modules_files_maintenance_queue.CancelFiles( hash_ids )
+    self.modules_files_maintenance_queue.CancelFiles( existing_hash_ids )
     
     self.modules_hashes_local_cache.DropHashIdsFromCache( existing_hash_ids )
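The hash_ids to existing_hash_ids change above restricts the physical-delete cleanup to ids the service actually holds. A minimal sketch of why that filter matters, using a toy set in place of hydrus's DB modules:

```python
def delete_files(service_files, requested_hash_ids):
    # Only act on ids actually present; the follow-up cleanup (similar-files
    # search, maintenance queue) then never touches files the service never held.
    existing_hash_ids = [h for h in requested_hash_ids if h in service_files]
    for h in existing_hash_ids:
        service_files.discard(h)
    return existing_hash_ids

files = {1, 2, 3}
print(delete_files(files, [2, 3, 4]))  # only 2 and 3 are real deletions; 4 is ignored
```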
@@ -1699,17 +1703,77 @@ class DB( HydrusDB.HydrusDB ):
 self._Execute( 'DELETE FROM recent_tags WHERE service_id = ?;', ( service_id, ) )
 self._Execute( 'DELETE FROM service_info WHERE service_id = ?;', ( service_id, ) )
 
-self._DeleteServiceDropFiles( service_id, service_type )
+self._DeleteServiceDropFilesTables( service_id, service_type )
 
 if service_type in HC.REPOSITORIES:
     
     self.modules_repositories.DropRepositoryTables( service_id )
 
-self._DeleteServiceDropMappings( service_id, service_type )
+self._DeleteServiceDropMappingsTables( service_id, service_type )
+
+self.modules_services.DeleteService( service_id )
+
+service_update = HydrusData.ServiceUpdate( HC.SERVICE_UPDATE_RESET )
+
+service_keys_to_service_updates = { service_key : [ service_update ] }
+
+self.pub_service_updates_after_commit( service_keys_to_service_updates )
+
+
+def _DeleteServiceDirectory( self, service_id, dirname ):
+    
+    directory_id = self.modules_texts.GetTextId( dirname )
+    
+    self._Execute( 'DELETE FROM service_directories WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) )
+    self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) )
+
+
+def _DeleteServiceDropFilesTables( self, service_id, service_type ):
+    
+    if service_type == HC.FILE_REPOSITORY:
+        
+        self._Execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) )
+    
+    if service_type == HC.IPFS:
+        
+        self._Execute( 'DELETE FROM service_filenames WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM service_directories WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ?;', ( service_id, ) )
+    
+    if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
+        
+        self.modules_files_storage.DropFilesTables( service_id )
+    
+    if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
+        
+        tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
+        
+        for tag_service_id in tag_service_ids:
+            
+            self.modules_mappings_cache_specific_storage.Drop( service_id, tag_service_id )
+
+
+def _DeleteServiceDropMappingsTables( self, service_id, service_type ):
+    
 if service_type in HC.REAL_TAG_SERVICES:
     
+    self.modules_mappings_storage.DropMappingsTables( service_id )
+    
+    self.modules_mappings_cache_combined_files_storage.Drop( service_id )
+    
+    file_service_ids = self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES )
+    
+    for file_service_id in file_service_ids:
+        
+        self.modules_mappings_cache_specific_storage.Drop( file_service_id, service_id )
+    
     interested_service_ids = set( self.modules_tag_display.GetInterestedServiceIds( service_id ) )
     
     interested_service_ids.discard( service_id ) # lmao, not any more!
@@ -1743,69 +1807,6 @@ class DB( HydrusDB.HydrusDB ):
         
         self.modules_services.DeleteService( service_id )
         
         service_update = HydrusData.ServiceUpdate( HC.SERVICE_UPDATE_RESET )
         
         service_keys_to_service_updates = { service_key : [ service_update ] }
         
         self.pub_service_updates_after_commit( service_keys_to_service_updates )
         
     
-    def _DeleteServiceDirectory( self, service_id, dirname ):
-        
-        directory_id = self.modules_texts.GetTextId( dirname )
-        
-        self._Execute( 'DELETE FROM service_directories WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) )
-        self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) )
-        
-    
-    def _DeleteServiceDropFiles( self, service_id, service_type ):
-        
-        if service_type == HC.FILE_REPOSITORY:
-            
-            self._Execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) )
-            
-        
-        if service_type == HC.IPFS:
-            
-            self._Execute( 'DELETE FROM service_filenames WHERE service_id = ?;', ( service_id, ) )
-            self._Execute( 'DELETE FROM service_directories WHERE service_id = ?;', ( service_id, ) )
-            self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ?;', ( service_id, ) )
-            
-        
-        if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
-            
-            self.modules_files_storage.DropFilesTables( service_id )
-            
-        
-        if service_type in HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES:
-            
-            tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
-            
-            for tag_service_id in tag_service_ids:
-                
-                self.modules_mappings_cache_specific_storage.Drop( service_id, tag_service_id )
-                
-            
-        
-    
-    def _DeleteServiceDropMappings( self, service_id, service_type ):
-        
-        if service_type in HC.REAL_TAG_SERVICES:
-            
-            self.modules_mappings_storage.DropMappingsTables( service_id )
-            
-            self.modules_mappings_cache_combined_files_storage.Drop( service_id )
-            
-            file_service_ids = self.modules_services.GetServiceIds( HC.FILE_SERVICES_WITH_SPECIFIC_MAPPING_CACHES )
-            
-            for file_service_id in file_service_ids:
-                
-                self.modules_mappings_cache_specific_storage.Drop( file_service_id, service_id )
-                
-            
-        
-    
     def _DeleteServiceInfo( self, service_key = None, types_to_delete = None ):
@@ -1200,35 +1200,36 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes, CAC.ApplicationCo
             yes_tuples.append( ( 'save to file', 'file' ) )
             yes_tuples.append( ( 'copy to clipboard', 'clipboard' ) )
             
-            with ClientGUIDialogs.DialogYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-                
-                if dlg.exec() == QW.QDialog.Accepted:
-                    
-                    value = dlg.GetValue()
-                    
-                    if value == 'file':
-                        
-                        with QP.FileDialog( self, 'select where to save content', default_filename = 'result.html', acceptMode = QW.QFileDialog.AcceptSave, fileMode = QW.QFileDialog.AnyFile ) as f_dlg:
-                            
-                            if f_dlg.exec() == QW.QDialog.Accepted:
-                                
-                                path = f_dlg.GetPath()
-                                
-                                with open( path, 'wb' ) as f:
-                                    
-                                    f.write( content )
-                                
-                            
-                        
-                    elif value == 'clipboard':
-                        
-                        text = network_job.GetContentText()
-                        
-                        self._controller.pub( 'clipboard', 'text', text )
-                        
-                    
-                
+            try:
+                
+                result = ClientGUIDialogsQuick.GetYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' )
+                
+            except HydrusExceptions.CancelledException:
+                
+                return
+                
+            
+            if result == 'file':
+                
+                with QP.FileDialog( self, 'select where to save content', default_filename = 'result.html', acceptMode = QW.QFileDialog.AcceptSave, fileMode = QW.QFileDialog.AnyFile ) as f_dlg:
+                    
+                    if f_dlg.exec() == QW.QDialog.Accepted:
+                        
+                        path = f_dlg.GetPath()
+                        
+                        with open( path, 'wb' ) as f:
+                            
+                            f.write( content )
+                        
+                    
+                
+            elif result == 'clipboard':
+                
+                text = network_job.GetContentText()
+                
+                self._controller.pub( 'clipboard', 'text', text )
+                
             
         def thread_wait( url ):
@@ -4849,27 +4850,25 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes, CAC.ApplicationCo
         yes_tuples.append( ( 'open online help', 0 ) )
         yes_tuples.append( ( 'open how to build guide', 1 ) )
         
-        with ClientGUIDialogs.DialogYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() == QW.QDialog.Accepted:
-                
-                value = dlg.GetValue()
-                
-                if value == 0:
-                    
-                    url = 'https://hydrusnetwork.github.io/hydrus/'
-                    
-                elif value == 1:
-                    
-                    url = 'https://hydrusnetwork.github.io/hydrus/about_docs.html'
-                    
-                else:
-                    
-                    return
-                    
-                
-                ClientPaths.LaunchURLInWebBrowser( url )
-                
-            
+        try:
+            
+            result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
+        
+        if result == 0:
+            
+            url = 'https://hydrusnetwork.github.io/hydrus/'
+            
+        elif result == 1:
+            
+            url = 'https://hydrusnetwork.github.io/hydrus/about_docs.html'
+            
+        
+        ClientPaths.LaunchURLInWebBrowser( url )
@@ -921,70 +921,3 @@ class DialogTextEntry( Dialog ):
         
     
 
-class DialogYesYesNo( Dialog ):
-    
-    def __init__( self, parent, message, title = 'Are you sure?', yes_tuples = None, no_label = 'no' ):
-        
-        if yes_tuples is None:
-            
-            yes_tuples = [ ( 'yes', 'yes' ) ]
-            
-        
-        Dialog.__init__( self, parent, title, position = 'center' )
-        
-        self._value = None
-        
-        yes_buttons = []
-        
-        for ( label, data ) in yes_tuples:
-            
-            yes_button = ClientGUICommon.BetterButton( self, label, self._DoYes, data )
-            yes_button.setObjectName( 'HydrusAccept' )
-            
-            yes_buttons.append( yes_button )
-            
-        
-        self._no = ClientGUICommon.BetterButton( self, no_label, self.done, QW.QDialog.Rejected )
-        self._no.setObjectName( 'HydrusCancel' )
-        
-        #
-        
-        hbox = QP.HBoxLayout()
-        
-        for yes_button in yes_buttons:
-            
-            QP.AddToLayout( hbox, yes_button, CC.FLAGS_CENTER_PERPENDICULAR )
-            
-        
-        QP.AddToLayout( hbox, self._no, CC.FLAGS_CENTER_PERPENDICULAR )
-        
-        vbox = QP.VBoxLayout()
-        
-        text = ClientGUICommon.BetterStaticText( self, message )
-        text.setWordWrap( True )
-        
-        QP.AddToLayout( vbox, text, CC.FLAGS_EXPAND_BOTH_WAYS )
-        QP.AddToLayout( vbox, hbox, CC.FLAGS_ON_RIGHT )
-        
-        self.setLayout( vbox )
-        
-        size_hint = self.sizeHint()
-        
-        size_hint.setWidth( max( size_hint.width(), 250 ) )
-        
-        QP.SetInitialSize( self, size_hint )
-        
-        ClientGUIFunctions.SetFocusLater( yes_buttons[0] )
-        
-    
-    def _DoYes( self, value ):
-        
-        self._value = value
-        
-        self.done( QW.QDialog.Accepted )
-        
-    
-    def GetValue( self ):
-        
-        return self._value
-        
-    
@@ -97,6 +97,25 @@ def GetYesNo( win, message, title = 'Are you sure?', yes_label = 'yes', no_label
         
     
 
+def GetYesYesNo( win, message, title = 'Are you sure?', yes_tuples = None, no_label = 'no' ):
+    
+    with ClientGUITopLevelWindowsPanels.DialogCustomButtonQuestion( win, title ) as dlg:
+        
+        panel = ClientGUIScrolledPanelsButtonQuestions.QuestionYesYesNoPanel( dlg, message, yes_tuples = yes_tuples, no_label = no_label )
+        
+        dlg.SetPanel( panel )
+        
+        if dlg.exec() == QW.QDialog.Accepted:
+            
+            return panel.GetValue()
+            
+        else:
+            
+            raise HydrusExceptions.CancelledException( 'Dialog cancelled.' )
+            
+        
+    
 def SelectFromList( win, title, choice_tuples, value_to_select = None, sort_tuples = True ):
     
     if len( choice_tuples ) == 1:
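The contract of the new helper above is "return the chosen yes-button's datum, or raise `CancelledException` on no/cancel", which is why every call site in this commit switches from a `dlg.exec()`/`GetValue()` check to a `try`/`except` block. A framework-free sketch of that contract (the function and `clicked` parameter here are hypothetical stand-ins, not hydrus code):

```python
# Hypothetical stand-in for the GetYesYesNo contract shown in this diff:
# return the data of the chosen yes tuple, raise on cancel.

class CancelledException(Exception):
    """Stand-in for HydrusExceptions.CancelledException."""

def get_yes_yes_no(clicked, yes_tuples=None, no_label='no'):
    # 'clicked' simulates the user's choice: an index into yes_tuples,
    # or None for the 'no'/cancel button.
    if yes_tuples is None:
        yes_tuples = [('yes', 'yes')]
    if clicked is None:
        raise CancelledException('Dialog cancelled.')
    (label, data) = yes_tuples[clicked]
    return data

yes_tuples = [('save to file', 'file'), ('copy to clipboard', 'clipboard')]

try:
    result = get_yes_yes_no(1, yes_tuples, no_label='forget it')
except CancelledException:
    result = None
```

Raising on cancel lets call sites bail out with a single `except ...: return`, instead of nesting all their work inside an `if dlg.exec() == Accepted:` branch.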
@@ -95,25 +95,23 @@ class QuestionYesNoPanel( ClientGUIScrolledPanels.ResizingScrolledPanel ):
         
         self._yes = ClientGUICommon.BetterButton( self, yes_label, self.parentWidget().done, QW.QDialog.Accepted )
         self._yes.setObjectName( 'HydrusAccept' )
-        self._yes.setText( yes_label )
         
         self._no = ClientGUICommon.BetterButton( self, no_label, self.parentWidget().done, QW.QDialog.Rejected )
         self._no.setObjectName( 'HydrusCancel' )
-        self._no.setText( no_label )
         
         #
         
         hbox = QP.HBoxLayout()
         
-        QP.AddToLayout( hbox, self._yes )
-        QP.AddToLayout( hbox, self._no )
+        QP.AddToLayout( hbox, self._yes, CC.FLAGS_CENTER_PERPENDICULAR )
+        QP.AddToLayout( hbox, self._no, CC.FLAGS_CENTER_PERPENDICULAR )
         
         vbox = QP.VBoxLayout()
         
         text = ClientGUICommon.BetterStaticText( self, message )
         text.setWordWrap( True )
         
-        QP.AddToLayout( vbox, text )
+        QP.AddToLayout( vbox, text, CC.FLAGS_EXPAND_BOTH_WAYS )
         QP.AddToLayout( vbox, hbox, CC.FLAGS_ON_RIGHT )
         
         self.widget().setLayout( vbox )
@@ -122,3 +120,65 @@ class QuestionYesNoPanel( ClientGUIScrolledPanels.ResizingScrolledPanel ):
         
     
 
+class QuestionYesYesNoPanel( ClientGUIScrolledPanels.ResizingScrolledPanel ):
+    
+    def __init__( self, parent, message, yes_tuples = None, no_label = 'no' ):
+        
+        ClientGUIScrolledPanels.ResizingScrolledPanel.__init__( self, parent )
+        
+        if yes_tuples is None:
+            
+            yes_tuples = [ ( 'yes', 'yes' ) ]
+            
+        
+        self._value = yes_tuples[0][1]
+        
+        yes_buttons = []
+        
+        for ( label, data ) in yes_tuples:
+            
+            yes_button = ClientGUICommon.BetterButton( self, label, self._DoYes, data )
+            yes_button.setObjectName( 'HydrusAccept' )
+            
+            yes_buttons.append( yes_button )
+            
+        
+        self._no = ClientGUICommon.BetterButton( self, no_label, self.parentWidget().done, QW.QDialog.Rejected )
+        self._no.setObjectName( 'HydrusCancel' )
+        
+        #
+        
+        text = ClientGUICommon.BetterStaticText( self, message )
+        text.setWordWrap( True )
+        
+        hbox = QP.HBoxLayout()
+        
+        for yes_button in yes_buttons:
+            
+            QP.AddToLayout( hbox, yes_button, CC.FLAGS_CENTER_PERPENDICULAR )
+            
+        
+        QP.AddToLayout( hbox, self._no, CC.FLAGS_CENTER_PERPENDICULAR )
+        
+        vbox = QP.VBoxLayout()
+        
+        QP.AddToLayout( vbox, text, CC.FLAGS_EXPAND_BOTH_WAYS )
+        QP.AddToLayout( vbox, hbox, CC.FLAGS_ON_RIGHT )
+        
+        self.widget().setLayout( vbox )
+        
+        ClientGUIFunctions.SetFocusLater( yes_buttons[0] )
+        
+    
+    def _DoYes( self, value ):
+        
+        self._value = value
+        
+        self.parentWidget().done( QW.QDialog.Accepted )
+        
+    
+    def GetValue( self ):
+        
+        return self._value
+        
+    
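Note the default in the new panel above: `self._value` starts as the first yes tuple's datum, and `_DoYes` overwrites it when a specific yes button is clicked. A minimal, GUI-free sketch of just that value logic (the `YesYesNoValue` class is a hypothetical illustration, not hydrus code):

```python
# Hypothetical, framework-free sketch of the value handling in
# QuestionYesYesNoPanel: default to the first yes datum, let a
# button click overwrite it.

class YesYesNoValue:
    def __init__(self, yes_tuples=None):
        if yes_tuples is None:
            yes_tuples = [('yes', 'yes')]
        # mirrors self._value = yes_tuples[0][1] in the panel
        self._value = yes_tuples[0][1]

    def do_yes(self, value):
        # mirrors _DoYes: remember which yes button was clicked
        self._value = value

    def get_value(self):
        return self._value

panel = YesYesNoValue([('break it in half', 'half'), ('break it all up', 'whole')])
panel.do_yes('whole')
```

Defaulting to the first tuple means a plain Enter/accept still yields a sensible answer even if no specific yes button was pressed.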
@@ -696,28 +696,25 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
         yes_tuples.append( ( 'run for 1 hour', 3600 ) )
         yes_tuples.append( ( 'run indefinitely', None ) )
         
-        with ClientGUIDialogs.DialogYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() == QW.QDialog.Accepted:
-                
-                value = dlg.GetValue()
-                
-                if value is None:
-                    
-                    stop_time = None
-                    
-                else:
-                    
-                    stop_time = HydrusData.GetNow() + value
-                    
-                
-                job_key = ClientThreading.JobKey( cancellable = True, stop_time = stop_time )
-                
-            else:
-                
-                return
-                
-            
+        try:
+            
+            result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
+        
+        if result is None:
+            
+            stop_time = None
+            
+        else:
+            
+            stop_time = HydrusData.GetNow() + result
+            
+        
+        job_key = ClientThreading.JobKey( cancellable = True, stop_time = stop_time )
+        
         HG.client_controller.pub( 'do_file_storage_rebalance', job_key )
@@ -2191,40 +2191,37 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         yes_tuples.append( ( 'do them all', 'all' ) )
         yes_tuples.append( ( 'select which to do', 'select' ) )
         
-        with ClientGUIDialogs.DialogYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() != QW.QDialog.Accepted:
-                
-                return
-                
-            else:
-                
-                value = dlg.GetValue()
-                
-                if value == 'all':
-                    
-                    query_texts_we_want_to_dedupe_now = potential_dupe_query_texts
-                    
-                else:
-                    
-                    selected = True
-                    
-                    choice_tuples = [ ( query_text, query_text, selected ) for query_text in potential_dupe_query_texts ]
-                    
-                    try:
-                        
-                        query_texts_we_want_to_dedupe_now = ClientGUIDialogsQuick.SelectMultipleFromList( self, 'Select which query texts to dedupe', choice_tuples )
-                        
-                    except HydrusExceptions.CancelledException:
-                        
-                        return
-                        
-                    
-                    if len( query_texts_we_want_to_dedupe_now ) == 0:
-                        
-                        return
-                        
-                    
-                
-            
+        try:
+            
+            result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
+        
+        if result == 'all':
+            
+            query_texts_we_want_to_dedupe_now = potential_dupe_query_texts
+            
+        else:
+            
+            selected = True
+            
+            choice_tuples = [ ( query_text, query_text, selected ) for query_text in potential_dupe_query_texts ]
+            
+            try:
+                
+                query_texts_we_want_to_dedupe_now = ClientGUIDialogsQuick.SelectMultipleFromList( self, 'Select which query texts to dedupe', choice_tuples )
+                
+            except HydrusExceptions.CancelledException:
+                
+                return
+                
+            
+            if len( query_texts_we_want_to_dedupe_now ) == 0:
+                
+                return
+                
+            
@@ -2643,16 +2640,13 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         message = 'Are you sure you want to separate the selected subscriptions? Separating breaks merged subscriptions apart into smaller pieces.'
         yes_tuples = [ ( 'break it in half', 'half' ), ( 'break it all into single-query subscriptions', 'whole' ), ( 'only extract some of the subscription', 'part' ) ]
         
-        with ClientGUIDialogs.DialogYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() == QW.QDialog.Accepted:
-                
-                action = dlg.GetValue()
-                
-            else:
-                
-                return
-                
-            
+        try:
+            
+            action = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
         
         else:
@@ -2687,16 +2681,13 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         
         message = 'Do you want the extracted queries to be a new merged subscription, or many subscriptions with only one query?'
         
-        with ClientGUIDialogs.DialogYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() == QW.QDialog.Accepted:
-                
-                want_post_merge = dlg.GetValue()
-                
-            else:
-                
-                return
-                
-            
+        try:
+            
+            want_post_merge = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
@@ -326,9 +326,14 @@ class EditTagDisplayApplication( ClientGUIScrolledPanels.EditPanel ):
         
         vbox = QP.VBoxLayout()
         
-        message = 'While a tag service normally applies its own siblings and parents to itself, it does not have to. If you want a different service\'s siblings (e.g. putting the PTR\'s siblings on your "my tags"), or multiple services\', then set it here. You can also apply no siblings or parents at all.'
+        warning = 'THIS IS COMPLICATED, THINK CAREFULLY'
+        
+        self._warning = ClientGUICommon.BetterStaticText( self, label = warning )
+        self._warning.setObjectName( 'HydrusWarning' )
+        
+        message = 'While a tag service normally only applies its own siblings and parents to itself, it does not have to. You can have other services\' rules apply (e.g. putting the PTR\'s siblings on your "my tags"), or no siblings/parents at all.'
         message += os.linesep * 2
-        message += 'If there are conflicts, the services at the top of the list have precedence. Parents are collapsed by sibling rules before they are applied.'
+        message += 'If you apply multiple services and there are conflicts (e.g. disagreements on where siblings go, or loops), the services at the top of the list have precedence. If you want to overwrite some PTR rules, then make what you want on a local service and then put it above the PTR here. Also, siblings apply first, then parents.'
         
         self._message = ClientGUICommon.BetterStaticText( self, label = message )
         self._message.setWordWrap( True )
@@ -359,6 +364,7 @@ class EditTagDisplayApplication( ClientGUIScrolledPanels.EditPanel ):
         
         self._sync_status.style().polish( self._sync_status )
         
+        QP.AddToLayout( vbox, self._warning, CC.FLAGS_CENTER )
         QP.AddToLayout( vbox, self._message, CC.FLAGS_EXPAND_PERPENDICULAR )
         QP.AddToLayout( vbox, self._sync_status, CC.FLAGS_EXPAND_PERPENDICULAR )
         QP.AddToLayout( vbox, self._tag_services_notebook, CC.FLAGS_EXPAND_BOTH_WAYS )
@@ -3002,46 +3002,39 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
         
         if self._current_media is None:
             
-            return
+            return False
             
         
         first_media = self._current_media
         second_media = self._media_list.GetNext( self._current_media )
         
-        text = 'Delete just this file, or both?'
+        message = 'Delete just this file, or both?'
         
         yes_tuples = []
         
         yes_tuples.append( ( 'delete just this one', 'current' ) )
         yes_tuples.append( ( 'delete both', 'both' ) )
         
-        with ClientGUIDialogs.DialogYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            if dlg.exec() == QW.QDialog.Accepted:
-                
-                value = dlg.GetValue()
-                
-                if value == 'current':
-                    
-                    media = [ first_media ]
-                    
-                    default_reason = 'Deleted manually in Duplicate Filter.'
-                    
-                elif value == 'both':
-                    
-                    media = [ first_media, second_media ]
-                    
-                    default_reason = 'Deleted manually in Duplicate Filter, along with its potential duplicate.'
-                    
-                else:
-                    
-                    return False
-                    
-                
-            else:
-                
-                return False
-                
-            
+        try:
+            
+            result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return False
+            
+        
+        if result == 'current':
+            
+            media = [ first_media ]
+            
+            default_reason = 'Deleted manually in Duplicate Filter.'
+            
+        elif result == 'both':
+            
+            media = [ first_media, second_media ]
+            
+            default_reason = 'Deleted manually in Duplicate Filter, along with its potential duplicate.'
+            
+        else:
+            
+            return False
+            
         
         jobs = CanvasWithHovers._Delete( self, media = media, default_reason = default_reason, file_service_key = file_service_key, just_get_jobs = True )
@@ -3112,7 +3105,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
         duplicate_action_options = None
         
-        text = 'Delete any of the files?'
+        message = 'Delete any of the files?'
         
         yes_tuples = []
         
@@ -3121,35 +3114,30 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
         yes_tuples.append( ( 'delete the other', 'delete_second' ) )
         yes_tuples.append( ( 'delete both', 'delete_both' ) )
         
+        try:
+            
+            result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+            
+        except HydrusExceptions.CancelledException:
+            
+            return
+            
+        
         delete_first = False
         delete_second = False
         delete_both = False
         
-        with ClientGUIDialogs.DialogYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-            
-            result = dlg.exec()
-            
-            if result == QW.QDialog.Accepted:
-                
-                value = dlg.GetValue()
-                
-                if value == 'delete_first':
-                    
-                    delete_first = True
-                    
-                elif value == 'delete_second':
-                    
-                    delete_second = True
-                    
-                elif value == 'delete_both':
-                    
-                    delete_both = True
-                    
-                
-            else:
-                
-                return
-                
-            
+        if result == 'delete_first':
+            
+            delete_first = True
+            
+        elif result == 'delete_second':
+            
+            delete_second = True
+            
+        elif result == 'delete_both':
+            
+            delete_both = True
+            
         
         self._ProcessPair( duplicate_type, delete_first = delete_first, delete_second = delete_second, delete_both = delete_both, duplicate_action_options = duplicate_action_options )
@@ -1585,6 +1585,7 @@ class CanvasHoverFrameRightDuplicates( CanvasHoverFrame ):
         menu_items.append( ( 'normal', 'edit background lighten/darken switch intensity', 'edit how much the background will brighten or darken as you switch between the pair', self._EditBackgroundSwitchIntensity ) )
         
         self._cog_button = ClientGUIMenuButton.MenuBitmapButton( self, CC.global_pixmaps().cog, menu_items )
+        self._cog_button.setFocusPolicy( QC.Qt.TabFocus )
         
         close_button = ClientGUICommon.BetterBitmapButton( self, CC.global_pixmaps().stop, HG.client_controller.pub, 'canvas_close', self._canvas_key )
         close_button.setToolTip( 'close filter' )
@@ -768,13 +768,13 @@ class ReviewAllBandwidthPanel( ClientGUIScrolledPanels.ReviewPanel ):
     
     def _ShowDefaultRulesHelp( self ):
         
-        help_text = 'Network requests act in multiple contexts. Most use the \'global\' and \'web domain\' network contexts, but a hydrus server request, for instance, will add its own service-specific context, and a subscription will add both itself and its downloader.'
+        help_text = 'Network requests act in multiple contexts. Most use the \'global\' and \'web domain\' network contexts, but a downloader page or subscription will also add a special label for itself. Each context can have its own set of bandwidth rules.'
         help_text += os.linesep * 2
-        help_text += 'If a network context does not have some specific rules set up, it will use its respective default, which may or may not have rules of its own. If you want to set general policy, like "Never download more than 1GB/day from any individual website," or "Limit the entire client to 2MB/s," do it through \'global\' and these defaults.'
+        help_text += 'If a network context does not have some specific rules set up, it will fall back to its respective default. It is possible for a default to not have any rules. If you want to set general policy, like "Never download more than 1GB/day from any individual website," or "Limit the entire client to 2MB/s," do it through \'global\' and the defaults.'
         help_text += os.linesep * 2
-        help_text += 'All contexts\' rules are consulted and have to pass before a request can do work. If you set a 200KB/s limit on a website domain and a 50KB/s limit on global, your download will only ever run at 50KB/s. To make sense, network contexts with broader scope should have more lenient rules.'
+        help_text += 'All contexts\' rules are consulted and have to pass before a request can do work. If you set a 2MB/s limit on a website domain and a 64KB/s limit on global, your download will only ever run at 64KB/s (and it fact it will probably run much slower, since everything shares the global context!). To make sense, network contexts with broader scope should have more lenient rules.'
         help_text += os.linesep * 2
-        help_text += 'There are two special \'instance\' contexts, for downloaders and threads. These represent individual queries, either a single gallery search or a single watched thread. It is useful to set rules for these so your searches will gather a fast initial sample of results in the first few minutes--so you can make sure you are happy with them--but otherwise trickle the rest in over time. This keeps your CPU and other bandwidth limits less hammered and helps to avoid accidental downloads of many thousands of small bad files or a few hundred gigantic files all in one go.'
+        help_text += 'There are two special ephemeral \'instance\' contexts, for downloaders and thread watchers. These represent individual queries, either a single gallery search or a single watched thread. It can be useful to set default rules for these so your searches will gather a fast initial sample of results in the first few minutes--so you can make sure you are happy with them--but otherwise trickle the rest in over time. This keeps your CPU and other bandwidth limits less hammered and helps to avoid accidental downloads of many thousands of small bad files or a few hundred gigantic files all in one go.'
         help_text += os.linesep * 2
         help_text += 'Please note that this system bases its calendar dates on UTC/GMT time (it helps servers and clients around the world stay in sync a bit easier). This has no bearing on what, for instance, the \'past 24 hours\' means, but monthly transitions may occur a few hours off whatever your midnight is.'
         help_text += os.linesep * 2
@@ -1575,59 +1575,45 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea, CAC.Applicatio
         if job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA:
             
-            text = 'This will reparse the {} selected files\' metadata.'.format( HydrusData.ToHumanInt( num_files ) )
+            message = 'This will reparse the {} selected files\' metadata.'.format( HydrusData.ToHumanInt( num_files ) )
             
-            text += os.linesep * 2
+            message += os.linesep * 2
             
-            text += 'If the files were imported before some more recent improvement in the parsing code (such as EXIF rotation or bad video resolution or duration or frame count calculation), this will update them.'
+            message += 'If the files were imported before some more recent improvement in the parsing code (such as EXIF rotation or bad video resolution or duration or frame count calculation), this will update them.'
             
         elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL:
             
-            text = 'This will force-regenerate the {} selected files\' thumbnails.'.format( HydrusData.ToHumanInt( num_files ) )
+            message = 'This will force-regenerate the {} selected files\' thumbnails.'.format( HydrusData.ToHumanInt( num_files ) )
             
         elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL:
             
-            text = 'This will regenerate the {} selected files\' thumbnails, but only if they are the wrong size.'.format( HydrusData.ToHumanInt( num_files ) )
+            message = 'This will regenerate the {} selected files\' thumbnails, but only if they are the wrong size.'.format( HydrusData.ToHumanInt( num_files ) )
             
         do_it_now = True
         
         if num_files > 50:
             
-            text += os.linesep * 2
+            message += os.linesep * 2
             
-            text += 'You have selected {} files, so this job may take some time. You can run it all now or schedule it to the overall file maintenance queue for later spread-out processing.'.format( HydrusData.ToHumanInt( num_files ) )
+            message += 'You have selected {} files, so this job may take some time. You can run it all now or schedule it to the overall file maintenance queue for later spread-out processing.'.format( HydrusData.ToHumanInt( num_files ) )
             
             yes_tuples = []
             
             yes_tuples.append( ( 'do it now', 'now' ) )
             yes_tuples.append( ( 'do it later', 'later' ) )
             
-            with ClientGUIDialogs.DialogYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' ) as dlg:
-                
-                if dlg.exec() == QW.QDialog.Accepted:
-                    
-                    value = dlg.GetValue()
-                    
-                    if value == 'now':
-                        
-                        do_it_now = True
-                        
-                    elif value == 'later':
-                        
-                        do_it_now = False
-                        
-                    else:
-                        
-                        return
-                        
-                    
-                else:
-                    
-                    return
-                    
-                
+            try:
+                
+                result = ClientGUIDialogsQuick.GetYesYesNo( self, message, yes_tuples = yes_tuples, no_label = 'forget it' )
+                
+            except HydrusExceptions.CancelledException:
+                
+                return
+                
+            
+            do_it_now = result == 'now'
             
         else:
             
-            result = ClientGUIDialogsQuick.GetYesNo( self, text )
+            result = ClientGUIDialogsQuick.GetYesNo( self, message )
             
             if result != QW.QDialog.Accepted:
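The hunk above swaps a modal dialog driven through a context manager for a single `ClientGUIDialogsQuick.GetYesYesNo` call that returns the clicked button's data, or raises `HydrusExceptions.CancelledException` when the user backs out. A minimal sketch of that 'cancel raises' pattern, where `get_yes_yes_no` is a hypothetical stand-in for the real Qt helper (it simulates the user's click instead of showing a dialog):

```python
class CancelledException( Exception ):
    
    pass
    

def get_yes_yes_no( message, yes_tuples, simulated_choice ):
    
    # return the data for the button the 'user' clicked, or raise on cancel
    for ( label, data ) in yes_tuples:
        
        if data == simulated_choice:
            
            return data
            
        
    raise CancelledException()
    

def schedule_job( simulated_choice ):
    
    yes_tuples = [ ( 'do it now', 'now' ), ( 'do it later', 'later' ) ]
    
    try:
        
        result = get_yes_yes_no( 'Run this job?', yes_tuples, simulated_choice )
        
    except CancelledException:
        
        return None # user backed out, do nothing
        
    
    # with cancellation handled above, the remaining branch is a simple test
    do_it_now = result == 'now'
    
    return do_it_now
    
```

The payoff of the refactor is visible in the caller: the nested if/elif/else over dialog return codes collapses into one `try`/`except` plus a boolean.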
@@ -1919,6 +1919,21 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
         return latest_timestamp
         
     
+    def _GetMyFileSeed( self, file_seed: FileSeed ) -> typing.Optional[ FileSeed ]:
+        
+        search_file_seeds = file_seed.GetSearchFileSeeds()
+        
+        for f_s in self._file_seeds:
+            
+            if f_s in search_file_seeds:
+                
+                return f_s
+                
+            
+        
+        return None
+        
+    
     def _GetNextFileSeed( self, status: int ) -> typing.Optional[ FileSeed ]:
         
         # the problem with this is if a file seed recently changed but 'notifyupdated' hasn't had a chance to go yet
@@ -2249,14 +2264,14 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
     
-    def AddFileSeeds( self, file_seeds: typing.Collection[ FileSeed ] ):
+    def AddFileSeeds( self, file_seeds: typing.Collection[ FileSeed ], dupe_try_again = True ):
         
         if len( file_seeds ) == 0:
             
             return 0
             
         
-        new_file_seeds = []
+        updated_or_new_file_seeds = []
         
         with self._lock:
@@ -2264,6 +2279,21 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
                 if self._HasFileSeed( file_seed ):
                     
+                    if dupe_try_again:
+                        
+                        f_s = self._GetMyFileSeed( file_seed )
+                        
+                        if f_s is not None:
+                            
+                            if f_s.status == CC.STATUS_ERROR:
+                                
+                                f_s.SetStatus( CC.STATUS_UNKNOWN )
+                                
+                                updated_or_new_file_seeds.append( f_s )
+                                
+                            
+                        
+                    
                     continue
@@ -2278,7 +2308,7 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
                     continue
                     
                 
-                new_file_seeds.append( file_seed )
+                updated_or_new_file_seeds.append( file_seed )
                 
                 self._file_seeds.append( file_seed )
@@ -2295,9 +2325,9 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
             self._SetStatusDirty()
             
         
-        self.NotifyFileSeedsUpdated( new_file_seeds )
+        self.NotifyFileSeedsUpdated( updated_or_new_file_seeds )
         
-        return len( new_file_seeds )
+        return len( updated_or_new_file_seeds )
         
    
     def AdvanceFileSeed( self, file_seed: FileSeed ):
@@ -760,7 +760,7 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
         self._status_dirty = True
         
     
-    def AddGallerySeeds( self, gallery_seeds ):
+    def AddGallerySeeds( self, gallery_seeds, dupe_try_again = False ):
         
         if len( gallery_seeds ) == 0:
@@ -1155,7 +1155,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
         if len( file_seeds ) > 0:
             
-            self._file_seed_cache.AddFileSeeds( file_seeds )
+            self._file_seed_cache.AddFileSeeds( file_seeds, dupe_try_again = True )
             
             ClientImporting.WakeRepeatingJob( self._files_repeating_job )
@@ -1658,14 +1658,18 @@ class MediaList( object ):
         trashed = service_key in local_file_domains
         deleted_from_our_domain = self._location_context.IsOneDomain() and service_key in self._location_context.current_service_keys
         
+        we_are_looking_at_trash = self._location_context.IsOneDomain() and CC.TRASH_SERVICE_KEY in self._location_context.current_service_keys
         our_view_is_all_local = self._location_context.IncludesCurrent() and not self._location_context.IncludesDeleted() and self._location_context.current_service_keys.issubset( all_local_file_services )
         
+        # case one, disappeared from hard drive and we are looking at local files
         physically_deleted_and_local_view = physically_deleted and our_view_is_all_local
         
-        user_says_remove_and_trashed_from_non_trash_local_view = HC.options[ 'remove_trashed_files' ] and trashed and our_view_is_all_local and CC.TRASH_SERVICE_KEY not in self._location_context.current_service_keys
+        # case two, disappeared from repo hard drive while we are looking at it
         
         deleted_from_repo_and_repo_view = service_key not in all_local_file_services and deleted_from_our_domain
         
+        # case three, user asked for this to happen
+        user_says_remove_and_trashed_from_non_trash_local_view = HC.options[ 'remove_trashed_files' ] and trashed and not we_are_looking_at_trash
         
         if physically_deleted_and_local_view or user_says_remove_and_trashed_from_non_trash_local_view or deleted_from_repo_and_repo_view:
             
             self._RemoveMediaByHashes( hashes )
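The `MediaList` hunk above names its three removal cases explicitly and loosens case three: trashed files are now removed from any view that is not itself the trash, rather than only from 'all local' views. This pure-function sketch restates that logic with plain booleans (the parameter names are illustrative, not hydrus's real variables) so the shape of the final `or` test is easy to see:

```python
def should_remove( physically_deleted, trashed, deleted_from_repo_view,
                   view_is_all_local, looking_at_trash, remove_trashed_option ):
    
    # case one: file disappeared from the hard drive while we look at local files
    case_one = physically_deleted and view_is_all_local
    
    # case two: file was deleted from a repository we are looking at
    case_two = deleted_from_repo_view
    
    # case three: user opted in to removing trashed files, and this view is not the trash
    case_three = remove_trashed_option and trashed and not looking_at_trash
    
    return case_one or case_two or case_three
    
```

Note the behaviour change in case three: a trashed file in a repository view, say, now qualifies for removal, whereas the old condition also required an all-local view.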
@@ -80,7 +80,7 @@ options = {}
 # Misc
 
 NETWORK_VERSION = 20
-SOFTWARE_VERSION = 487
+SOFTWARE_VERSION = 488
 CLIENT_API_VERSION = 31
 
 SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -557,12 +557,13 @@ AUDIO_MP4 = 49
 UNDETERMINED_MP4 = 50
 APPLICATION_CBOR = 51
 APPLICATION_WINDOWS_EXE = 52
+AUDIO_WAVPACK = 53
 APPLICATION_OCTET_STREAM = 100
 APPLICATION_UNKNOWN = 101
 
 GENERAL_FILETYPES = { GENERAL_APPLICATION, GENERAL_AUDIO, GENERAL_IMAGE, GENERAL_VIDEO, GENERAL_ANIMATION }
 
-SEARCHABLE_MIMES = { IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_WEBP, IMAGE_TIFF, IMAGE_ICON, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_REALMEDIA, VIDEO_WEBM, VIDEO_OGV, VIDEO_MPEG, APPLICATION_CLIP, APPLICATION_PSD, APPLICATION_PDF, APPLICATION_ZIP, APPLICATION_RAR, APPLICATION_7Z, AUDIO_M4A, AUDIO_MP3, AUDIO_REALMEDIA, AUDIO_OGG, AUDIO_FLAC, AUDIO_WAVE, AUDIO_TRUEAUDIO, AUDIO_WMA, VIDEO_WMV, AUDIO_MKV, AUDIO_MP4 }
+SEARCHABLE_MIMES = { IMAGE_JPEG, IMAGE_PNG, IMAGE_APNG, IMAGE_GIF, IMAGE_WEBP, IMAGE_TIFF, IMAGE_ICON, APPLICATION_FLASH, VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_MKV, VIDEO_REALMEDIA, VIDEO_WEBM, VIDEO_OGV, VIDEO_MPEG, APPLICATION_CLIP, APPLICATION_PSD, APPLICATION_PDF, APPLICATION_ZIP, APPLICATION_RAR, APPLICATION_7Z, AUDIO_M4A, AUDIO_MP3, AUDIO_REALMEDIA, AUDIO_OGG, AUDIO_FLAC, AUDIO_WAVE, AUDIO_TRUEAUDIO, AUDIO_WMA, VIDEO_WMV, AUDIO_MKV, AUDIO_MP4, AUDIO_WAVPACK }
 
 STORABLE_MIMES = set( SEARCHABLE_MIMES ).union( { APPLICATION_HYDRUS_UPDATE_CONTENT, APPLICATION_HYDRUS_UPDATE_DEFINITIONS } )
@@ -574,7 +575,7 @@ IMAGES = { IMAGE_JPEG, IMAGE_PNG, IMAGE_BMP, IMAGE_WEBP, IMAGE_TIFF, IMAGE_ICON
 
 ANIMATIONS = { IMAGE_GIF, IMAGE_APNG }
 
-AUDIO = { AUDIO_M4A, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WAVE, AUDIO_WMA, AUDIO_REALMEDIA, AUDIO_TRUEAUDIO, AUDIO_MKV, AUDIO_MP4 }
+AUDIO = { AUDIO_M4A, AUDIO_MP3, AUDIO_OGG, AUDIO_FLAC, AUDIO_WAVE, AUDIO_WMA, AUDIO_REALMEDIA, AUDIO_TRUEAUDIO, AUDIO_MKV, AUDIO_MP4, AUDIO_WAVPACK }
 
 VIDEO = { VIDEO_AVI, VIDEO_FLV, VIDEO_MOV, VIDEO_MP4, VIDEO_WMV, VIDEO_MKV, VIDEO_REALMEDIA, VIDEO_WEBM, VIDEO_OGV, VIDEO_MPEG }
@@ -655,6 +656,7 @@ mime_enum_lookup = {
     'audio/wav' : AUDIO_WAVE,
     'audio/wave' : AUDIO_WAVE,
     'audio/x-ms-wma' : AUDIO_WMA,
+    'audio/wavpack' : AUDIO_WAVPACK,
     'text/html' : TEXT_HTML,
     'text/plain' : TEXT_PLAIN,
     'video/x-msvideo' : VIDEO_AVI,
@@ -707,6 +709,7 @@ mime_string_lookup = {
     AUDIO_REALMEDIA : 'realaudio',
     AUDIO_TRUEAUDIO : 'tta',
     AUDIO_WMA : 'wma',
+    AUDIO_WAVPACK : 'wavpack',
     TEXT_HTML : 'html',
     TEXT_PLAIN : 'plaintext',
     VIDEO_AVI : 'avi',
@@ -765,6 +768,7 @@ mime_mimetype_string_lookup = {
     AUDIO_REALMEDIA : 'audio/vnd.rn-realaudio',
     AUDIO_TRUEAUDIO : 'audio/x-tta',
     AUDIO_WMA : 'audio/x-ms-wma',
+    AUDIO_WAVPACK : 'audio/wavpack',
     TEXT_HTML : 'text/html',
     TEXT_PLAIN : 'text/plain',
     VIDEO_AVI : 'video/x-msvideo',
@@ -823,6 +827,7 @@ mime_ext_lookup = {
     AUDIO_WAVE : '.wav',
     AUDIO_TRUEAUDIO : '.tta',
     AUDIO_WMA : '.wma',
+    AUDIO_WAVPACK : '.wv',
     TEXT_HTML : '.html',
     TEXT_PLAIN : '.txt',
     VIDEO_AVI : '.avi',
@@ -61,6 +61,7 @@ headers_and_mime.extend( [
     ( ( ( 4, b'ftypqt' ), ), HC.VIDEO_MOV ),
     ( ( ( 0, b'fLaC' ), ), HC.AUDIO_FLAC ),
     ( ( ( 0, b'RIFF' ), ( 8, b'WAVE' ) ), HC.AUDIO_WAVE ),
+    ( ( ( 0, b'wvpk' ), ), HC.AUDIO_WAVPACK ),
     ( ( ( 8, b'AVI ' ), ), HC.VIDEO_AVI ),
     ( ( ( 0, b'\x30\x26\xB2\x75\x8E\x66\xCF\x11\xA6\xD9\x00\xAA\x00\x62\xCE\x6C' ), ), HC.UNDETERMINED_WM ),
     ( ( ( 0, b'\x4D\x5A\x90\x00\x03', ), ), HC.APPLICATION_WINDOWS_EXE )
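The `headers_and_mime` table above maps magic numbers to filetypes: each entry is a tuple of (offset, bytes) pairs that must all match at the start of the file, and the new entry recognises WavPack by its `wvpk` signature at offset 0. A self-contained sketch of how such a table is evaluated (plain mime strings stand in for the `HC` constants; the helper name `sniff_mime` is illustrative, not hydrus's):

```python
# each entry: ( ( ( offset, magic_bytes ), ... ), mime ) -- every pair must match
HEADERS_AND_MIME = [
    ( ( ( 0, b'fLaC' ), ), 'audio/flac' ),
    ( ( ( 0, b'RIFF' ), ( 8, b'WAVE' ) ), 'audio/wav' ),
    ( ( ( 0, b'wvpk' ), ), 'audio/wavpack' ),
]

def sniff_mime( first_bytes ):
    
    for ( rules, mime ) in HEADERS_AND_MIME:
        
        # compare each magic string against the slice at its offset
        if all( first_bytes[ offset : offset + len( magic ) ] == magic for ( offset, magic ) in rules ):
            
            return mime
            
        
    return 'application/octet-stream'
    
```

The two-pair WAV entry shows why offsets matter: a RIFF container only counts as WAVE audio if the form type at byte 8 says so.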
@@ -562,6 +562,14 @@ def GetMime( path ):
             return HC.AUDIO_WMA
             
         
+        elif mime_text == 'wav':
+            
+            return HC.AUDIO_WAVE
+            
+        elif mime_text == 'wv':
+            
+            return HC.AUDIO_WAVPACK
+            
         
     return HC.APPLICATION_UNKNOWN