Version 489
This commit is contained in:
parent
e624414cfc
commit
9df036c0ec
@@ -3,7 +3,36 @@

!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 488](https://github.com/hydrusnetwork/hydrus/releases/tag/v487)
## [Version 489](https://github.com/hydrusnetwork/hydrus/releases/tag/v489)

### downloader pages

* greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and texts are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner
* the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending!
* when you pause mid-job, the 'pausing - status' text is generated a little more neatly too
* with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI
* any critical unhandled errors during importing proper now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they previously could spam every thirty seconds)
* the simple downloader and urls downloader now support the 'delay work until later' error system. actual UI for status reporting on these downloaders remains limited, however
* a bunch of misc downloader page cleanup

### archive/delete

* the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible
* fixed archive/delete commit for users with the 'archived file delete lock' turned on

### misc

* fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed
* the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately
* optimised the master tag update routine when you petition tags
* the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record
* thanks to a user, the 'getting started with files' help has had a pass
* I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear

### important repository processing fixes

* I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically in the background within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues
* I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising
* also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed
* there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this

## [Version 488](https://github.com/hydrusnetwork/hydrus/releases/tag/v488)

### all misc this week

* the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!
@@ -311,38 +340,3 @@

* cleaned up an old wx label patch
* cleaned up an old wx system colour patch
* cleaned up some misc initialisation code

## [Version 478](https://github.com/hydrusnetwork/hydrus/releases/tag/v478)

### misc

* if a file note text is crazy and can't be displayed, this is now handled and the best visual approximation is displayed (and saved back on ok) instead
* fixed an error in the cloudflare problem detection calls for the newer versions of cloudscraper (>=1.2.60) while maintaining support for the older versions. fingers crossed, we also shouldn't repeat this specific error if they refactor again

### file history chart updates

* fixed the 'inbox' line in file history, which has to be calculated in an odd way and was not counting on file imports adding to the inbox
* the file history chart now expands its y axis range to show all data even if deleted_files is huge. we'll see how nice this actually is IRL
* bumped the file history resolution up from 1,000 to 2,000 steps
* the y axis _should_ now show localised numbers, 5,000 instead of 5000, but the method by which this occurs involves fox tongues and the breath of a slighted widow, so it may just not work for some machines

### cleanup, mostly file location stuff

* I believe I have replaced all the remaining surplus static 'my files' references with code compatible with multiple local file services. when I add the capability to create new local file services, there now won't be a problem trying to display thumbnails or generate menu actions etc... if they aren't in 'my files'
* pulled the autocomplete dropdown file domain button code out to its own class and refactored it and the multiple location context panel to their own file
* added a 'default file location' option to 'files and trash' page, and a bunch of dialogs (e.g. the search panel when you make a new export folder) and similar now pull it to initialise. for most users this will stay 'my files' forever, but when we hit multiple local file services, it may want to change
* the file domain override options in 'manage tag display and search' now work on the new location system and support multiple file services
* in downloaders, when highlighting, a database job that does the 'show files' filter (e.g. to include those in trash or not) now works on the new location context system and will handle files that will be imported to places other than my files
* refactored client api file service parsing
* refactored client api hashes parsing
* cleaned a whole heap of misc location code
* cleaned misc basic code across hydrus and client constant files
* gave 'you don't want the server' help page a very quick pass

### client api

* in prep for multiple local file services, delete_files now takes an optional file_service_key or file_service_name. by default, it now deletes from all appropriate local services, so behaviour is unchanged from before without the parameter if you just want to delete m8
* undelete files is the same. when we have multiple local file services, an undelete without a file service will undelete to all locations that have a delete record
* delete_files also now takes an optional 'reason' parameter
* the 'set_notes' command now checks the type of the notes Object. it obviously has to be string-to-string
* the 'get_thumbnail' command should now never 404. if you ask for a pdf thumb, it gives the pdf default thumb, and if there is no thumb for whatever reason, you get the hydrus fallback thumbnail. just like in the client itself
* updated client api help to talk about these
* updated the unit tests to handle them too
* did a pass over the client api help to unify indent style and fix other small formatting issues
* client api version is now 28
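The delete_files changes above can be sketched as a request-body builder. This is a hedged illustration, not hydrus's own code: the endpoint is the Client API's `/add_files/delete_files`, and the parameter names (`hashes`, `file_service_name`, `reason`) follow the changelog entry; the helper function name is invented for the example.

```python
import json

API_URL = 'http://127.0.0.1:45869'  # the default Client API port

def build_delete_files_request( hashes, file_service_name = None, reason = None ):
    
    # with no file service given, the client deletes from all appropriate
    # local services, so old callers keep their old behaviour
    body = { 'hashes': list( hashes ) }
    
    if file_service_name is not None:
        
        body[ 'file_service_name' ] = file_service_name
        
    
    if reason is not None:
        
        body[ 'reason' ] = reason
        
    
    # this JSON would be POSTed to API_URL + '/add_files/delete_files'
    # with the Hydrus-Client-API-Access-Key header set
    return json.dumps( body )
```

The optional parameters are simply omitted from the body when unused, which is how the API distinguishes "delete everywhere" from a targeted delete.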
@@ -647,7 +647,14 @@ Response description:

:   200 and no content.

!!! note
    Note also that hydrus tag actions are safely idempotent. You can pend a tag that is already pended and not worry about an error--it will be discarded. The same for other reasonable logical scenarios: deleting a tag that does not exist will silently make no change, pending a tag that is already 'current' will again be passed over. It is fine to just throw 'process this' tags at every file import you add and not have to worry about checking which files you already added it to.
    Note also that hydrus tag actions are safely idempotent. You can pend a tag that is already pended, or add a tag that already exists, and not worry about an error--the surplus add action will be discarded. The same is true if you try to pend a tag that is already 'current', or rescind a petition that does not exist. Any invalid actions will fail silently.

    It is fine to just throw your 'process this' tags at every file import and not have to worry about checking which files you already added them to.

!!! danger "HOWEVER"
    When you delete a tag, a deletion record is made _even if the tag does not exist on the file_. This is important if you expect to add the tags again via parsing, because, in general, when hydrus adds tags through a downloader, it will not overwrite a previously 'deleted' tag record (this is to stop re-downloads overwriting the tags you hand-removed previously). Undeletes usually have to be done manually by a human.

    So, _do_ be careful about how you spam delete unless it is something that doesn't matter or it is something you'll only be touching again via the API anyway.
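The deletion-record warning above is worth seeing concretely. Here is a hedged sketch of an `/add_tags/add_tags` request body that deletes tags--the shape follows the Client API's `service_names_to_actions_to_tags` convention, where action `1` is 'delete' for a local tag service; the helper name and the `'my tags'` service name are illustrative assumptions, not hydrus code.

```python
import json

def build_add_tags_request( file_hash, tags_to_delete ):
    
    # action 1 is 'delete' on a local tag service. per the warning above,
    # deleting a tag that is not on the file STILL writes a deletion record,
    # which later downloader parses will refuse to overwrite
    body = {
        'hash': file_hash,
        'service_names_to_actions_to_tags': {
            'my tags': { '1': list( tags_to_delete ) }
        }
    }
    
    # POST this to /add_tags/add_tags with the Hydrus-Client-API-Access-Key header
    return json.dumps( body )
```

Because the action is idempotent, sending this twice is harmless; the cost is the permanent deletion record, not an error.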
## Adding URLs
@@ -33,6 +33,35 @@

<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_489"><a href="#version_489">version 489</a></h3></li>
<ul>
<li>downloader pages:</li>
<li>greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and texts are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner</li>
<li>the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending!</li>
<li>when you pause mid-job, the 'pausing - status' text is generated a little more neatly too</li>
<li>with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI</li>
<li>any critical unhandled errors during importing proper now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they previously could spam every thirty seconds)</li>
<li>the simple downloader and urls downloader now support the 'delay work until later' error system. actual UI for status reporting on these downloaders remains limited, however</li>
<li>a bunch of misc downloader page cleanup</li>
<li>.</li>
<li>archive/delete:</li>
<li>the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible</li>
<li>fixed archive/delete commit for users with the 'archived file delete lock' turned on</li>
<li>.</li>
<li>misc:</li>
<li>fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed</li>
<li>the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately</li>
<li>optimised the master tag update routine when you petition tags</li>
<li>the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record</li>
<li>thanks to a user, the 'getting started with files' help has had a pass</li>
<li>I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear</li>
<li>.</li>
<li>important repository processing fixes:</li>
<li>I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically in the background within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues</li>
<li>I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising</li>
<li>also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed</li>
<li>there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this</li>
</ul>
<li><h3 id="version_488"><a href="#version_488">version 488</a></h3></li>
<ul>
<li>the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!</li>
@@ -361,6 +361,9 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):

        self._dictionary[ 'integers' ][ 'max_network_jobs' ] = 15
        self._dictionary[ 'integers' ][ 'max_network_jobs_per_domain' ] = 3
        
        self._dictionary[ 'integers' ][ 'max_connection_attempts_allowed' ] = 5
        self._dictionary[ 'integers' ][ 'max_request_attempts_allowed_get' ] = 5
        
        from hydrus.core import HydrusImageHandling
        
        self._dictionary[ 'integers' ][ 'thumbnail_scale_type' ] = HydrusImageHandling.THUMBNAIL_SCALE_DOWN_ONLY
@@ -1,15 +1,17 @@

import typing

from hydrus.core import HydrusData

def ShouldUpdateDomainModifiedTime( existing_timestamp: int, new_timestamp: int ):
def ShouldUpdateDomainModifiedTime( existing_timestamp: int, new_timestamp: typing.Optional[ int ] ) -> bool:
    
    # assume anything too early is a meme and a timestamp parsing conversion error
    if new_timestamp <= 86400 * 7:
    if not TimestampIsSensible( new_timestamp ):
        
        return False
        
    
    if not TimestampIsSensible( existing_timestamp ):
        
        return True
        
    
    # only go backwards, in general
    if new_timestamp >= existing_timestamp:
@@ -20,14 +22,14 @@ def ShouldUpdateDomainModifiedTime( existing_timestamp: int, new_timestamp: int

def MergeModifiedTimes( existing_timestamp: typing.Optional[ int ], new_timestamp: typing.Optional[ int ] ) -> typing.Optional[ int ]:
    
    if existing_timestamp is None:
    if not TimestampIsSensible( existing_timestamp ):
        
        return new_timestamp
        existing_timestamp = None
        
    
    if new_timestamp is None:
    if not TimestampIsSensible( new_timestamp ):
        
        return existing_timestamp
        new_timestamp = None
        
    
    if ShouldUpdateDomainModifiedTime( existing_timestamp, new_timestamp ):
@@ -39,3 +41,19 @@ def MergeModifiedTimes( existing_timestamp: typing.Optional[ int ], new_timestam

    return existing_timestamp
    

def TimestampIsSensible( timestamp: typing.Optional[ int ] ) -> bool:
    
    if timestamp is None:
        
        return False
        
    
    # assume anything too early is a meme and a timestamp parsing conversion error
    if timestamp <= 86400 * 7:
        
        return False
        
    
    return True
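Pieced together from the added lines in the three hunks above, the new timestamp-merging logic reads roughly as follows. This is a self-contained sketch: the final return of `ShouldUpdateDomainModifiedTime` is cut off by the diff and is assumed here from its "only go backwards" comment, and the compact indentation is mine, not hydrus's house style.

```python
import typing

# anything within a week of the unix epoch is treated as a parsing error (the ~1970-01-01 bug)
WEEK_OF_SECONDS = 86400 * 7

def TimestampIsSensible( timestamp: typing.Optional[ int ] ) -> bool:
    
    if timestamp is None:
        return False
    
    # assume anything too early is a meme and a timestamp parsing conversion error
    if timestamp <= WEEK_OF_SECONDS:
        return False
    
    return True

def ShouldUpdateDomainModifiedTime( existing_timestamp: typing.Optional[ int ], new_timestamp: typing.Optional[ int ] ) -> bool:
    
    if not TimestampIsSensible( new_timestamp ):
        return False
    
    if not TimestampIsSensible( existing_timestamp ):
        return True
    
    # only go backwards, in general (assumed: the hunk ends before this return)
    return new_timestamp < existing_timestamp

def MergeModifiedTimes( existing_timestamp: typing.Optional[ int ], new_timestamp: typing.Optional[ int ] ) -> typing.Optional[ int ]:
    
    # nonsense values are normalised to None before comparing
    if not TimestampIsSensible( existing_timestamp ):
        existing_timestamp = None
    
    if not TimestampIsSensible( new_timestamp ):
        new_timestamp = None
    
    if ShouldUpdateDomainModifiedTime( existing_timestamp, new_timestamp ):
        return new_timestamp
    
    return existing_timestamp
```

The net effect is that the earliest *sensible* timestamp wins, while 1970-adjacent junk can never displace a good value--which is exactly what the changelog's "bad 'last modified' timestamps" fix describes.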
@@ -262,6 +262,11 @@ class DB( HydrusDB.HydrusDB ):

            self._AddFiles( self.modules_services.combined_local_file_service_id, valid_rows )
            
        
        if service_type == HC.LOCAL_FILE_UPDATE_DOMAIN:
            
            self._AddFiles( self.modules_services.combined_local_file_service_id, valid_rows )
            
        
        # insert the files
        
        pending_changed = self.modules_files_storage.AddFiles( service_id, valid_rows )
@@ -2349,7 +2354,12 @@ class DB( HydrusDB.HydrusDB ):

        culled_mappings_ids = []
        
        for ( tag_id, hash_ids ) in mappings_ids:
        for row in mappings_ids:
            
            # mappings_ids here can have 'reason_id' for petitions, so we'll index our values here
            
            tag_id = row[0]
            hash_ids = row[1]
            
            if len( hash_ids ) == 0:
@@ -2420,6 +2430,36 @@ class DB( HydrusDB.HydrusDB ):

                        valid_hash_ids = hash_ids
                        
                    
                    elif action == HC.CONTENT_UPDATE_PETITION:
                        
                        result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( petitioned_mappings_table_name ), ( tag_id, hash_id ) ).fetchone()
                        
                        if result is None:
                            
                            valid_hash_ids = hash_ids
                            
                        else:
                            
                            continue
                            
                        
                    elif action == HC.CONTENT_UPDATE_RESCIND_PETITION:
                        
                        result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( petitioned_mappings_table_name ), ( tag_id, hash_id ) ).fetchone()
                        
                        if result is None:
                            
                            continue
                            
                        else:
                            
                            valid_hash_ids = hash_ids
                            
                        
                    else:
                        
                        valid_hash_ids = set()
                        
                    
                else:
@@ -2439,7 +2479,9 @@ class DB( HydrusDB.HydrusDB ):

                elif action == HC.CONTENT_UPDATE_PEND:
                    
                    # prohibited hash_ids
                    existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, current_mappings_table_name ), ( tag_id, ) ) )
                    # existing_hash_ids
                    existing_hash_ids.update( self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) ) )
                    
                    valid_hash_ids = set( hash_ids ).difference( existing_hash_ids )
@@ -2448,12 +2490,36 @@ class DB( HydrusDB.HydrusDB ):

                    valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) )
                    
                elif action == HC.CONTENT_UPDATE_PETITION:
                    
                    # we are technically ok with deleting tags that don't exist yet!
                    existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, petitioned_mappings_table_name ), ( tag_id, ) ) )
                    
                    valid_hash_ids = set( hash_ids ).difference( existing_hash_ids )
                    
                elif action == HC.CONTENT_UPDATE_RESCIND_PETITION:
                    
                    valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, petitioned_mappings_table_name ), ( tag_id, ) ) )
                    
                else:
                    
                    valid_hash_ids = set()
                    
                
            
            if len( valid_hash_ids ) > 0:
                
                culled_mappings_ids.append( ( tag_id, valid_hash_ids ) )
                if action == HC.CONTENT_UPDATE_PETITION:
                    
                    reason_id = row[2]
                    
                    culled_mappings_ids.append( ( tag_id, valid_hash_ids, reason_id ) )
                    
                else:
                    
                    culled_mappings_ids.append( ( tag_id, valid_hash_ids ) )
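The culling loop these hunks build handles rows of mixed arity: normal rows are `( tag_id, hash_ids )`, while petition rows carry a trailing `reason_id`, which is why the code switched from tuple unpacking to positional indexing. A minimal standalone sketch of that pattern (names and the filter callback are illustrative, not hydrus's):

```python
from typing import Callable, Iterable, List, Set

def cull_mapping_rows(
    rows: Iterable[ tuple ],
    get_valid_hash_ids: Callable[ [ int, Set[ int ] ], Set[ int ] ]
) -> List[ tuple ]:
    
    # rows are ( tag_id, hash_ids ) or, for petitions, ( tag_id, hash_ids, reason_id )
    culled = []
    
    for row in rows:
        
        tag_id = row[0]
        hash_ids = row[1]
        
        if len( hash_ids ) == 0:
            continue
        
        valid_hash_ids = get_valid_hash_ids( tag_id, set( hash_ids ) )
        
        if len( valid_hash_ids ) == 0:
            continue
        
        if len( row ) == 3:
            # petition row: carry the reason_id through with the culled hash_ids
            culled.append( ( tag_id, valid_hash_ids, row[2] ) )
        else:
            culled.append( ( tag_id, valid_hash_ids ) )
        
    
    return culled
```

Indexing by position rather than unpacking lets one loop serve both shapes without a separate petition code path.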
@@ -11872,6 +11938,36 @@ class DB( HydrusDB.HydrusDB ):

        
        if version == 488:
            
            # clearing up some garbo 1970-01-01 timestamps that got saved
            self._Execute( 'DELETE FROM file_domain_modified_timestamps WHERE file_modified_timestamp < ?;', ( 86400 * 7, ) )
            
            #
            
            # mysterious situation where repo updates domain had some ghost files that were not in all local files!
            
            hash_ids_in_repo_updates = set( self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.local_update_service_id ) )
            
            hash_ids_in_all_files = self.modules_files_storage.FilterHashIds( ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_FILE_SERVICE_KEY ), hash_ids_in_repo_updates )
            
            orphan_hash_ids = hash_ids_in_repo_updates.difference( hash_ids_in_all_files )
            
            if len( orphan_hash_ids ) > 0:
                
                hash_ids_to_timestamps = self.modules_files_storage.GetCurrentHashIdsToTimestamps( self.modules_services.local_update_service_id, orphan_hash_ids )
                
                rows = list( hash_ids_to_timestamps.items() )
                
                self.modules_files_storage.AddFiles( self.modules_services.combined_local_file_service_id, rows )
                
            
            # turns out ffmpeg was detecting some updates as mpegs, so this wasn't always working right!
            self.modules_files_maintenance_queue.AddJobs( hash_ids_in_repo_updates, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
            
            self._Execute( 'DELETE FROM service_info WHERE service_id = ?;', ( self.modules_services.local_update_service_id, ) )
            
        
        self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
        
        self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@@ -11892,6 +11988,8 @@ class DB( HydrusDB.HydrusDB ):

        deleted_mappings_ids = self._FilterExistingUpdateMappings( tag_service_id, deleted_mappings_ids, HC.CONTENT_UPDATE_DELETE )
        pending_mappings_ids = self._FilterExistingUpdateMappings( tag_service_id, pending_mappings_ids, HC.CONTENT_UPDATE_PEND )
        pending_rescinded_mappings_ids = self._FilterExistingUpdateMappings( tag_service_id, pending_rescinded_mappings_ids, HC.CONTENT_UPDATE_RESCIND_PEND )
        petitioned_mappings_ids = self._FilterExistingUpdateMappings( tag_service_id, petitioned_mappings_ids, HC.CONTENT_UPDATE_PETITION )
        petitioned_rescinded_mappings_ids = self._FilterExistingUpdateMappings( tag_service_id, petitioned_rescinded_mappings_ids, HC.CONTENT_UPDATE_RESCIND_PETITION )
        
        tag_ids_to_filter_chained = { tag_id for ( tag_id, hash_ids ) in itertools.chain.from_iterable( ( mappings_ids, deleted_mappings_ids, pending_mappings_ids, pending_rescinded_mappings_ids ) ) }
@@ -166,7 +166,9 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        raise Exception( message )
        
    
    def _RegisterUpdates( self, service_id, hash_ids = None ):
    def _RegisterLocalUpdates( self, service_id, hash_ids = None ):
        
        # this function takes anything in 'unregistered', sees what is local, and figures out the correct 'content types' for those hash ids in the 'processed' table, converting unknown/bad hash_ids to correct and ready to process
        
        # it is ok if this guy gets hash ids that are already in the 'processed' table--it'll now resync them and correct if needed
@@ -224,7 +226,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        if len( insert_rows ) > 0:
            
            processed = False
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), ( ( hash_id, content_type, processed ) for ( hash_id, content_type ) in insert_rows ) )
@@ -287,7 +289,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
        
        self._RegisterUpdates( service_id )
        self._RegisterLocalUpdates( service_id )
        
    
    def DropRepositoryTables( self, service_id: int ):
@@ -310,7 +312,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        for service_id in self.modules_services.GetServiceIds( HC.REPOSITORIES ):
            
            self._RegisterUpdates( service_id )
            self._RegisterLocalUpdates( service_id )
@@ -594,13 +596,15 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

    def NotifyUpdatesChanged( self, hash_ids ):
        
        # a mime changed
        
        for service_id in self.modules_services.GetServiceIds( HC.REPOSITORIES ):
            
            ( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
            
            self._RegisterUpdates( service_id, hash_ids )
            self._RegisterLocalUpdates( service_id, hash_ids )
@@ -608,7 +612,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        for service_id in self.modules_services.GetServiceIds( HC.REPOSITORIES ):
            
            self._RegisterUpdates( service_id, hash_ids )
            self._RegisterLocalUpdates( service_id, hash_ids )
@@ -722,19 +726,17 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        current_update_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_updates_table_name ) ) )
        
        deletee_hash_ids = current_update_hash_ids.difference( all_future_update_hash_ids )
        
        self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
        self._Execute( 'DELETE FROM {};'.format( repository_updates_table_name ) )
        
        #
        
        self._Execute( 'DELETE FROM {};'.format( repository_unregistered_updates_table_name ) )
        
        #
        # we want to keep 'yes we processed this' records on a full metadata resync
        
        good_current_hash_ids = current_update_hash_ids.intersection( all_future_update_hash_ids )
        
        current_processed_table_update_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_updates_processed_table_name ) ) )
        current_processed_table_update_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE processed = ?;'.format( repository_updates_processed_table_name ), ( True, ) ) )
        
        deletee_processed_table_update_hash_ids = current_processed_table_update_hash_ids.difference( good_current_hash_ids )
@@ -750,21 +752,14 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

            hash_id = self.modules_hashes_local_cache.GetHashId( update_hash )
            
            if hash_id in current_update_hash_ids:
                
                self._Execute( 'UPDATE {} SET update_index = ? WHERE hash_id = ?;'.format( repository_updates_table_name ), ( update_index, hash_id ) )
                
            else:
                
                inserts.append( ( update_index, hash_id ) )
                
            inserts.append( ( update_index, hash_id ) )
            
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
        self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in all_future_update_hash_ids ) )
        
        self._RegisterUpdates( service_id )
        self._RegisterLocalUpdates( service_id )
        
    
    def SetUpdateProcessed( self, service_id: int, update_hash: bytes, content_types: typing.Collection[ int ] ):

@@ -780,4 +775,3 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):

        self._ClearOutstandingWorkCache( service_id, content_type )
@@ -3313,7 +3313,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes, CAC.ApplicationCo

        ClientGUIMenus.AppendMenuItem( memory_actions, 'run fast memory maintenance', 'Tell all the fast caches to maintain themselves.', self._controller.MaintainMemoryFast )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'run slow memory maintenance', 'Tell all the slow caches to maintain themselves.', self._controller.MaintainMemorySlow )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'clear image rendering cache', 'Tell the image rendering system to forget all current images. This will often free up a bunch of memory immediately.', self._controller.ClearCaches )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'clear all rendering caches', 'Tell the image rendering system to forget all current images, tiles, and thumbs. This will often free up a bunch of memory immediately.', self._controller.ClearCaches )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'clear thumbnail cache', 'Tell the thumbnail cache to forget everything and redraw all current thumbs.', self._controller.pub, 'reset_thumbnail_cache' )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'print garbage', 'Print some information about the python garbage to the log.', self._DebugPrintGarbage )
        ClientGUIMenus.AppendMenuItem( memory_actions, 'take garbage snapshot', 'Capture current garbage object counts.', self._DebugTakeGarbageSnapshot )
@@ -37,6 +37,21 @@ def GetDeleteFilesJobs( win, media, default_reason, suggested_file_service_key =

def GetFinishArchiveDeleteFilteringAnswer( win, kept_label, deletion_options ):

with ClientGUITopLevelWindowsPanels.DialogCustomButtonQuestion( win, 'filtering done?' ) as dlg:

panel = ClientGUIScrolledPanelsButtonQuestions.QuestionArchiveDeleteFinishFilteringPanel( dlg, kept_label, deletion_options )

dlg.SetPanel( panel )

result = dlg.exec()
location_context = panel.GetLocationContext()
was_cancelled = dlg.WasCancelled()

return ( result, location_context, was_cancelled )

def GetFinishFilteringAnswer( win, label ):

with ClientGUITopLevelWindowsPanels.DialogCustomButtonQuestion( win, label ) as dlg:
@@ -234,7 +234,7 @@ class EditExportFolderPanel( ClientGUIScrolledPanels.EditPanel ):

self._query_box = ClientGUICommon.StaticBox( self, 'query to export' )

self._page_key = 'export folders placeholder'
self._page_key = b'export folders placeholder'

self._tag_autocomplete = ClientGUIACDropdown.AutoCompleteDropdownTagsRead( self._query_box, self._page_key, file_search_context, allow_all_known_files = False, force_system_everything = True )
@@ -1952,32 +1952,8 @@ class GalleryImportPanel( ClientGUICommon.StaticBox ):

ClientGUIFunctions.SetBitmapButtonBitmap( self._gallery_pause_button, CC.global_pixmaps().gallery_pause )

if gallery_paused:

if gallery_status == '':

gallery_status = 'paused'

else:

gallery_status = 'paused - ' + gallery_status

self._gallery_status.setText( gallery_status )

if files_paused:

if file_status == '':

file_status = 'paused'

else:

file_status = 'pausing - ' + file_status

self._file_status.setText( file_status )

( file_network_job, gallery_network_job ) = self._gallery_import.GetNetworkJobs()
@@ -2554,15 +2530,6 @@ class WatcherReviewPanel( ClientGUICommon.StaticBox ):

if files_paused:

if file_status == '':

file_status = 'paused'

else:

file_status = 'pausing, ' + file_status

ClientGUIFunctions.SetBitmapButtonBitmap( self._files_pause_button, CC.global_pixmaps().file_play )

else:

@@ -2576,11 +2543,6 @@ class WatcherReviewPanel( ClientGUICommon.StaticBox ):

if checking_paused:

if watcher_status == '':

watcher_status = 'paused'

ClientGUIFunctions.SetBitmapButtonBitmap( self._checking_pause_button, CC.global_pixmaps().gallery_play )

else:
@@ -1,9 +1,13 @@

import os
import typing

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

from hydrus.core import HydrusGlobals as HG

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation
from hydrus.client.gui import ClientGUIFunctions
from hydrus.client.gui import ClientGUIScrolledPanels
from hydrus.client.gui import QtPorting as QP
@@ -41,6 +45,133 @@ class QuestionCommitInterstitialFilteringPanel( ClientGUIScrolledPanels.Resizing

ClientGUIFunctions.SetFocusLater( self._commit )

class QuestionArchiveDeleteFinishFilteringPanel( ClientGUIScrolledPanels.ResizingScrolledPanel ):

def __init__( self, parent, kept_label: typing.Optional[ str ], deletion_options ):

ClientGUIScrolledPanels.ResizingScrolledPanel.__init__( self, parent )

self._location_context = ClientLocation.LocationContext() # empty

vbox = QP.VBoxLayout()

first_commit = None

if len( deletion_options ) == 0:

if kept_label is None:

kept_label = 'ERROR: do not seem to have any actions at all!'

label = '{}?'.format( kept_label )

st = ClientGUICommon.BetterStaticText( self, label )

st.setAlignment( QC.Qt.AlignCenter )

QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )

first_commit = ClientGUICommon.BetterButton( self, 'commit', self.DoCommit, ClientLocation.LocationContext() )
first_commit.setObjectName( 'HydrusAccept' )

QP.AddToLayout( vbox, first_commit, CC.FLAGS_EXPAND_PERPENDICULAR )

elif len( deletion_options ) == 1:

( location_context, delete_label ) = deletion_options[0]

if kept_label is None:

label = '{}?'.format( delete_label )

else:

label = '{} and {}?'.format( kept_label, delete_label )

st = ClientGUICommon.BetterStaticText( self, label )

st.setAlignment( QC.Qt.AlignCenter )

QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )

first_commit = ClientGUICommon.BetterButton( self, 'commit', self.DoCommit, location_context )
first_commit.setObjectName( 'HydrusAccept' )

QP.AddToLayout( vbox, first_commit, CC.FLAGS_EXPAND_PERPENDICULAR )

else:

if kept_label is not None:

label = '{}{}-and-'.format( kept_label, os.linesep )

st = ClientGUICommon.BetterStaticText( self, label )

st.setAlignment( QC.Qt.AlignCenter )

QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )

for ( location_context, delete_label ) in deletion_options:

label = '{}?'.format( delete_label )

st = ClientGUICommon.BetterStaticText( self, label )

st.setAlignment( QC.Qt.AlignCenter )

QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )

commit = ClientGUICommon.BetterButton( self, 'commit', self.DoCommit, location_context )
commit.setObjectName( 'HydrusAccept' )

QP.AddToLayout( vbox, commit, CC.FLAGS_EXPAND_PERPENDICULAR )

if first_commit is None:

first_commit = commit

self._forget = ClientGUICommon.BetterButton( self, 'forget', self.parentWidget().done, QW.QDialog.Rejected )
self._forget.setObjectName( 'HydrusCancel' )

self._back = ClientGUICommon.BetterButton( self, 'back to filtering', self.DoGoBack )

QP.AddToLayout( vbox, self._forget, CC.FLAGS_EXPAND_PERPENDICULAR )

st = ClientGUICommon.BetterStaticText( self, '-or-' )

st.setAlignment( QC.Qt.AlignCenter )

QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._back, CC.FLAGS_EXPAND_PERPENDICULAR )

self.widget().setLayout( vbox )

ClientGUIFunctions.SetFocusLater( first_commit )

def DoGoBack( self ):

self.parentWidget().SetCancelled( True )
self.parentWidget().done( QW.QDialog.Rejected )

def DoCommit( self, location_context ):

self._location_context = location_context
self.parentWidget().done( QW.QDialog.Accepted )

def GetLocationContext( self ) -> ClientLocation.LocationContext:

return self._location_context

class QuestionFinishFilteringPanel( ClientGUIScrolledPanels.ResizingScrolledPanel ):

def __init__( self, parent, label ):
@@ -304,8 +304,14 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

max_network_jobs_per_domain_max = 5

self._max_connection_attempts_allowed = ClientGUICommon.BetterSpinBox( general, min = 1, max = 10 )
self._max_connection_attempts_allowed.setToolTip( 'This refers to timeouts when actually making the initial connection.' )

self._max_request_attempts_allowed_get = ClientGUICommon.BetterSpinBox( general, min = 1, max = 10 )
self._max_request_attempts_allowed_get.setToolTip( 'This refers to timeouts when waiting for a response to our GET requests, whether that is the start or an interruption part way through.' )

self._network_timeout = ClientGUICommon.BetterSpinBox( general, min = network_timeout_min, max = network_timeout_max )
self._network_timeout.setToolTip( 'If a network connection cannot be made in this duration or, if once started, it experiences uninterrupted inactivity for six times this duration, it will be abandoned.' )
self._network_timeout.setToolTip( 'If a network connection cannot be made in this duration or, once started, it experiences inactivity for six times this duration, it will be considered dead and retried or abandoned.' )

self._connection_error_wait_time = ClientGUICommon.BetterSpinBox( general, min = error_wait_time_min, max = error_wait_time_max )
self._connection_error_wait_time.setToolTip( 'If a network connection times out as above, it will wait increasing multiples of this base time before retrying.' )

@@ -334,6 +340,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

self._https_proxy.SetValue( self._new_options.GetNoneableString( 'https_proxy' ) )
self._no_proxy.SetValue( self._new_options.GetNoneableString( 'no_proxy' ) )

self._max_connection_attempts_allowed.setValue( self._new_options.GetInteger( 'max_connection_attempts_allowed' ) )
self._max_request_attempts_allowed_get.setValue( self._new_options.GetInteger( 'max_request_attempts_allowed_get' ) )
self._network_timeout.setValue( self._new_options.GetInteger( 'network_timeout' ) )
self._connection_error_wait_time.setValue( self._new_options.GetInteger( 'connection_error_wait_time' ) )
self._serverside_bandwidth_wait_time.setValue( self._new_options.GetInteger( 'serverside_bandwidth_wait_time' ) )

@@ -362,6 +370,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

rows = []

rows.append( ( 'max connection attempts allowed per request: ', self._max_connection_attempts_allowed ) )
rows.append( ( 'max retries allowed per request: ', self._max_request_attempts_allowed_get ) )
rows.append( ( 'network timeout (seconds): ', self._network_timeout ) )
rows.append( ( 'connection error retry wait (seconds): ', self._connection_error_wait_time ) )
rows.append( ( 'serverside bandwidth retry wait (seconds): ', self._serverside_bandwidth_wait_time ) )

@@ -425,7 +435,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):

self._new_options.SetNoneableString( 'https_proxy', self._https_proxy.GetValue() )
self._new_options.SetNoneableString( 'no_proxy', self._no_proxy.GetValue() )

self._new_options.SetInteger( 'network_timeout', self._network_timeout.value() )
self._new_options.SetInteger( 'max_connection_attempts_allowed', self._max_connection_attempts_allowed.value() )
self._new_options.SetInteger( 'max_request_attempts_allowed_get', self._max_request_attempts_allowed_get.value() )
self._new_options.SetInteger( 'connection_error_wait_time', self._connection_error_wait_time.value() )
self._new_options.SetInteger( 'serverside_bandwidth_wait_time', self._serverside_bandwidth_wait_time.value() )
self._new_options.SetInteger( 'max_network_jobs', self._max_network_jobs.value() )
@@ -4224,7 +4224,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):

if delete_lock_for_archived_files:

deleted = [ media.GetHash() for media in self._deleted if not media.HasArchive() ]
deleted = [ media for media in self._deleted if not media.HasArchive() ]

else:

@@ -4233,33 +4233,70 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):

if len( kept ) > 0 or len( deleted ) > 0:

label_components = []

if len( kept ) > 0:

label_components.append( 'keep {}'.format( HydrusData.ToHumanInt( len( kept ) ) ) )
kept_label = 'keep {}'.format( HydrusData.ToHumanInt( len( kept ) ) )

else:

kept_label = None

deletion_options = []

if len( deleted ) > 0:

if self._location_context.IncludesCurrent():
location_contexts_to_present_options_for = [ self._location_context ]

current_local_service_keys = HydrusData.MassUnion( [ m.GetLocationsManager().GetCurrent() for m in deleted ] )

local_file_domain_service_keys = [ service_key for service_key in current_local_service_keys if HG.client_controller.services_manager.GetServiceType( service_key ) == HC.LOCAL_FILE_DOMAIN ]

location_contexts_to_present_options_for.extend( [ ClientLocation.LocationContext.STATICCreateSimple( service_key ) for service_key in local_file_domain_service_keys ] )

if len( local_file_domain_service_keys ) > 1:

location_context_label = self._location_context.ToString( HG.client_controller.services_manager.GetName )

else:

location_context_label = 'all possible local file services'
location_contexts_to_present_options_for.append( ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY ) )

label_components.append( 'delete (from {}) {}'.format( location_context_label, HydrusData.ToHumanInt( len( self._deleted ) ) ) )
specific_local_service_keys = [ service_key for service_key in current_local_service_keys if HG.client_controller.services_manager.GetServiceType( service_key ) in HC.SPECIFIC_LOCAL_FILE_SERVICES ]

if len( specific_local_service_keys ) > len( local_file_domain_service_keys ): # we have some trash or I guess repo updates

location_contexts_to_present_options_for.append( ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_FILE_SERVICE_KEY ) )

location_contexts_to_present_options_for = HydrusData.DedupeList( location_contexts_to_present_options_for )

for location_context in location_contexts_to_present_options_for:

file_service_keys = location_context.current_service_keys

num_deletable = len( [ m for m in deleted if len( m.GetLocationsManager().GetCurrent().intersection( file_service_keys ) ) > 0 ] )

if num_deletable > 0:

if location_context == ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY ):

location_label = 'all local file domains'

elif location_context == ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_FILE_SERVICE_KEY ):

location_label = 'my hard disk'

else:

location_label = location_context.ToString( HG.client_controller.services_manager.GetName )

delete_label = 'delete {} from {}'.format( HydrusData.ToHumanInt( num_deletable ), location_label )

deletion_options.append( ( location_context, delete_label ) )

label = '{} files?'.format( ' and '.join( label_components ) )

# TODO: ok so ideally we should total up the deleteds' actual file services and give users UI to select what they want to delete from
# so like '23 files in my files, 2 in favourites' and then a 'yes' button for all or just my files or favourites

( result, cancelled ) = ClientGUIDialogsQuick.GetFinishFilteringAnswer( self, label )
( result, deletee_location_context, cancelled ) = ClientGUIDialogsQuick.GetFinishArchiveDeleteFilteringAnswer( self, kept_label, deletion_options )

if cancelled:

@@ -4282,7 +4319,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):

self._current_media = self._GetFirst() # so the pubsub on close is better

HG.client_controller.CallToThread( CommitArchiveDelete, self._page_key, self._location_context, kept, deleted )
HG.client_controller.CallToThread( CommitArchiveDelete, self._page_key, deletee_location_context, kept, deleted )
@@ -1830,18 +1830,6 @@ class ManagementPanelImporterHDD( ManagementPanelImporter ):

ClientGUIFunctions.SetBitmapButtonBitmap( self._pause_button, CC.global_pixmaps().file_pause )

if paused:

if current_action == '':

current_action = 'paused'

else:

current_action = 'pausing - ' + current_action

self._current_action.setText( current_action )

@@ -3862,18 +3850,8 @@ class ManagementPanelImporterSimpleDownloader( ManagementPanelImporter ):

self._pending_jobs_listbox.SelectData( selected_jobs )

if queue_paused:

parser_status = 'paused'

self._parser_status.setText( parser_status )

if current_action == '' and files_paused:

current_action = 'paused'

self._current_action.setText( current_action )

if files_paused:

@@ -2142,7 +2142,7 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea, CAC.Applicatio

def ClearPageKey( self ):

self._page_key = 'dead media panel page key'
self._page_key = b'dead media panel page key'

def Collect( self, page_key, media_collect = None ):
@@ -0,0 +1,129 @@

import typing

from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG

from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds
from hydrus.client.importing.options import FileImportOptions

def CheckImporterCanDoFileWorkBecausePaused( paused: bool, file_seed_cache: ClientImportFileSeeds.FileSeedCache, page_key: bytes ):

if paused:

raise HydrusExceptions.VetoException( 'paused' )

if HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' ):

raise HydrusExceptions.VetoException( 'all file import queues are paused!' )

work_pending = file_seed_cache.WorkToDo()

if not work_pending:

raise HydrusExceptions.VetoException()

if HG.client_controller.PageClosedButNotDestroyed( page_key ):

raise HydrusExceptions.VetoException( 'page is closed' )

def CheckImporterCanDoFileWorkBecausePausifyingProblem( file_import_options: FileImportOptions ):

try:

file_import_options.CheckReadyToImport()

except Exception as e:

raise HydrusExceptions.VetoException( str( e ) )

def CheckImporterCanDoGalleryWorkBecausePaused( paused: bool, gallery_seed_log: typing.Optional[ ClientImportGallerySeeds.GallerySeedLog ] ):

if paused:

raise HydrusExceptions.VetoException( 'paused' )

if HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' ):

raise HydrusExceptions.VetoException( 'all gallery searches are paused!' )

if gallery_seed_log is not None:

work_pending = gallery_seed_log.WorkToDo()

if not work_pending:

raise HydrusExceptions.VetoException()

def CheckCanDoNetworkWork( no_work_until: int, no_work_until_reason: str ):

if not HydrusData.TimeHasPassed( no_work_until ):

no_work_text = '{}: {}'.format( HydrusData.ConvertTimestampToPrettyExpires( no_work_until ), no_work_until_reason )

raise HydrusExceptions.VetoException( no_work_text )

if HG.client_controller.network_engine.IsBusy():

raise HydrusExceptions.VetoException( 'network engine is too busy!' )

def CheckImporterCanDoWorkBecauseStopped( page_key: bytes ):

if PageImporterShouldStopWorking( page_key ):

raise HydrusExceptions.VetoException( 'page should stop working' )

def GenerateLiveStatusText( text: str, paused: bool, no_work_until: int, no_work_until_reason: str ) -> str:

if not HydrusData.TimeHasPassed( no_work_until ):

return '{}: {}'.format( HydrusData.ConvertTimestampToPrettyExpires( no_work_until ), no_work_until_reason )

if paused and text != 'paused':

if text == '':

text = 'pausing'

else:

text = 'pausing - {}'.format( text )

return text

def NeatenStatusText( text: str ) -> str:

if len( text ) > 0:

text = text.splitlines()[0]

return text

def PageImporterShouldStopWorking( page_key: bytes ):

return HG.started_shutdown or not HG.client_controller.PageAlive( page_key )
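The status-text helpers in this new module are pure functions, so their behaviour can be sketched in isolation. The snippet below is illustrative only: the lowercase names and the explicit `now` argument (standing in for `HydrusData.TimeHasPassed` and the pretty-expiry formatting) are assumptions, not hydrus's real API.

```python
# Standalone sketch of NeatenStatusText and GenerateLiveStatusText. The names
# and the explicit 'now' clock parameter are invented for illustration.

def neaten_status_text( text: str ) -> str:
    
    # keep only the first line, so a 64KB 503 error page cannot spam the UI
    if len( text ) > 0:
        
        text = text.splitlines()[0]
        
    
    return text
    

def generate_live_status_text( text: str, paused: bool, no_work_until: int, no_work_until_reason: str, now: int ) -> str:
    
    # a 'delay work until later' veto outranks everything else
    if now < no_work_until:
        
        return '{}: {}'.format( no_work_until, no_work_until_reason )
        
    
    # mid-job pauses read as 'pausing - <status>'; an idle pause reads as 'pausing'
    if paused and text != 'paused':
        
        text = 'pausing' if text == '' else 'pausing - {}'.format( text )
        
    
    return text
```

Run against a paused mid-job queue, this yields the 'pausing - <status>' texts described in the changelog above.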
@@ -364,7 +364,7 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

source_timestamp = min( HydrusData.GetNow() - 30, source_timestamp )

self.source_time = source_timestamp
self.source_time = ClientTime.MergeModifiedTimes( self.source_time, source_timestamp )

self._UpdateModified()

@@ -12,8 +12,8 @@ from hydrus.client import ClientDownloading

from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportControl
from hydrus.client.importing.options import FileImportOptions
from hydrus.client.importing.options import PresentationImportOptions
from hydrus.client.importing.options import TagImportOptions
from hydrus.client.networking import ClientNetworkingJobs

@@ -51,7 +51,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

self._source_name = source_name

self._page_key = 'initialising page key'
self._page_key = b'initialising page key'
self._publish_to_page = False

self._current_page_index = 0

@@ -79,11 +79,15 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

self._no_work_until_reason = ''

self._lock = threading.Lock()
self._files_working_lock = threading.Lock()
self._gallery_working_lock = threading.Lock()

self._file_status = ''
self._files_status = ''
self._gallery_status = ''
self._gallery_status_can_change_timestamp = 0

self._we_are_probably_pending = True

self._all_work_finished = False

self._have_started = False

@@ -112,6 +116,11 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

def _DelayWork( self, time_delta, reason ):

if len( reason ) > 0:

reason = reason.splitlines()[0]

self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason
@@ -229,12 +238,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

if len( text ) > 0:

text = text.splitlines()[0]

self._file_status = text
self._files_status = ClientImportControl.NeatenStatusText( text )

@@ -309,6 +313,14 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

return

def status_hook( text ):

with self._lock:

self._gallery_status = ClientImportControl.NeatenStatusText( text )

with self._lock:

if self._AmOverFileLimit():

@@ -318,8 +330,8 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

return

self._gallery_status = 'checking next page'

status_hook( 'checking next page' )

def file_seeds_callable( file_seeds ):

@@ -335,19 +347,6 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

return ClientImporting.UpdateFileSeedCacheWithFileSeeds( self._file_seed_cache, file_seeds, max_new_urls_allowed = max_new_urls_allowed )

def status_hook( text ):

with self._lock:

if len( text ) > 0:

text = text.splitlines()[0]

self._gallery_status = text

def title_hook( text ):

return
@@ -524,25 +523,6 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

def GetGalleryStatus( self ):

with self._lock:

if HydrusData.TimeHasPassed( self._no_work_until ):

gallery_status = self._gallery_status

else:

no_work_text = '{}: {}'.format( HydrusData.ConvertTimestampToPrettyExpires( self._no_work_until ), self._no_work_until_reason )

gallery_status = no_work_text

return gallery_status

def GetHashes( self ):

with self._lock:

@@ -614,13 +594,16 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DONE, 'DONE' )

elif self._gallery_status != '' or self._file_status != '':

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_WORKING, 'working' )

elif gallery_go or files_go:

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_PENDING, 'pending' )
if self._gallery_working_lock.locked() or self._files_working_lock.locked():

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_WORKING, 'working' )

else:

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_PENDING, 'pending' )

else:

@@ -639,22 +622,31 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

def GetStatus( self ):

# first off, if we are waiting for a work slot, our status is not being updated!
# so, we'll update it here as needed

with self._lock:

if HydrusData.TimeHasPassed( self._no_work_until ):
gallery_work_to_do = self._gallery_seed_log.WorkToDo()
files_work_to_do = self._file_seed_cache.WorkToDo()

gallery_go = gallery_work_to_do and not self._gallery_paused
files_go = files_work_to_do and not self._files_paused

if gallery_go and not self._gallery_working_lock.locked():

gallery_status = self._gallery_status
file_status = self._file_status

else:

no_work_text = '{}: {}'.format( HydrusData.ConvertTimestampToPrettyExpires( self._no_work_until ), self._no_work_until_reason )

gallery_status = no_work_text
file_status = no_work_text
self._gallery_status = 'waiting for a work slot'

return ( gallery_status, file_status, self._files_paused, self._gallery_paused )
if files_go and not self._files_working_lock.locked():

self._files_status = 'waiting for a work slot'

gallery_text = ClientImportControl.GenerateLiveStatusText( self._gallery_status, self._gallery_paused, self._no_work_until, self._no_work_until_reason )
file_text = ClientImportControl.GenerateLiveStatusText( self._files_status, self._files_paused, self._no_work_until, self._no_work_until_reason )

return ( gallery_text, file_text, self._files_paused, self._gallery_paused )
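The 'no more blank pending' fix in `GetStatus` reduces to one rule: a queue that has work and is unpaused, but whose working lock is not currently held by a worker thread, reports the pending state. A minimal sketch of that rule (names here are invented; the real method also runs the result through `GenerateLiveStatusText`):

```python
import threading

# Illustrative sketch of the 'waiting for a work slot' reporting: a queue that
# wants to run but has no worker yet gets an explicit pending status instead of
# a stale or blank one.

def live_queue_status( current_status: str, work_to_do: bool, paused: bool, working_lock: threading.Lock ) -> str:
    
    if work_to_do and not paused and not working_lock.locked():
        
        return 'waiting for a work slot'
        
    
    return current_status
```

When the worker thread holds the lock, whatever status it last published passes through unchanged.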
@@ -846,74 +838,43 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

def CanDoFileWork( self ):
def CheckCanDoFileWork( self ):

with self._lock:

if ClientImporting.PageImporterShouldStopWorking( self._page_key ):

self._files_repeating_job.Cancel()

return False

files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )

if files_paused:

return False

try:

self._file_import_options.CheckReadyToImport()
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )

except Exception as e:
except HydrusExceptions.VetoException:

self._file_status = str( e )
self._files_repeating_job.Cancel()

HydrusData.ShowText( str( e ) )
raise

ClientImportControl.CheckImporterCanDoFileWorkBecausePaused( self._files_paused, self._file_seed_cache, self._page_key )

try:

ClientImportControl.CheckImporterCanDoFileWorkBecausePausifyingProblem( self._file_import_options )

except HydrusExceptions.VetoException:

self._files_paused = True

return False

work_pending = self._file_seed_cache.WorkToDo()

if not work_pending:

return False
raise

return self.CanDoNetworkWork()
self.CheckCanDoNetworkWork()

def CanDoNetworkWork( self ):
def CheckCanDoNetworkWork( self ):

with self._lock:

no_delays = HydrusData.TimeHasPassed( self._no_work_until )

if not no_delays:

return False

page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )

if not page_shown:

return False

network_engine_good = not HG.client_controller.network_engine.IsBusy()

if not network_engine_good:

return False

ClientImportControl.CheckCanDoNetworkWork( self._no_work_until, self._no_work_until_reason )

return True
@ -921,12 +882,26 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
def REPEATINGWorkOnFiles( self ):
|
||||
|
||||
try:
|
||||
with self._files_working_lock:
|
||||
|
||||
while self.CanDoFileWork():
|
||||
while True:
|
||||
|
||||
try:
|
||||
|
||||
try:
|
||||
|
||||
self.CheckCanDoFileWork()
|
||||
|
||||
except HydrusExceptions.VetoException as e:
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._files_status = str( e )
|
||||
|
||||
|
||||
break
|
||||
|
||||
|
||||
self._WorkOnFiles()
|
||||
|
||||
HG.client_controller.WaitUntilViewFree()
|
||||
|
@@ -935,56 +910,67 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
except Exception as e:
with self._lock:
self._files_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
finally:
with self._lock:
self._files_status = ''
return
def CanDoGalleryWork( self ):
def CheckCanDoGalleryWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
try:
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except HydrusExceptions.VetoException:
self._gallery_repeating_job.Cancel()
return False
raise
gallery_paused = self._gallery_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
if gallery_paused:
if self._AmOverFileLimit():
return False
raise HydrusExceptions.VetoException( 'hit file limit' )
work_pending = self._gallery_seed_log.WorkToDo()
if not work_pending:
return False
ClientImportControl.CheckImporterCanDoGalleryWorkBecausePaused( self._gallery_paused, self._gallery_seed_log )
return self.CanDoNetworkWork()
return self.CheckCanDoNetworkWork()
def REPEATINGWorkOnGallery( self ):
try:
with self._gallery_working_lock:
while self.CanDoGalleryWork():
while True:
try:
try:
self.CheckCanDoGalleryWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._gallery_status = str( e )
break
self._WorkOnGallery()
time.sleep( 1 )

@@ -995,19 +981,20 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
except Exception as e:
with self._lock:
self._gallery_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
finally:
with self._lock:
self._gallery_status = ''
return
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_GALLERY_IMPORT ] = GalleryImport
class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
@@ -1027,7 +1014,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
self._lock = threading.Lock()
self._page_key = 'initialising page key'
self._page_key = b'initialising page key'
self._gug_key_and_name = gug_key_and_name

@@ -1737,7 +1724,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
if ClientImportControl.PageImporterShouldStopWorking( self._page_key ):
self._importers_repeating_job.Cancel()
@@ -4,6 +4,7 @@ import time
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusFileHandling
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusPaths

@@ -15,6 +16,7 @@ from hydrus.client import ClientData
from hydrus.client import ClientFiles
from hydrus.client import ClientPaths
from hydrus.client import ClientThreading
from hydrus.client.importing import ClientImportControl
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing.options import FileImportOptions

@@ -70,7 +72,9 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
self._file_import_options = file_import_options
self._delete_after_success = delete_after_success
self._current_action = ''
self._page_key = b'initialising page key'
self._files_status = ''
self._paused = False
self._lock = threading.Lock()

@@ -131,7 +135,7 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
def _WorkOnFiles( self, page_key ):
def _WorkOnFiles( self ):
file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )
@@ -140,39 +144,28 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
return
did_substantial_work = False
path = file_seed.file_seed_data
with self._lock:
self._current_action = 'importing'
self._files_status = 'importing'
def status_hook( text ):
with self._lock:
if len( text ) > 0:
text = text.splitlines()[0]
self._current_action = text
self._files_status = ClientImportControl.NeatenStatusText( text )
file_seed.ImportPath( self._file_seed_cache, self._file_import_options, status_hook = status_hook )
did_substantial_work = True
if file_seed.status in CC.SUCCESSFUL_IMPORT_STATES:
if file_seed.ShouldPresent( self._file_import_options.GetPresentationImportOptions() ):
file_seed.PresentToPage( page_key )
did_substantial_work = True
file_seed.PresentToPage( self._page_key )
if self._delete_after_success:

@@ -206,13 +199,10 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._current_action = ''
self._files_status = ''
if did_substantial_work:
time.sleep( ClientImporting.DID_SUBSTANTIAL_FILE_WORK_MINIMUM_SLEEP_TIME )
time.sleep( ClientImporting.DID_SUBSTANTIAL_FILE_WORK_MINIMUM_SLEEP_TIME )
def CurrentlyWorking( self ):

@@ -264,7 +254,9 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
return ( self._current_action, self._paused )
text = ClientImportControl.GenerateLiveStatusText( self._files_status, self._paused, 0, '' )
return ( text, self._paused )
@@ -321,69 +313,55 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
def Start( self, page_key ):
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles, page_key )
self._page_key = page_key
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles )
self._files_repeating_job.SetThreadSlotType( 'misc' )
def CanDoFileWork( self, page_key ):
def CheckCanDoFileWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
try:
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except HydrusExceptions.VetoException:
self._files_repeating_job.Cancel()
return False
raise
paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
if paused:
return False
try:
self._file_import_options.CheckReadyToImport()
except Exception as e:
self._current_action = str( e )
HydrusData.ShowText( str( e ) )
self._paused = True
return False
work_to_do = self._file_seed_cache.WorkToDo()
if not work_to_do:
return False
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
if not page_shown:
return False
ClientImportControl.CheckImporterCanDoFileWorkBecausePaused( self._paused, self._file_seed_cache, self._page_key )
return True
def REPEATINGWorkOnFiles( self, page_key ):
def REPEATINGWorkOnFiles( self ):
while self.CanDoFileWork( page_key ):
while True:
try:
self._WorkOnFiles( page_key )
try:
self.CheckCanDoFileWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._files_status = str( e )
break
self._WorkOnFiles()
HG.client_controller.WaitUntilViewFree()

@@ -391,8 +369,15 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
except Exception as e:
with self._lock:
self._files_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return
@@ -1,4 +1,3 @@
import os
import threading
import time
import urllib.parse

@@ -10,6 +9,7 @@ from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
from hydrus.client import ClientConstants as CC
from hydrus.client.importing import ClientImportControl
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds

@@ -35,13 +35,17 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._file_seed_cache = ClientImportFileSeeds.FileSeedCache()
self._file_import_options = file_import_options
self._formula_name = 'all files linked by images in page'
self._queue_paused = False
self._gallery_paused = False
self._files_paused = False
self._no_work_until = 0
self._no_work_until_reason = ''
self._page_key = b'initialising page key'
self._downloader_key = HydrusData.GenerateKey()
self._parser_status = ''
self._current_action = ''
self._gallery_status = ''
self._files_status = ''
self._lock = threading.Lock()

@@ -51,13 +55,24 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._page_network_job = None
self._files_repeating_job = None
self._queue_repeating_job = None
self._gallery_repeating_job = None
self._last_serialisable_change_timestamp = 0
HG.client_controller.sub( self, 'NotifyFileSeedsUpdated', 'file_seed_cache_file_seeds_updated' )
def _DelayWork( self, time_delta, reason ):
if len( reason ) > 0:
reason = reason.splitlines()[0]
self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason
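The new `_DelayWork` above is how the 'delay work until later' error system lands in the simple downloader: it stamps a 'no work until' time and keeps only the first line of the error as the reason, which is what stops 64KB of 503 error html spamming the UI. A sketch of the same idea with plain `time` (toy class, not the hydrus `HydrusData.GetNow` / `TimeHasPassed` helpers):

```python
# Sketch of the 'no work until' backoff record (toy names, wall-clock time).
import time

class Downloader:
    def __init__(self):
        self.no_work_until = 0.0
        self.no_work_until_reason = ''

    def delay_work(self, time_delta, reason):
        # keep only the first line of a possibly huge multi-line error body
        if reason:
            reason = reason.splitlines()[0]
        self.no_work_until = time.time() + time_delta
        self.no_work_until_reason = reason

    def can_do_network_work(self):
        # network work resumes once the delay has elapsed
        return time.time() >= self.no_work_until

d = Downloader()
d.delay_work(90, '503 Service Unavailable\n<html>...giant error page...</html>')
print(d.no_work_until_reason)   # -> 503 Service Unavailable
print(d.can_do_network_work())  # -> False
```

Storing the truncated reason next to the timestamp lets the status line later report both what went wrong and when work will resume.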
def _FileNetworkJobPresentationContextFactory( self, network_job ):
def enter_call():

@@ -87,12 +102,12 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
serialisable_file_seed_cache = self._file_seed_cache.GetSerialisableTuple()
serialisable_file_import_options = self._file_import_options.GetSerialisableTuple()
return ( serialisable_pending_jobs, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_file_import_options, self._formula_name, self._queue_paused, self._files_paused )
return ( serialisable_pending_jobs, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_file_import_options, self._formula_name, self._gallery_paused, self._files_paused )
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
( serialisable_pending_jobs, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_file_import_options, self._formula_name, self._queue_paused, self._files_paused ) = serialisable_info
( serialisable_pending_jobs, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_file_import_options, self._formula_name, self._gallery_paused, self._files_paused ) = serialisable_info
self._pending_jobs = [ ( url, HydrusSerialisable.CreateFromSerialisableTuple( serialisable_simple_downloader_formula ) ) for ( url, serialisable_simple_downloader_formula ) in serialisable_pending_jobs ]

@@ -186,7 +201,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
def _WorkOnFiles( self, page_key ):
def _WorkOnFiles( self ):
file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )

@@ -201,29 +216,45 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
if len( text ) > 0:
text = text.splitlines()[0]
self._current_action = text
self._files_status = ClientImportControl.NeatenStatusText( text )
tag_import_options = TagImportOptions.TagImportOptions( is_default = True )
did_substantial_work = file_seed.WorkOnURL( self._file_seed_cache, status_hook, self._NetworkJobFactory, self._FileNetworkJobPresentationContextFactory, self._file_import_options, tag_import_options )
try:
did_substantial_work = file_seed.WorkOnURL( self._file_seed_cache, status_hook, self._NetworkJobFactory, self._FileNetworkJobPresentationContextFactory, self._file_import_options, tag_import_options )
except HydrusExceptions.NetworkException as e:
delay = HG.client_controller.new_options.GetInteger( 'downloader_network_error_delay' )
self._DelayWork( delay, str( e ) )
file_seed.SetStatus( CC.STATUS_ERROR, str( e ) )
HydrusData.PrintException( e )
except Exception as e:
status = CC.STATUS_ERROR
file_seed.SetStatus( status, exception = e )
time.sleep( 3 )
if file_seed.ShouldPresent( self._file_import_options.GetPresentationImportOptions() ):
file_seed.PresentToPage( page_key )
file_seed.PresentToPage( self._page_key )
did_substantial_work = True
with self._lock:
self._current_action = ''
self._files_status = ''
if did_substantial_work:
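The new try/except around `file_seed.WorkOnURL` above treats `NetworkException` specially: the whole queue gets delayed via `_DelayWork`, while the individual seed is only marked as an error, and any other exception marks just the seed. A sketch of that control flow with toy stand-ins (none of these names are the real hydrus API):

```python
# Sketch: network failures back off the whole queue; other failures
# only fail the one work item (toy names).

class NetworkException(Exception):
    pass

class FileSeed:
    def __init__(self):
        self.status = 'unknown'
        self.note = ''

    def set_status(self, status, note=''):
        self.status = status
        self.note = note

def work_on_seed(seed, do_network_work, delay_work, network_error_delay=90):
    try:
        do_network_work()
    except NetworkException as e:
        # pause the whole downloader for a while, then record the seed error
        delay_work(network_error_delay, str(e))
        seed.set_status('error', str(e))
    except Exception as e:
        # a non-network problem fails only this seed
        seed.set_status('error', str(e))

delays = []
seed = FileSeed()

def failing_fetch():
    raise NetworkException('connection refused')

work_on_seed(seed, failing_fetch, lambda dt, why: delays.append((dt, why)))
print(seed.status, delays)  # -> error [(90, 'connection refused')]
```

Separating the two paths keeps one bad parse from stalling the queue, while a dead network stops hammering the server.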
@@ -232,7 +263,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
def _WorkOnQueue( self, page_key ):
def _WorkOnGallery( self ):
if len( self._pending_jobs ) > 0:

@@ -240,15 +271,18 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
( url, simple_downloader_formula ) = self._pending_jobs.pop( 0 )
self._parser_status = 'checking ' + url
self._gallery_status = 'checking ' + url
error_occurred = False
gallery_seed_status = CC.STATUS_ERROR
parser_status = 'job not completed'
gallery_seed = ClientImportGallerySeeds.GallerySeed( url, can_generate_more_pages = False )
try:
gallery_seed = ClientImportGallerySeeds.GallerySeed( url, can_generate_more_pages = False )
self._gallery_seed_log.AddGallerySeeds( ( gallery_seed, ) )
network_job = self._NetworkJobFactory( 'GET', url )

@@ -325,6 +359,19 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
parser_status = 'page 404'
except HydrusExceptions.NetworkException as e:
delay = HG.client_controller.new_options.GetInteger( 'downloader_network_error_delay' )
self._DelayWork( delay, str( e ) )
gallery_seed_status = CC.STATUS_ERROR
error_occurred = True
parser_status = str( e )
HydrusData.PrintException( e )
except Exception as e:
gallery_seed_status = CC.STATUS_ERROR

@@ -344,7 +391,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._parser_status = parser_status
self._gallery_status = ClientImportControl.NeatenStatusText( parser_status )
if error_occurred:

@@ -358,7 +405,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._parser_status = ''
self._gallery_status = ''
return False

@@ -440,7 +487,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
d[ 'files_paused' ] = self._files_paused
d[ 'gallery_paused' ] = self._queue_paused
d[ 'gallery_paused' ] = self._gallery_paused
return d

@@ -498,7 +545,10 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
return ( list( self._pending_jobs ), self._parser_status, self._current_action, self._queue_paused, self._files_paused )
gallery_text = ClientImportControl.GenerateLiveStatusText( self._gallery_status, self._gallery_paused, self._no_work_until, self._no_work_until_reason )
file_text = ClientImportControl.GenerateLiveStatusText( self._files_status, self._files_paused, self._no_work_until, self._no_work_until_reason )
return ( list( self._pending_jobs ), gallery_text, file_text, self._gallery_paused, self._files_paused )
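`GetStatus` above now routes both status lines through `ClientImportControl.GenerateLiveStatusText( status, paused, no_work_until, reason )`, the unified status generator the changelog describes. The diff shows only the call signature, so the precedence below is a guess: a hypothetical reimplementation where an active 'no work until' delay wins over the pause flag, which wins over the live status text.

```python
# Hypothetical sketch of a GenerateLiveStatusText-style helper; the real
# precedence and wording live in ClientImportControl, not shown in this diff.
import time

def generate_live_status_text(status, paused, no_work_until, reason):
    if time.time() < no_work_until:
        # an active error delay is the most important thing to show
        return 'waiting: {}'.format(reason)
    if paused:
        return 'paused'
    return status

print(generate_live_status_text('importing', False, 0, ''))  # -> importing
print(generate_live_status_text('importing', True, 0, ''))   # -> paused
```

Whatever the real ordering, funneling every downloader through one such function is what makes the status texts consistent across gallery, watcher, simple, and urls pages.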
@@ -544,9 +594,9 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
with self._lock:
self._queue_paused = not self._queue_paused
self._gallery_paused = not self._gallery_paused
ClientImporting.WakeRepeatingJob( self._queue_repeating_job )
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()

@@ -560,7 +610,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._pending_jobs.append( job )
ClientImporting.WakeRepeatingJob( self._queue_repeating_job )
ClientImporting.WakeRepeatingJob( self._gallery_repeating_job )
self._SerialisableChangeMade()

@@ -602,89 +652,81 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
return
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles, page_key )
self._queue_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnQueue, page_key )
self._page_key = page_key
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles )
self._gallery_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnGallery )
self._files_repeating_job.SetThreadSlotType( 'misc' )
self._queue_repeating_job.SetThreadSlotType( 'misc' )
self._gallery_repeating_job.SetThreadSlotType( 'misc' )
self._have_started = True
def CanDoFileWork( self, page_key ):
def CheckCanDoFileWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
self._files_repeating_job.Cancel()
return False
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
if files_paused:
return False
try:
self._file_import_options.CheckReadyToImport()
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except Exception as e:
except HydrusExceptions.VetoException:
self._current_action = str( e )
self._files_repeating_job.Cancel()
HydrusData.ShowText( str( e ) )
raise
ClientImportControl.CheckImporterCanDoFileWorkBecausePaused( self._files_paused, self._file_seed_cache, self._page_key )
try:
ClientImportControl.CheckImporterCanDoFileWorkBecausePausifyingProblem( self._file_import_options )
except HydrusExceptions.VetoException:
self._files_paused = True
return False
work_to_do = self._file_seed_cache.WorkToDo()
if not work_to_do:
return False
raise
return self.CanDoNetworkWork( page_key )
self.CheckCanDoNetworkWork()
def CanDoNetworkWork( self, page_key ):
def CheckCanDoNetworkWork( self ):
with self._lock:
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
if not page_shown:
return False
network_engine_good = not HG.client_controller.network_engine.IsBusy()
if not network_engine_good:
return False
ClientImportControl.CheckCanDoNetworkWork( self._no_work_until, self._no_work_until_reason )
return True
def REPEATINGWorkOnFiles( self, page_key ):
def REPEATINGWorkOnFiles( self ):
while self.CanDoFileWork( page_key ):
while True:
try:
self._WorkOnFiles( page_key )
try:
self.CheckCanDoFileWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._files_status = str( e )
break
self._WorkOnFiles()
HG.client_controller.WaitUntilViewFree()
@@ -692,49 +734,67 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
except Exception as e:
with self._lock:
self._files_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return
def CanDoQueueWork( self, page_key ):
def CheckCanDoGalleryWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
try:
self._queue_repeating_job.Cancel()
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
return False
except HydrusExceptions.VetoException:
self._gallery_repeating_job.Cancel()
raise
queue_paused = self._queue_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
if len( self._pending_jobs ) == 0:
raise HydrusExceptions.VetoException()
if queue_paused:
return False
ClientImportControl.CheckImporterCanDoGalleryWorkBecausePaused( self._gallery_paused, None )
return self.CanDoNetworkWork( page_key )
return self.CheckCanDoNetworkWork()
def REPEATINGWorkOnQueue( self, page_key ):
def REPEATINGWorkOnGallery( self ):
while self.CanDoQueueWork( page_key ):
while True:
try:
did_work = self._WorkOnQueue( page_key )
try:
self.CheckCanDoGalleryWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._gallery_status = str( e )
break
if did_work:
time.sleep( ClientImporting.DID_SUBSTANTIAL_FILE_WORK_MINIMUM_SLEEP_TIME )
else:
return
self._WorkOnGallery()
time.sleep( 1 )
HG.client_controller.WaitUntilViewFree()
@@ -742,11 +802,19 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
except Exception as e:
with self._lock:
self._gallery_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SIMPLE_DOWNLOADER_IMPORT ] = SimpleDownloaderImport
class URLsImport( HydrusSerialisable.SerialisableBase ):

@@ -765,12 +833,19 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
self._tag_import_options = TagImportOptions.TagImportOptions( is_default = True )
self._paused = False
self._no_work_until = 0
self._no_work_until_reason = ''
self._page_key = b'initialising page key'
self._downloader_key = HydrusData.GenerateKey()
self._lock = threading.Lock()
self._have_started = False
self._files_status = ''
self._gallery_status = ''
self._files_network_job = None
self._gallery_network_job = None

@@ -783,6 +858,17 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
HG.client_controller.sub( self, 'NotifyGallerySeedsUpdated', 'gallery_seed_log_gallery_seeds_updated' )
def _DelayWork( self, time_delta, reason ):
if len( reason ) > 0:
reason = reason.splitlines()[0]
self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason
def _FileNetworkJobPresentationContextFactory( self, network_job ):
def enter_call():

@@ -886,7 +972,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
def _WorkOnFiles( self, page_key ):
def _WorkOnFiles( self ):
file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )
@@ -907,11 +993,21 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
if file_seed.ShouldPresent( self._file_import_options.GetPresentationImportOptions() ):
file_seed.PresentToPage( page_key )
file_seed.PresentToPage( self._page_key )
did_substantial_work = True
except HydrusExceptions.NetworkException as e:
delay = HG.client_controller.new_options.GetInteger( 'downloader_network_error_delay' )
self._DelayWork( delay, str( e ) )
file_seed.SetStatus( CC.STATUS_ERROR, str( e ) )
HydrusData.PrintException( e )
except Exception as e:
status = CC.STATUS_ERROR

@@ -927,7 +1023,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
def _WorkOnGallery( self, page_key ):
def _WorkOnGallery( self ):
gallery_seed = self._gallery_seed_log.GetNextGallerySeed( CC.STATUS_UNKNOWN )

@@ -948,6 +1044,16 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
gallery_seed.WorkOnURL( 'download page', self._gallery_seed_log, file_seeds_callable, status_hook, title_hook, self._NetworkJobFactory, self._GalleryNetworkJobPresentationContextFactory, self._file_import_options )
except HydrusExceptions.NetworkException as e:
delay = HG.client_controller.new_options.GetInteger( 'downloader_network_error_delay' )
self._DelayWork( delay, str( e ) )
gallery_seed.SetStatus( CC.STATUS_ERROR, str( e ) )
HydrusData.PrintException( e )
except Exception as e:
status = CC.STATUS_ERROR

@@ -1199,8 +1305,10 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
return
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles, page_key )
self._gallery_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnGallery, page_key )
self._page_key = page_key
self._files_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnFiles )
self._gallery_repeating_job = HG.client_controller.CallRepeating( ClientImporting.GetRepeatingJobInitialDelay(), ClientImporting.REPEATING_JOB_TYPICAL_PERIOD, self.REPEATINGWorkOnGallery )
self._files_repeating_job.SetThreadSlotType( 'misc' )
self._gallery_repeating_job.SetThreadSlotType( 'misc' )
@@ -1209,124 +1317,132 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
def CanDoFileWork( self, page_key ):
def CheckCanDoFileWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
self._files_repeating_job.Cancel()
return False
files_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
if files_paused:
return False
try:
self._file_import_options.CheckReadyToImport()
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except Exception as e:
except HydrusExceptions.VetoException:
self._files_repeating_job.Cancel()
raise
ClientImportControl.CheckImporterCanDoFileWorkBecausePaused( self._paused, self._file_seed_cache, self._page_key )
try:
ClientImportControl.CheckImporterCanDoFileWorkBecausePausifyingProblem( self._file_import_options )
except HydrusExceptions.VetoException:
self._paused = True
HydrusData.ShowText( str( e ) )
return False
work_to_do = self._file_seed_cache.WorkToDo()
if not work_to_do:
return False
raise
return self.CanDoNetworkWork( page_key )
self.CheckCanDoNetworkWork()
def CanDoNetworkWork( self, page_key ):
def CheckCanDoNetworkWork( self ):
with self._lock:
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
if not page_shown:
return False
network_engine_good = not HG.client_controller.network_engine.IsBusy()
if not network_engine_good:
return False
ClientImportControl.CheckCanDoNetworkWork( self._no_work_until, self._no_work_until_reason )
return True
def REPEATINGWorkOnFiles( self, page_key ):
def REPEATINGWorkOnFiles( self ):
while self.CanDoFileWork( page_key ):
while True:
try:
self._WorkOnFiles( page_key )
try:
self.CheckCanDoFileWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._files_status = str( e )
break
self._WorkOnFiles()
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
with self._lock:
self._files_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return
def CanDoGalleryWork( self, page_key ):
def CheckCanDoGalleryWork( self ):
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
try:
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except HydrusExceptions.VetoException:
self._gallery_repeating_job.Cancel()
return False
raise
gallery_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
if gallery_paused:
return False
work_to_do = self._gallery_seed_log.WorkToDo()
if not work_to_do:
return False
ClientImportControl.CheckImporterCanDoGalleryWorkBecausePaused( self._paused, self._gallery_seed_log )
return self.CanDoNetworkWork( page_key )
return self.CheckCanDoNetworkWork()
def REPEATINGWorkOnGallery( self, page_key ):
def REPEATINGWorkOnGallery( self ):
while self.CanDoGalleryWork( page_key ):
while True:
try:
self._WorkOnGallery( page_key )
try:
self.CheckCanDoGalleryWork()
|
||||
|
||||
except HydrusExceptions.VetoException as e:
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._gallery_status = str( e )
|
||||
|
||||
|
||||
break
|
||||
|
||||
|
||||
self._WorkOnGallery()
|
||||
|
||||
time.sleep( 1 )
|
||||
|
||||
HG.client_controller.WaitUntilViewFree()
|
||||
|
||||
|
@ -1334,9 +1450,17 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
except Exception as e:
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._gallery_status = 'stopping work: {}'.format( str( e ) )
|
||||
|
||||
|
||||
HydrusData.ShowException( e )
|
||||
|
||||
return
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_URLS_IMPORT ] = URLsImport
|
||||
|
|
|
@@ -541,6 +541,11 @@ class SubscriptionLegacy( HydrusSerialisable.SerialisableBaseNamed ):

def _DelayWork( self, time_delta, reason ):

if len( reason ) > 0:
reason = reason.splitlines()[0]

self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason

@@ -112,6 +112,11 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):

def _DelayWork( self, time_delta, reason ):

if len( reason ) > 0:
reason = reason.splitlines()[0]

self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason
@@ -8,12 +8,12 @@ from hydrus.core import HydrusSerialisable

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client.importing import ClientImportControl
from hydrus.client.importing import ClientImporting
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds
from hydrus.client.importing.options import ClientImportOptions
from hydrus.client.importing.options import FileImportOptions
from hydrus.client.importing.options import PresentationImportOptions
from hydrus.client.importing.options import TagImportOptions
from hydrus.client.metadata import ClientTags
from hydrus.client.networking import ClientNetworkingJobs

@@ -32,7 +32,7 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

self._lock = threading.Lock()

self._page_key = 'initialising page key'
self._page_key = b'initialising page key'

self._watchers = HydrusSerialisable.SerialisableList()

@@ -565,7 +565,7 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
if ClientImportControl.PageImporterShouldStopWorking( self._page_key ):
self._watchers_repeating_job.Cancel()
@@ -617,7 +617,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

HydrusSerialisable.SerialisableBase.__init__( self )

self._page_key = 'initialising page key'
self._page_key = b'initialising page key'
self._publish_to_page = False

self._url = ''

@@ -650,7 +650,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

self._creation_time = HydrusData.GetNow()

self._file_velocity_status = ''
self._file_status = ''
self._files_status = ''
self._watcher_status = ''

self._watcher_key = HydrusData.GenerateKey()

@@ -658,6 +658,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

self._have_started = False

self._lock = threading.Lock()
self._files_working_lock = threading.Lock()
self._checker_working_lock = threading.Lock()

self._last_pubbed_page_name = ''

@@ -701,12 +703,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

if len( text ) > 0:
text = text.splitlines()[0]

self._watcher_status = text
self._watcher_status = ClientImportControl.NeatenStatusText( text )

@@ -813,6 +810,11 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

def _DelayWork( self, time_delta, reason ):

if len( reason ) > 0:
reason = reason.splitlines()[0]

self._no_work_until = HydrusData.GetNow() + time_delta
self._no_work_until_reason = reason
@@ -1060,18 +1062,11 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

return

did_substantial_work = False

def status_hook( text ):

with self._lock:

if len( text ) > 0:
text = text.splitlines()[0]

self._file_status = text
self._files_status = ClientImportControl.NeatenStatusText( text )

@@ -1093,7 +1088,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

self._file_status = ''
self._files_status = ''

if did_substantial_work:

@@ -1304,29 +1299,12 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

gallery_work_to_do = self._gallery_seed_log.WorkToDo()
files_work_to_do = self._file_seed_cache.WorkToDo()

gallery_go = gallery_work_to_do and not self._checking_paused
checker_go = HydrusData.TimeHasPassed( self._next_check_time ) and not self._checking_paused
files_go = files_work_to_do and not self._files_paused

if self._watcher_status != '' or self._file_status != '':
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_WORKING, 'working' )
elif gallery_go or files_go:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_PENDING, 'pending' )
elif self._checking_status == ClientImporting.CHECKER_STATUS_404:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DONE, '404' )
elif self._checking_status == ClientImporting.CHECKER_STATUS_DEAD:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DONE, 'DEAD' )
elif not HydrusData.TimeHasPassed( self._no_work_until ):
if not HydrusData.TimeHasPassed( self._no_work_until ):

if self._next_check_time is None:
@@ -1339,6 +1317,25 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DEFERRED, text )

elif checker_go or files_go:

if self._checker_working_lock.locked() or self._files_working_lock.locked():
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_WORKING, 'working' )
else:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_PENDING, 'pending' )

elif self._checking_status == ClientImporting.CHECKER_STATUS_404:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DONE, '404' )
elif self._checking_status == ClientImporting.CHECKER_STATUS_DEAD:
return ( ClientImporting.DOWNLOADER_SIMPLE_STATUS_DONE, 'DEAD' )
else:

if self._checking_paused:

@@ -1364,17 +1361,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

file_status = self._file_status

if self._checking_status == ClientImporting.CHECKER_STATUS_404:
watcher_status = 'URL 404'
elif self._checking_status == ClientImporting.CHECKER_STATUS_DEAD:
watcher_status = 'URL DEAD'
elif not HydrusData.TimeHasPassed( self._no_work_until ):
if not HydrusData.TimeHasPassed( self._no_work_until ):

if self._next_check_time is None:
@@ -1385,15 +1372,31 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

no_work_text = '{} - next check {}'.format( self._no_work_until_reason, ClientData.TimestampToPrettyTimeDelta( max( self._no_work_until, self._next_check_time ) ) )

file_status = no_work_text
files_status = no_work_text
watcher_status = no_work_text

else:

watcher_status = self._watcher_status
files_work_to_do = self._file_seed_cache.WorkToDo()

checker_go = HydrusData.TimeHasPassed( self._next_check_time ) and not self._checking_paused
files_go = files_work_to_do and not self._files_paused

if checker_go and not self._checker_working_lock.locked():
self._watcher_status = 'waiting for a work slot'

if files_go and not self._files_working_lock.locked():
self._files_status = 'waiting for a work slot'

files_status = ClientImportControl.GenerateLiveStatusText( self._files_status, self._files_paused, self._no_work_until, self._no_work_until_reason )
watcher_status = ClientImportControl.GenerateLiveStatusText( self._watcher_status, self._checking_paused, self._no_work_until, self._no_work_until_reason )

return ( file_status, self._files_paused, self._file_velocity_status, self._next_check_time, watcher_status, self._subject, self._checking_status, self._check_now, self._checking_paused )
return ( files_status, self._files_paused, self._file_velocity_status, self._next_check_time, watcher_status, self._subject, self._checking_status, self._check_now, self._checking_paused )
@@ -1677,72 +1680,43 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

def CanDoFileWork( self ):
def CheckCanDoFileWork( self ):

with self._lock:

if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
self._files_repeating_job.Cancel()
return

files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
if files_paused:
return False

try:
self._file_import_options.CheckReadyToImport()
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except Exception as e:
except HydrusExceptions.VetoException:
self._file_status = str( e )
self._files_repeating_job.Cancel()
HydrusData.ShowText( str( e ) )
return False
raise

work_to_do = self._file_seed_cache.WorkToDo()
ClientImportControl.CheckImporterCanDoFileWorkBecausePaused( self._files_paused, self._file_seed_cache, self._page_key )

if not work_to_do:
try:
return False
ClientImportControl.CheckImporterCanDoFileWorkBecausePausifyingProblem( self._file_import_options )
except HydrusExceptions.VetoException:
self._files_paused = True
raise

return self.CanDoNetworkWork()
self.CheckCanDoNetworkWork()

def CanDoNetworkWork( self ):
def CheckCanDoNetworkWork( self ):

with self._lock:

no_delays = HydrusData.TimeHasPassed( self._no_work_until )
if not no_delays:
return False

page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
if not page_shown:
return False

network_engine_good = not HG.client_controller.network_engine.IsBusy()
if not network_engine_good:
return False

ClientImportControl.CheckCanDoNetworkWork( self._no_work_until, self._no_work_until_reason )

return True
@@ -1750,32 +1724,60 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

def REPEATINGWorkOnFiles( self ):

while self.CanDoFileWork():
with self._files_working_lock:

try:
while True:
self._WorkOnFiles()
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()
except Exception as e:
HydrusData.ShowException( e )
try:

try:
self.CheckCanDoFileWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._files_status = str( e )
break

self._WorkOnFiles()
HG.client_controller.WaitUntilViewFree()
self._SerialisableChangeMade()

except Exception as e:
with self._lock:
self._files_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return

def CanDoCheckerWork( self ):
def CheckCanDoCheckerWork( self ):

with self._lock:

if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
try:
ClientImportControl.CheckImporterCanDoWorkBecauseStopped( self._page_key )
except HydrusExceptions.VetoException:
self._checker_repeating_job.Cancel()
return
raise

while self._gallery_seed_log.WorkToDo():
@@ -1788,46 +1790,78 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

self._gallery_seed_log.NotifyGallerySeedsUpdated( ( gallery_seed, ) )

checking_paused = self._checking_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_watcher_checkers' )
if checking_paused:
if self._checking_paused:
return False
raise HydrusExceptions.VetoException( 'paused' )

able_to_check = self._checking_status == ClientImporting.CHECKER_STATUS_OK and self._HasURL()
if not able_to_check:
if HG.client_controller.new_options.GetBoolean( 'pause_all_watcher_checkers' ):
return False
raise HydrusExceptions.VetoException( 'all checkers are paused!' )

if not self._HasURL():
raise HydrusExceptions.VetoException( 'no url set yet!' )

if self._checking_status == ClientImporting.CHECKER_STATUS_404:
raise HydrusExceptions.VetoException( 'URL 404' )
elif self._checking_status == ClientImporting.CHECKER_STATUS_DEAD:
raise HydrusExceptions.VetoException( 'URL DEAD' )

check_due = HydrusData.TimeHasPassed( self._next_check_time )
if not check_due:
return False
raise HydrusExceptions.VetoException( '' )

return self.CanDoNetworkWork()
return self.CheckCanDoNetworkWork()

def REPEATINGWorkOnChecker( self ):

if self.CanDoCheckerWork():
with self._checker_working_lock:

try:

try:
self.CheckCanDoCheckerWork()
except HydrusExceptions.VetoException as e:
with self._lock:
self._watcher_status = str( e )
return

self._CheckWatchableURL()
self._SerialisableChangeMade()

except Exception as e:
with self._lock:
self._watcher_status = 'stopping work: {}'.format( str( e ) )
HydrusData.ShowException( e )
return

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_WATCHER_IMPORT ] = WatcherImport
@@ -20,13 +20,13 @@ DOWNLOADER_SIMPLE_STATUS_PENDING = 2

DOWNLOADER_SIMPLE_STATUS_PAUSED = 3
DOWNLOADER_SIMPLE_STATUS_DEFERRED = 4

downloader_enum_sort_lookup = {}
downloader_enum_sort_lookup[ DOWNLOADER_SIMPLE_STATUS_DONE ] = 0
downloader_enum_sort_lookup[ DOWNLOADER_SIMPLE_STATUS_WORKING ] = 1
downloader_enum_sort_lookup[ DOWNLOADER_SIMPLE_STATUS_PENDING ] = 2
downloader_enum_sort_lookup[ DOWNLOADER_SIMPLE_STATUS_DEFERRED ] = 3
downloader_enum_sort_lookup[ DOWNLOADER_SIMPLE_STATUS_PAUSED ] = 4
downloader_enum_sort_lookup = {
DOWNLOADER_SIMPLE_STATUS_DONE : 0,
DOWNLOADER_SIMPLE_STATUS_WORKING : 1,
DOWNLOADER_SIMPLE_STATUS_PENDING : 2,
DOWNLOADER_SIMPLE_STATUS_DEFERRED : 3,
DOWNLOADER_SIMPLE_STATUS_PAUSED : 4
}

DID_SUBSTANTIAL_FILE_WORK_MINIMUM_SLEEP_TIME = 0.1

@@ -106,10 +106,6 @@ def GetRepeatingJobInitialDelay():

return 0.5 + ( random.random() * 0.5 )

def PageImporterShouldStopWorking( page_key ):

return HG.started_shutdown or not HG.client_controller.PageAlive( page_key )

def PublishPresentationHashes( publishing_label, hashes, publish_to_popup_button, publish_files_to_page ):

if publish_to_popup_button:
@@ -309,9 +309,17 @@ def ParseClientAPIPOSTArgs( request ):

json_string = str( json_bytes, 'utf-8' )

args = json.loads( json_string )
try:
args = json.loads( json_string )
except json.decoder.JSONDecodeError as e:
raise HydrusExceptions.BadRequestException( 'Sorry, did not understand the JSON you gave me: {}'.format( str( e ) ) )

parsed_request_args = ParseClientAPIPOSTByteArgs( args )

elif request_content_type_mime == HC.APPLICATION_CBOR:

@@ -19,6 +19,7 @@ from hydrus.core.networking import HydrusNetworking

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client import ClientTime
from hydrus.client.networking import ClientNetworkingContexts
from hydrus.client.networking import ClientNetworkingFunctions
@@ -165,7 +166,8 @@ class NetworkJob( object ):

self._url = url

self._current_connection_attempt_number = 1
self._max_connection_attempts_allowed = 5
self._current_request_attempt_number = 1
self._this_is_a_one_shot_request = False
self._we_tried_cloudflare_once = False

self._domain = ClientNetworkingFunctions.ConvertURLIntoDomain( self._url )

@@ -245,21 +247,33 @@ class NetworkJob( object ):

def _CanReattemptConnection( self ):

return self._current_connection_attempt_number <= self._max_connection_attempts_allowed
if self._this_is_a_one_shot_request:
return False

max_connection_attempts_allowed = HG.client_controller.new_options.GetInteger( 'max_connection_attempts_allowed' )

return self._current_connection_attempt_number <= max_connection_attempts_allowed

def _CanReattemptRequest( self ):

if self._this_is_a_one_shot_request:
return False

if self._method == 'GET':
max_attempts_allowed = 5
max_attempts_allowed = HG.client_controller.new_options.GetInteger( 'max_request_attempts_allowed_get' )
elif self._method == 'POST':
else:
max_attempts_allowed = 1

return self._current_connection_attempt_number <= max_attempts_allowed
return self._current_request_attempt_number <= max_attempts_allowed

def _GenerateModifiedDate( self, response: requests.Response ):
@@ -281,7 +295,12 @@ class NetworkJob( object ):

# the given struct is in GMT, so calendar.timegm is appropriate here

self._response_last_modified = int( calendar.timegm( struct_time ) )
last_modified_time = int( calendar.timegm( struct_time ) )

if ClientTime.TimestampIsSensible( last_modified_time ):
self._response_last_modified = last_modified_time

except:

@@ -613,9 +632,9 @@ class NetworkJob( object ):

self.engine.bandwidth_manager.ReportDataUsed( self._network_contexts, num_bytes )

def _ResetForAnotherConnectionAttempt( self ):
def _ResetForAnotherAttempt( self ):

self._current_connection_attempt_number += 1
self._current_request_attempt_number += 1

self._content_type = None
self._response_mime = None

@@ -631,6 +650,14 @@ class NetworkJob( object ):

self._number_of_concurrent_empty_chunks = 0

def _ResetForAnotherConnectionAttempt( self ):

self._ResetForAnotherAttempt()

self._current_connection_attempt_number += 1
self._current_request_attempt_number = 1

def _SendRequestAndGetResponse( self ) -> requests.Response:

with self._lock:
@@ -974,7 +1001,9 @@ class NetworkJob( object ):

serverside_bandwidth_wait_time = HG.client_controller.new_options.GetInteger( 'serverside_bandwidth_wait_time' )

self._serverside_bandwidth_wake_time = HydrusData.GetNow() + ( ( self._current_connection_attempt_number - 1 ) * serverside_bandwidth_wait_time )
problem_rating = ( self._current_connection_attempt_number + self._current_request_attempt_number ) - 1

self._serverside_bandwidth_wake_time = HydrusData.GetNow() + ( problem_rating * serverside_bandwidth_wait_time )

while not HydrusData.TimeHasPassed( self._serverside_bandwidth_wake_time ) and not self._IsCancelled():

@@ -1070,7 +1099,7 @@ class NetworkJob( object ):

with self._lock:

if self._max_connection_attempts_allowed == 1:
if self._this_is_a_one_shot_request:
return True

@@ -1328,7 +1357,7 @@ class NetworkJob( object ):

def OnlyTryConnectionOnce( self ):

self._max_connection_attempts_allowed = 1
self._this_is_a_one_shot_request = True

def OverrideBandwidth( self, delay = None ):
@@ -1589,7 +1618,7 @@ class NetworkJob( object ):

except HydrusExceptions.BandwidthException as e:

self._ResetForAnotherConnectionAttempt()
self._ResetForAnotherAttempt()

if self._CanReattemptRequest():

@@ -1604,7 +1633,7 @@ class NetworkJob( object ):

except HydrusExceptions.ShouldReattemptNetworkException as e:

self._ResetForAnotherConnectionAttempt()
self._ResetForAnotherAttempt()

if not self._CanReattemptRequest():

@@ -1615,7 +1644,7 @@ class NetworkJob( object ):

except requests.exceptions.ChunkedEncodingError:

self._ResetForAnotherConnectionAttempt()
self._ResetForAnotherAttempt()

if not self._CanReattemptRequest():

@@ -1649,7 +1678,7 @@ class NetworkJob( object ):

except requests.exceptions.ReadTimeout:

self._ResetForAnotherConnectionAttempt()
self._ResetForAnotherAttempt()

if not self._CanReattemptRequest():
@@ -80,7 +80,7 @@ options = {}

# Misc

NETWORK_VERSION = 20
SOFTWARE_VERSION = 488
SOFTWARE_VERSION = 489
CLIENT_API_VERSION = 31

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -373,7 +373,7 @@ class HydrusController( object ):

def ClearCaches( self ) -> None:

for cache in list(self._caches.values()): cache.Clear()
for cache in self._caches.values(): cache.Clear()

def CurrentlyIdle( self ) -> bool:

@@ -1121,9 +1121,9 @@ def LastShutdownWasBad( db_path, instance ):

return False

def MassUnion( lists ):
def MassUnion( iterables ):

return { item for item in itertools.chain.from_iterable( lists ) }
return { item for item in itertools.chain.from_iterable( iterables ) }

def MedianPop( population ):
@@ -327,6 +327,32 @@ def GetMime( path, ok_to_look_for_hydrus_updates = False ):

raise HydrusExceptions.ZeroSizeFileException( 'File is of zero length!' )

if ok_to_look_for_hydrus_updates and size < 64 * 1024 * 1024:

with open( path, 'rb' ) as f:
update_network_bytes = f.read()

try:

update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes )

if isinstance( update, HydrusNetwork.ContentUpdate ):
return HC.APPLICATION_HYDRUS_UPDATE_CONTENT
elif isinstance( update, HydrusNetwork.DefinitionsUpdate ):
return HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS

except:
pass

with open( path, 'rb' ) as f:
bit_to_check = f.read( 256 )

@@ -360,6 +386,13 @@ def GetMime( path, ok_to_look_for_hydrus_updates = False ):

if HydrusText.LooksLikeHTML( bit_to_check ):
return HC.TEXT_HTML

# it is important this goes at the end, because ffmpeg has a billion false positives!
# for instance, it once thought some hydrus update files were mpegs
try:

mime = HydrusVideoHandling.GetMime( path )

@@ -379,37 +412,6 @@ def GetMime( path, ok_to_look_for_hydrus_updates = False ):

HydrusData.PrintException( e, do_wait = False )

if ok_to_look_for_hydrus_updates:

with open( path, 'rb' ) as f:
update_network_bytes = f.read()

try:

update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes )

if isinstance( update, HydrusNetwork.ContentUpdate ):
return HC.APPLICATION_HYDRUS_UPDATE_CONTENT
elif isinstance( update, HydrusNetwork.DefinitionsUpdate ):
return HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS

except:
pass

if HydrusText.LooksLikeHTML( bit_to_check ):
return HC.TEXT_HTML

return HC.APPLICATION_UNKNOWN

def GetThumbnailMime( path ):
@@ -50,7 +50,7 @@ class HydrusLogger( object ):

self._log_file.close()

def _GetLogPath( self ) -> os.PathLike:
def _GetLogPath( self ) -> str:

current_time_struct = time.localtime()

@@ -44,7 +44,7 @@ HASH_TYPE_SHA512 = 3 # 64 bytes long

class HydrusRatingArchive( object ):

def __init__( self, path : os.PathLike ) -> None:
def __init__( self, path : str ) -> None:

self._path = path
@@ -1,3 +1,5 @@

import typing

import numpy
import numpy.typing
import os

@@ -984,7 +986,7 @@ def ParseFFMPEGVideoLine( lines, png_ok = False ) -> str:

return line

def ParseFFMPEGVideoResolution( lines, png_ok = False ) -> tuple[int,int]:
def ParseFFMPEGVideoResolution( lines, png_ok = False ) -> typing.Tuple[ int, int ]:

try: