Version 453

closes #961, closes #954, closes #904, closes #841
This commit is contained in:
Hydrus Network Developer 2021-09-01 16:09:01 -05:00
parent 8a116cc79f
commit 2daf09a743
34 changed files with 1273 additions and 403 deletions

View File

@ -55,7 +55,7 @@
<p>Once you are comfortable with manually setting tags and ratings, you may be interested in setting some shortcuts to do it quicker. Try hitting <i>file->shortcuts</i> or clicking the keyboard icon on any media viewer window's top hover window.</p>
<p>There are two kinds of shortcuts in the program--<i>reserved</i>, which have fixed names, are undeletable, and are always active in certain contexts (related to their name), and <i>custom</i>, which you create and name and edit and are only active in a media viewer when you want them to. You can redefine some simple shortcut commands, but most importantly, you can create shortcuts for adding/removing a tag or setting/unsetting a rating.</p>
<p>Use the same 'keyboard' icon to set the current and default custom shortcuts. </p>
<h3 id="finding_duplicates"><a href="#finding_duplicates">finding duplicates</a></h3>
<h3 id="finding_duplicates"><a href="#finding_duplicates">finding duplicates</a></h3>
<p><i>system:similar_to</i> lets you run the duplicates processing page's searches manually. You can either insert the hash and hamming distance manually, or you can launch these searches automatically from the thumbnail <i>right-click->find similar files</i> menu. For example:</p>
<p><img src="similar_gununu.png" /></p>
<h3 id="file_import_errors"><a href="#file_import_errors">truncated/malformed file import errors</a></h3>

View File

@ -32,6 +32,26 @@
<p><img src="tag_parents_ac_2.png" /></p>
<h3 id="remote_parents"><a href="#remote_parents">remote parents</a></h3>
<p>Whenever you add or remove a tag parent pair to a tag repository, you will have to supply a reason (like when you petition a tag). A janitor will review this petition, and will approve or deny it. If it is approved, all users who synchronise with that tag repository will gain that parent pair. If it is denied, only you will see it.</p>
<h3 id="parent_favourites"><a href="#parent_favourites">parent 'favourites'</a></h3>
<p>As you use the client, you will likely make several processing workflows to archive/delete your different sorts of imports. You don't always want to go through things randomly--you might want to do some big videos for a bit, or focus on a particular character. A common search page is something like <code>[system:inbox, creator:blah, limit:256]</code>, which will show a sample of a creator in your inbox, so you can process just that creator. This is easy to set up and save in your favourite searches and quick to run, so you can load it up, do some archive/delete, and then dismiss it without too much hassle.</p>
<p>But what happens if you want to search for multiple creators? You might be tempted to make a large OR search predicate, like <code>creator:aaa OR creator:bbb OR creator:ccc OR creator:ddd</code>, of all your favourite creators so you can process them together as a 'premium' group. But if you want to add or remove a creator from that long OR, it can be cumbersome. And OR searches can just run slow sometimes. One answer is to use the new tag parents tools to apply a 'favourite' parent on all the artists and then search for that favourite.</p>
<p>Let's assume you want to search a bunch of 'creator' tags on the PTR. What you will do is:</p>
<ul>
<li>Create a new 'local tag service' in <i>manage services</i> called 'my parent favourites'. This will hold our subjective parents without uploading anything to the PTR.</li>
<li>Go to <i>tags->manage where tag siblings and parents apply</i> and add 'my parent favourites' as the top priority for parents, leaving 'PTR' as second priority.</li>
<li>
<p>Under <i>tags->manage tag parents</i>, on your 'my parent favourites' service, add:</p>
<ul>
<li><code>creator:aaa->favourite:aesthetic art</code></li>
<li><code>creator:bbb->favourite:aesthetic art</code></li>
<li><code>creator:ccc->favourite:aesthetic art</code></li>
<li><code>creator:ddd->favourite:aesthetic art</code></li>
</ul>
<p>Watch/wait a few seconds for the parents to apply across the PTR for those creator tags.</p>
</li>
<li>Then save a new favourite search of <code>[system:inbox, favourite:aesthetic art, limit:256]</code>. This search will deliver results with any of the child 'creator' tags, just like a big OR search, and real fast!</li>
</ul>
<p>If you want to add or remove any creators to the 'aesthetic art' group, you can simply go back to <i>tags->manage tag parents</i>, and it will apply everywhere. You can create more umbrella/group tags if you like (and not just creators--think about clothing, or certain characters), and also use them in regular searches when you just want to browse some cool files.</p>
</div>
</body>
</html>

View File

@ -8,6 +8,38 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_453"><a href="#version_453">version 453</a></h3></li>
<ul>
<li>qol and misc:</li>
<li>the network job status labels around waiting for 'subscription'/'download page'/'watcher' forced wait slots are reworded. now they just say a plainer 'waiting to work' with a time estimate, and if a job does not get a chance to work this check cycle, it says 'a different xxx got the chance to work' for a few seconds.</li>
<li>if a network job does not get bandwidth on a check cycle, it now says 'a different network job got the bandwidth' for a few seconds</li>
<li>when waiting on bandwidth or gallery work, network jobs should count down more smoothly, one second at a time, not skip a second so often</li>
<li>network job widgets are now better about updating the layout of their two text labels. the status text on the left should take all the available pixels much better, sharing with the '64KB/s' speed text as it changes width and disappears</li>
<li>added a new user-made darkmode QSS stylesheet called 'OledBlack' to default hydrus, try it out under _options->style_</li>
<li>if the tag domain in a search page is other than 'all known tags', the 'selection tags' box, which limits itself to the current domain's tags, now explicitly labels itself with that domain</li>
<li>consolidated and optimised the pre-work checks on all importers/downloaders in pages. pages with idling/finished/paused downloaders will consume just a little less CPU and need to talk to fewer important objects</li>
<li>renamed the shortcut sets for viewer/preview media windows and clarified that they are mouse only for now. the new seek command works with these, but you'd have to map ctrl+right-click or something</li>
<li>improved the system predicate unit tests to catch datatype problems like with last week's hotfix and system:time imported</li>
<li>advanced archive/delete stuff: wrote up a neat idea I had about using local parents applied to the PTR to make fast multi-tag processing workflows here: https://hydrusnetwork.github.io/hydrus/help/advanced_parents.html#parent_favourites</li>
<li>.</li>
<li>bug fixes:</li>
<li>an important tag search bug is fixed. for some users, files that were imported before a service was added were not appearing in some of that service's search results, or their tag counts were not added in certain tag autocomplete results. this file miscount is fixed, and holes will be filled on database update. it should not take too long to fix, although different users will have different situations</li>
<li>this bug was leading to artificially fast PTR processing speeds on some clients as their older files were being skipped. if you have used the client for a long time but only added the PTR recently, sorry if you notice it slow down! it is now working correctly!</li>
<li>fixed an important bug in the image rendering system that was causing tile artifacts (little lines of double-pixel jank along tile borders) at a variety of regular zoom levels. the way ideal tile size was being calculated was often incorrect, so I have replaced it with a better calculation</li>
<li>the system predicate parser can now parse 'system:is not the best quality file of its duplicate group' (only 'isn't' was working, previously) (issue #954)</li>
<li>if the collect-by dropdown is fed garbage namespace data from the namespace sort options, it now recovers with a nicer error message (issue #904)</li>
<li>misc db code cleanup and minor refactoring</li>
<li>.</li>
<li>client api:</li>
<li>OR predicates are now supported in the client api! Just nest within the tag list, and it'll bundle the nested list into an OR. there's an example in the client api help</li>
<li>some permissions testing in file search is tightened up--now that we have OR and system predicates, if you do not submit any regular positive tags, the search permissions have to be 'allow anything'</li>
<li>fixed an issue where the client api would let you ask about sha256 hashes of incorrect length (and would ultimately make a master database id for these borked hashes, even the empty string!!). now the client api throws a 400</li>
<li>fixed a bug in /manage_pages/get_pages where all pages were marked as 'selected'=true (issue #841)</li>
<li>in the client api, if you use missing file_id(s) on a request for a file, thumbnail, metadata about a file, or when trying to add files to a page, it now gives 404 correctly (rather than 500) (issue #961)</li>
<li>added a section to the client api help on variable encoding, including an example of how to convert a python tag list to JSON+URL encoded string</li>
<li>added new unit tests for OR pred parsing and the hash length check</li>
<li>client api version is now 20</li>
</ul>
<li><h3 id="version_452"><a href="#version_452">version 452</a></h3></li>
<ul>
<li>misc:</li>

View File

@ -10,7 +10,7 @@
<p>The hydrus client now supports a very simple API so you can access it with external programs.</p>
<p>By default, the Client API is not turned on. Go to <i>services->manage services</i> and give it a port to get it started. I recommend you not allow non-local connections (i.e. only requests from the same computer will work) to start with.</p>
<p>The Client API should start immediately. It will only be active while the client is open. To test that it is running correctly (and assuming you used the default port of 45869), try loading this:</p>
<a href="http://127.0.0.1:45869"><pre>http://127.0.0.1:45869</pre></a>
<a href="http://127.0.0.1:45869"><code>http://127.0.0.1:45869</code></a>
<p>You should get a welcome page. By default, the Client API is HTTP, which means it is ok for communication on the same computer or across your home network (e.g. your computer's web browser talking to your computer's hydrus), but not secure for transmission across the internet (e.g. your phone to your home computer). You can turn on HTTPS, but due to technical complexities it will give itself a self-signed 'certificate', so the security is good but imperfect, and whatever is talking to it (e.g. your web browser looking at <a href="https://127.0.0.1:45869">https://127.0.0.1:45869</a>) may need to add an exception.</p>
<p>The Client API is still experimental and sometimes not user friendly. If you want to talk to your home computer across the internet, you will need some networking experience. You'll need a static IP or reverse proxy service or dynamic domain solution like no-ip.org so your device can locate it, and potentially port-forwarding on your router to expose the port. If you have a way of hosting a domain and have a signed certificate (e.g. from <a href="https://letsencrypt.org/">Let's Encrypt</a>), you can overwrite the client.crt and client.key files in your 'db' directory and HTTPS hydrus should host with those.</p>
<p>Once the API is running, go to its entry in <i>services->review services</i>. Each external program trying to access the API will need its own access key, which is the familiar 64-character hexadecimal used in many places in hydrus. You can enter the details manually from the review services panel and then copy/paste the key to your external program, or the program may have the ability to request its own access while a mini-dialog launched from the review services panel waits to catch the request.</p>
@ -33,8 +33,31 @@
<li><a href="https://github.com/cravxx/hydrus.js">https://github.com/cravxx/hydrus.js</a> - A node.js module that talks to the API.</li>
</ul>
<h3 id="api"><a href="#api">API</a></h3>
<p>On 200 OK, the API returns JSON for everything except actual file/thumbnail requests. On 4XX and 5XX, assume it will return plain text, sometimes a raw traceback. You'll typically get 400 for a missing parameter, 401/403/419 for missing/insufficient/expired access, and 500 for a real deal serverside error.</p>
<h3 id="access"><a href="#access">Access and permissions</a></h3>
<p>In general, the API deals with standard UTF-8 JSON. POST requests and 200 OK responses are generally going to be a JSON 'Object' with variable names as keys and values obviously as values. There are examples throughout this document. For GET requests, everything is in standard GET parameters, but some variables are complicated and will need to be JSON encoded and then URL encoded. An example would be the 'tags' parameter on <a href="#get_files_search_files">GET /get_files/search_files</a>, which is a list of strings. Since GET http URLs have limits on what characters are allowed, but hydrus tags can have all sorts of characters, you'll be doing this:</p>
<ul>
<li>
<p>Your list of tags:</p>
<p><code>[ 'character:samus aran', 'creator:青い桜', 'system:height > 2000' ]</code></p>
</li>
<li>
<p>JSON encoded:</p>
<p><code>["character:samus aran", "creator:\\u9752\\u3044\\u685c", "system:height > 2000"]</code></p>
</li>
<li>
<p>Then URL encoded:</p>
<p><code>%5B%22character%3Asamus%20aran%22%2C%20%22creator%3A%5Cu9752%5Cu3044%5Cu685c%22%2C%20%22system%3Aheight%20%3E%202000%22%5D</code></p>
</li>
<li>
<p>In python, converting your tag list to the URL encoded string would be:</p>
<p><code>urllib.parse.quote( json.dumps( tag_list ) )</code></p>
</li>
<li>
<p>Full URL path example:</p>
<p><code>/get_files/search_files?file_sort_type=6&file_sort_asc=false&tags=%5B%22character%3Asamus%20aran%22%2C%20%22creator%3A%5Cu9752%5Cu3044%5Cu685c%22%2C%20%22system%3Aheight%20%3E%202000%22%5D</code></p>
</li>
</ul>
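<p>Putting it all together, here is a minimal python sketch of the whole conversion chain; it just reuses the default port and the <a href="#get_files_search_files">GET /get_files/search_files</a> path from above:</p>
<pre>import json
import urllib.parse

tag_list = [ 'character:samus aran', 'creator:青い桜', 'system:height > 2000' ]

# JSON encode, then URL encode, exactly as in the steps above
tags_param = urllib.parse.quote( json.dumps( tag_list ) )

url = 'http://127.0.0.1:45869/get_files/search_files?tags=' + tags_param</pre>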
<p>On 200 OK, the API returns JSON for everything except actual file/thumbnail requests. On 4XX and 5XX, assume it will return plain text, which may be a raw traceback that I'd be interested in seeing. You'll typically get 400 for a missing parameter, 401/403/419 for missing/insufficient/expired access, and 500 for a real deal serverside error.</p>
<h3 id="access"><a href="#access">Access and permissions</a></h3>
<p>The client gives access to its API through different 'access keys', which are the typical 64-character hex used in many other places across hydrus. Each guarantees different permissions such as handling files or tags. Most of the time, a user will provide full access, but do not assume this. If the access header or parameter is not provided, you will get 401, and all insufficient permission problems will return 403 with appropriate error text.</p>
<p>Access is required for every request. You can provide this as an http header, like so:</p>
<ul>
@ -1141,9 +1164,9 @@
<li><p>/get_files/search_files?tags=%5B%22blue%20eyes%22%2C%20%22blonde%20hair%22%2C%20%22%5Cu043a%5Cu0438%5Cu043d%5Cu043e%22%2C%20%22system%3Ainbox%22%2C%20%22system%3Alimit%3D16%22%5D</p></li>
</ul>
</li>
<p>If the access key's permissions only permit search for certain tags, at least one whitelisted/non-blacklisted tag must be in the "tags" list or this will 403. Tags can be prepended with a hyphen to make a negated tag (e.g. "-green eyes"), but these will not be checked against the permissions whitelist.</p>
<p>If the access key's permissions only permit search for certain tags, at least one positive whitelisted/non-blacklisted tag must be in the "tags" list or this will 403. Tags can be prepended with a hyphen to make a negated tag (e.g. "-green eyes"), but these will not be checked against the permissions whitelist.</p>
<p>Wildcards and namespace searches are supported, so if you search for 'character:sam*' or 'series:*', this will be handled correctly clientside.</p>
<p>Many system predicates are also supported using a text parser! The parser was designed by a clever user for human input and allows for a certain amount of error (e.g. ~= instead of ≈, or "isn't" instead of "is not") or requires more information (e.g. the specific hashes for a hash lookup). Here's a big list of current formats supported:</p>
<ul>
<li>system:everything</li>
<li>system:inbox</li>
@ -1217,6 +1240,18 @@
<li>system:tag as number page &lt; 5</li>
</ul>
<p>More system predicate types and input formats will be available in future. Please test out the system predicates you want to send. Reverse engineering system predicate data from text is obviously tricky. If a system predicate does not parse, you'll get 400.</p>
<p>Also, OR predicates are now supported! Just nest within the tag list, and it'll be treated like an OR. For instance:</p>
<ul>
<li>
<code>[ "skirt", [ "samus aran", "lara croft" ], "system:height > 1000" ]</code>
<p>Makes:</p>
<ul>
<li>skirt</li>
<li>samus aran OR lara croft</li>
<li>system:height > 1000</li>
</ul>
</li>
</ul>
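<p>Here is a minimal python sketch of sending that OR search. It assumes the usual 'Hydrus-Client-API-Access-Key' header and the 'file_ids' key in the response JSON; the key value is a placeholder:</p>
<pre>import json
import urllib.parse
import urllib.request

# 'samus aran' and 'lara croft' are nested in their own list, so they become one OR predicate
tags = [ 'skirt', [ 'samus aran', 'lara croft' ], 'system:height > 1000' ]

url = 'http://127.0.0.1:45869/get_files/search_files?tags=' + urllib.parse.quote( json.dumps( tags ) )

request = urllib.request.Request( url, headers = { 'Hydrus-Client-API-Access-Key' : '0123456789abcdef' * 4 } ) # your 64-character hex access key

with urllib.request.urlopen( request ) as response:
    file_ids = json.loads( response.read() )[ 'file_ids' ]</pre>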
<p>The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'my files' and 'all known tags', and you can use either key or name as in <a href="#get_services">GET /get_services</a>, whichever is easiest for your situation.</p>
<p>file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending.</p>
<p>file_sort_type is by default <i>import time</i>. It is an integer according to the following enum, and I have written the semantic (asc/desc) meaning for each type after:</p>

View File

@ -274,14 +274,6 @@ def GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ):
return ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name )
def GenerateSpecificFilesTableName( file_service_id, tag_service_id ):
suffix = '{}_{}'.format( file_service_id, tag_service_id )
cache_files_table_name = 'external_caches.specific_files_cache_{}'.format( suffix )
return cache_files_table_name
def GenerateSpecificIntegerSubtagsTableName( file_service_id, tag_service_id ):
name = 'specific_integer_subtags_cache'
@ -1992,10 +1984,6 @@ class DB( HydrusDB.HydrusDB ):
def _CacheSpecificMappingsAddFiles( self, file_service_id, tag_service_id, hash_ids, hash_ids_table_name ):
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
self._Execute( 'INSERT OR IGNORE INTO {} SELECT hash_id FROM {};'.format( cache_files_table_name, hash_ids_table_name ) )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id )
@ -2101,13 +2089,10 @@ class DB( HydrusDB.HydrusDB ):
def _CacheSpecificMappingsClear( self, file_service_id, tag_service_id, keep_pending = False ):
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
specific_ac_cache_table_name = GenerateSpecificACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, file_service_id, tag_service_id )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
self._Execute( 'DELETE FROM {};'.format( cache_files_table_name ) )
self._Execute( 'DELETE FROM {};'.format( cache_current_mappings_table_name ) )
self._Execute( 'DELETE FROM {};'.format( cache_deleted_mappings_table_name ) )
@ -2117,8 +2102,6 @@ class DB( HydrusDB.HydrusDB ):
self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( specific_ac_cache_table_name ) )
self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id ) SELECT DISTINCT hash_id FROM {};'.format( cache_files_table_name, cache_pending_mappings_table_name ) )
else:
self._Execute( 'DELETE FROM {};'.format( cache_pending_mappings_table_name ) )
@ -2130,13 +2113,10 @@ class DB( HydrusDB.HydrusDB ):
def _CacheSpecificMappingsDrop( self, file_service_id, tag_service_id ):
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
specific_ac_cache_table_name = GenerateSpecificACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, file_service_id, tag_service_id )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_files_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_current_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_deleted_mappings_table_name ) )
self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_pending_mappings_table_name ) )
@ -2149,10 +2129,6 @@ class DB( HydrusDB.HydrusDB ):
self._CacheSpecificDisplayMappingsDeleteFiles( file_service_id, tag_service_id, hash_ids, hash_id_table_name )
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
self._ExecuteMany( 'DELETE FROM ' + cache_files_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
# temp hashes to mappings
@ -2229,14 +2205,10 @@ class DB( HydrusDB.HydrusDB ):
def _CacheSpecificMappingsGenerate( self, file_service_id, tag_service_id ):
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
specific_ac_cache_table_name = GenerateSpecificACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, file_service_id, tag_service_id )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
self._Execute( 'CREATE TABLE ' + cache_files_table_name + ' ( hash_id INTEGER PRIMARY KEY );' )
self._Execute( 'CREATE TABLE ' + specific_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' )
self._Execute( 'CREATE TABLE ' + cache_current_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' )
@ -2263,7 +2235,6 @@ class DB( HydrusDB.HydrusDB ):
self.modules_db_maintenance.AnalyzeTable( cache_files_table_name )
self.modules_db_maintenance.AnalyzeTable( specific_ac_cache_table_name )
self.modules_db_maintenance.AnalyzeTable( cache_current_mappings_table_name )
self.modules_db_maintenance.AnalyzeTable( cache_deleted_mappings_table_name )
@ -2280,7 +2251,7 @@ class DB( HydrusDB.HydrusDB ):
for file_service_id in file_service_ids:
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( file_service_id, temp_table_name )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( file_service_id, temp_table_name, HC.CONTENT_STATUS_CURRENT )
valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
@ -2301,7 +2272,7 @@ class DB( HydrusDB.HydrusDB ):
for file_service_id in file_service_ids:
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( file_service_id, temp_table_name )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( file_service_id, temp_table_name, HC.CONTENT_STATUS_CURRENT )
valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
@ -2339,7 +2310,6 @@ class DB( HydrusDB.HydrusDB ):
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id )
cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
if status_hook is not None:
@ -2359,6 +2329,8 @@ class DB( HydrusDB.HydrusDB ):
num_to_do = len( all_pending_storage_tag_ids )
select_table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( file_service_id, pending_mappings_table_name, HC.CONTENT_STATUS_CURRENT )
for ( i, storage_tag_id ) in enumerate( all_pending_storage_tag_ids ):
if i % 100 == 0 and status_hook is not None:
@ -2368,7 +2340,7 @@ class DB( HydrusDB.HydrusDB ):
status_hook( message )
self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( cache_pending_mappings_table_name, pending_mappings_table_name, cache_files_table_name ), ( storage_tag_id, ) )
self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT tag_id, hash_id FROM {} WHERE tag_id = ?;'.format( cache_pending_mappings_table_name, select_table_join ), ( storage_tag_id, ) )
pending_delta = self._GetRowCount()
@ -9128,11 +9100,6 @@ class DB( HydrusDB.HydrusDB ):
file_service_id = self.modules_services.GetServiceId( file_service_key )
tag_service_id = self.modules_services.GetServiceId( tag_search_context.service_key )
if hash_ids_table_name is None and file_service_id != self.modules_services.combined_file_service_id and tag_service_id != self.modules_services.combined_tag_service_id:
hash_ids_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
mapping_and_tag_table_names = self._GetMappingAndTagTables( tag_display_type, file_service_key, tag_search_context )
@ -14430,13 +14397,13 @@ class DB( HydrusDB.HydrusDB ):
name = service.GetName()
cache_files_table_name = GenerateSpecificFilesTableName( self.modules_services.combined_local_file_service_id, tag_service_id )
( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( self.modules_services.combined_local_file_service_id, tag_service_id )
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id )
select_statement = 'SELECT hash_id FROM {};'.format( cache_files_table_name )
current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT )
select_statement = 'SELECT hash_id FROM {};'.format( current_files_table_name )
for ( group_of_hash_ids, num_done, num_to_do ) in HydrusDB.ReadLargeIdQueryInSeparateChunks( self._c, select_statement, BLOCK_SIZE ):
@ -16418,7 +16385,7 @@ class DB( HydrusDB.HydrusDB ):
self._controller.frame_splash_status.SetSubtext( 'scheduling PSD files for thumbnail regen' )
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.combined_local_file_service_id, 'files_info' )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( self.modules_services.combined_local_file_service_id, 'files_info', HC.CONTENT_STATUS_CURRENT )
hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} WHERE mime = ?;'.format( table_join ), ( HC.APPLICATION_PSD, ) ) )
@ -16606,6 +16573,54 @@ class DB( HydrusDB.HydrusDB ):
if version == 452:
file_service_ids = self.modules_services.GetServiceIds( HC.TAG_CACHE_SPECIFIC_FILE_SERVICES )
tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
for ( file_service_id, tag_service_id ) in itertools.product( file_service_ids, tag_service_ids ):
suffix = '{}_{}'.format( file_service_id, tag_service_id )
cache_files_table_name = 'external_caches.specific_files_cache_{}'.format( suffix )
result = self._Execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( cache_files_table_name.split( '.', 1 )[1], ) ).fetchone()
if result is None:
continue
self._controller.frame_splash_status.SetText( 'filling holes in specific tags cache - {} {}'.format( file_service_id, tag_service_id ) )
# it turns out cache_files_table_name was not being populated on service creation/reset, so files imported before a tag service was created were not being stored in specific mapping cache data!
# furthermore, there was confusion whether cache_files_table_name was for mappings (files that have tags) on the tag service or just files on the file service.
# since we now store current files for each file service on a separate table, and the clever mappings interpretation seems expensive and not actually so useful, we are moving to our nice table instead in various joins/filters/etc...
current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( file_service_id, HC.CONTENT_STATUS_CURRENT )
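# EXCEPT gives the hash_ids that are in the service's current files table but missing from the old cache table--these are the holes we need to fill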
query = 'SELECT hash_id FROM {} EXCEPT SELECT hash_id FROM {};'.format( current_files_table_name, cache_files_table_name )
BLOCK_SIZE = 10000
for ( group_of_hash_ids, num_done, num_to_do ) in HydrusDB.ReadLargeIdQueryInSeparateChunks( self._c, query, BLOCK_SIZE ):
message = HydrusData.ConvertValueRangeToPrettyString( num_done, num_to_do )
self._controller.frame_splash_status.SetSubtext( message )
with self._MakeTemporaryIntegerTable( group_of_hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
self._CacheSpecificMappingsAddFiles( file_service_id, tag_service_id, group_of_hash_ids, temp_hash_ids_table_name )
self._CacheSpecificDisplayMappingsAddFiles( file_service_id, tag_service_id, group_of_hash_ids, temp_hash_ids_table_name )
self._Execute( 'DROP TABLE {};'.format( cache_files_table_name ) )
self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )

View File

@ -61,7 +61,7 @@ class DBLocationSearchContext( object ):
return self.location_search_context
def GetFileIteratorTableJoin( self, table_phrase: str ):
def GetTableJoinIteratedByFileDomain( self, table_phrase: str ):
if self.location_search_context.IsAllKnownFiles():
@ -423,13 +423,6 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return rows
def GetCurrentTableJoinPhrase( self, service_id, table_name ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
return '{} CROSS JOIN {} USING ( hash_id )'.format( table_name, current_files_table_name )
def GetCurrentTimestamp( self, service_id: int, hash_id: int ):
current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
@ -686,6 +679,20 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
return petitioned_rows
def GetTableJoinIteratedByFileDomain( self, service_id, table_name, status ):
files_table_name = GenerateFilesTableName( service_id, status )
return '{} CROSS JOIN {} USING ( hash_id )'.format( files_table_name, table_name )
def GetTableJoinLimitedByFileDomain( self, service_id, table_name, status ):
files_table_name = GenerateFilesTableName( service_id, status )
return '{} CROSS JOIN {} USING ( hash_id )'.format( table_name, files_table_name )
def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
tables_and_columns = []

View File

@ -184,7 +184,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( self.modules_services.local_update_service_id, repository_updates_table_name, HC.CONTENT_STATUS_CURRENT )
update_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
@ -279,7 +279,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( num_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {}'.format( repository_updates_table_name ) ).fetchone()
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( self.modules_services.local_update_service_id, repository_updates_table_name, HC.CONTENT_STATUS_CURRENT )
( num_local_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( table_join ) ).fetchone()
@ -402,7 +402,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
all_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} ORDER BY update_index ASC;'.format( repository_updates_table_name ) ) )
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
table_join = self.modules_files_storage.GetTableJoinLimitedByFileDomain( self.modules_services.local_update_service_id, repository_updates_table_name, HC.CONTENT_STATUS_CURRENT )
existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )

View File

@ -182,12 +182,12 @@ shortcut_names_to_pretty_names[ 'global' ] = 'global'
shortcut_names_to_pretty_names[ 'main_gui' ] = 'the main window'
shortcut_names_to_pretty_names[ 'tags_autocomplete' ] = 'tag autocomplete'
shortcut_names_to_pretty_names[ 'media' ] = 'media actions, either thumbnails or the viewer'
shortcut_names_to_pretty_names[ 'media_viewer' ] = 'media viewers - all (zoom and pan)'
shortcut_names_to_pretty_names[ 'media_viewer_browser' ] = 'media viewer - \'normal\' browser'
shortcut_names_to_pretty_names[ 'archive_delete_filter' ] = 'media viewer - archive/delete filter'
shortcut_names_to_pretty_names[ 'duplicate_filter' ] = 'media viewer - duplicate filter'
shortcut_names_to_pretty_names[ 'preview_media_window' ] = 'media viewer - the preview window'
shortcut_names_to_pretty_names[ 'media_viewer_media_window' ] = 'the actual media in a media viewer'
shortcut_names_to_pretty_names[ 'media_viewer' ] = 'media viewers - all'
shortcut_names_to_pretty_names[ 'media_viewer_browser' ] = 'media viewers - \'normal\' browser'
shortcut_names_to_pretty_names[ 'archive_delete_filter' ] = 'media viewers - archive/delete filter'
shortcut_names_to_pretty_names[ 'duplicate_filter' ] = 'media viewers - duplicate filter'
shortcut_names_to_pretty_names[ 'preview_media_window' ] = 'the actual media in a preview window (mouse only)'
shortcut_names_to_pretty_names[ 'media_viewer_media_window' ] = 'the actual media in a media viewer (mouse only)'
shortcut_names_sorted = [
'global',
@ -212,8 +212,8 @@ shortcut_names_to_descriptions[ 'main_gui' ] = 'Actions to control pages in the
shortcut_names_to_descriptions[ 'tags_autocomplete' ] = 'Actions to control tag autocomplete when its input text box is focused.'
shortcut_names_to_descriptions[ 'media_viewer_browser' ] = 'Navigation actions for the regular browsable media viewer.'
shortcut_names_to_descriptions[ 'media_viewer' ] = 'Zoom and pan and player actions for any media viewer.'
shortcut_names_to_descriptions[ 'media_viewer_media_window' ] = 'Actions for any video or audio player in a media viewer window.'
shortcut_names_to_descriptions[ 'preview_media_window' ] = 'Actions for any video or audio player in a preview window.'
shortcut_names_to_descriptions[ 'media_viewer_media_window' ] = 'Actions for any video or audio player in a media viewer window. Mouse only!'
shortcut_names_to_descriptions[ 'preview_media_window' ] = 'Actions for any video or audio player in a preview window. Mouse only!'
# shortcut commands

View File

@ -2603,7 +2603,16 @@ class CollectComboCtrl( QW.QComboBox ):
namespaces = media_sort.GetNamespaces()
text_and_data_tuples.update( namespaces )
try:
text_and_data_tuples.update( namespaces )
except:
HydrusData.DebugPrint( 'Bad namespaces: {}'.format( namespaces ) )
HydrusData.ShowText( 'Hey, your namespace-based sorts are likely damaged. Details have been written to the log, please let hydev know!' )
text_and_data_tuples = sorted( ( ( namespace, ( 'namespace', namespace ) ) for namespace in text_and_data_tuples ) )
@ -2614,8 +2623,8 @@ class CollectComboCtrl( QW.QComboBox ):
text_and_data_tuples.append( ( ratings_service.GetName(), ('rating', ratings_service.GetServiceKey() ) ) )
for (text, data) in text_and_data_tuples:
for ( text, data ) in text_and_data_tuples:
self.Append( text, data )

View File

@ -1,3 +1,4 @@
import fractions
import itertools
import typing
@ -1680,28 +1681,41 @@ class StaticImage( QW.QWidget ):
def _ClearCanvasTileCache( self ):
if self._media is None:
if self._media is None or self.width() == 0 or self.height() == 0:
self._zoom = 1.0
tile_dimension = 0
else:
self._zoom = self.width() / self._media.GetResolution()[ 0 ]
# it is most convenient to have tiles that line up with the current zoom ratio
# 768 is a convenient size for meaty GPU blitting, but as a number it doesn't make for nice multiplication
# a 'nice' size is one that divides nicely by our zoom, so that integer translations between canvas and native res aren't losing too much in the float remainder
if self.width() == 0 or self.height() == 0:
# it is most convenient to have tiles that line up with the current zoom ratio
# 768 is a convenient size for meaty GPU blitting, but as a number it doesn't make for nice multiplication
tile_dimension = 0
# a 'nice' size is one that divides nicely by our zoom, so that integer translations between canvas and native res aren't losing too much in the float remainder
else:
# the trick of going ( 123456 // 16 ) * 16 to give you a nice multiple of 16 does not work with floats like 1.4 lmao.
# what we can do instead is phrase 1.4 as 7/5 and use 7 as our int. any number cleanly divisible by 7 is cleanly divisible by 1.4
# the max( x, 1 ) bit here ensures that superzoomed 1px things just get one tile
tile_dimension = round( max( ( 768 // self._zoom ), 1 ) * self._zoom )
ideal_tile_dimension = 768
frac = fractions.Fraction( self._zoom ).limit_denominator( 100 )
n = frac.numerator
if n > ideal_tile_dimension:
# we are in extreme zoom land. nice multiples are impossible with reasonable size tiles, so we'll have to settle for some problems
tile_dimension = ideal_tile_dimension
else:
tile_dimension = ( ideal_tile_dimension // n ) * n
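# e.g. a zoom of 1.4 becomes Fraction( 7, 5 ), so n = 7 and tile_dimension = ( 768 // 7 ) * 7 = 763, which is cleanly divisible by 1.4 ( 763 / 1.4 = 545 )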
tile_dimension = max( min( tile_dimension, 2048 ), 1 )
self._canvas_tile_size = QC.QSize( tile_dimension, tile_dimension )

View File

@ -3732,6 +3732,10 @@ class StaticBoxSorterForListBoxTags( ClientGUICommon.StaticBox ):
ClientGUICommon.StaticBox.__init__( self, parent, title )
self._original_title = title
self._tags_box = None
# make this its own panel
self._tag_sort = ClientGUITagSorting.TagSortControl( self, HG.client_controller.new_options.GetDefaultTagSort(), show_siblings = show_siblings_sort )
@ -3742,11 +3746,30 @@ class StaticBoxSorterForListBoxTags( ClientGUICommon.StaticBox ):
def SetTagServiceKey( self, service_key ):
if self._tags_box is None:
return
self._tags_box.SetTagServiceKey( service_key )
title = self._original_title
if service_key != CC.COMBINED_TAG_SERVICE_KEY:
title = '{} for {}'.format( title, HG.client_controller.services_manager.GetName( service_key ) )
self.SetTitle( title )
def EventSort( self ):
if self._tags_box is None:
return
sort = self._tag_sort.GetValue()
self._tags_box.SetSort( sort )
@ -3761,6 +3784,11 @@ class StaticBoxSorterForListBoxTags( ClientGUICommon.StaticBox ):
def SetTagsByMedia( self, media ):
if self._tags_box is None:
return
self._tags_box.SetTagsByMedia( media )

View File

@ -181,7 +181,7 @@ class NetworkJobControl( QW.QFrame ):
if not self._network_job.TokensOK():
ClientGUIMenus.AppendMenuItem( menu, 'override gallery slot requirements for this job', 'Force-allow this download to proceed, ignoring the normal gallery wait times.', self._network_job.OverrideToken )
ClientGUIMenus.AppendMenuItem( menu, 'override forced gallery wait times for this job', 'Force-allow this download to proceed, ignoring the normal gallery wait times.', self._network_job.OverrideToken )
ClientGUIMenus.AppendSeparator( menu )
@ -226,23 +226,16 @@ class NetworkJobControl( QW.QFrame ):
if self._network_job is None or self._network_job.NoEngineYet():
can_cancel = False
self._left_text.clear()
self._right_text.clear()
self._gauge.SetRange( 1 )
self._gauge.SetValue( 0 )
can_cancel = False
else:
if self._network_job.IsDone():
can_cancel = False
else:
can_cancel = True
can_cancel = not self._network_job.IsDone()
( status_text, current_speed, bytes_read, bytes_to_read ) = self._network_job.GetStatus()
@ -267,36 +260,40 @@ class NetworkJobControl( QW.QFrame ):
self._right_text.setText( speed_text )
show_right_text = speed_text != ''
right_width = ClientGUIFunctions.ConvertTextToPixelWidth( self._right_text, len( speed_text ) )
right_min_width = right_width
if right_min_width != self._last_right_min_width:
if self._right_text.isVisible() != show_right_text:
self._last_right_min_width = right_min_width
self._right_text.setVisible( show_right_text )
self._right_text.setMinimumWidth( right_min_width )
self.updateGeometry()
if speed_text != '':
self._right_text.setText( speed_text )
right_width = ClientGUIFunctions.ConvertTextToPixelWidth( self._right_text, len( speed_text ) )
right_min_width = right_width
if right_min_width != self._last_right_min_width:
self._last_right_min_width = right_min_width
self._right_text.setMinimumWidth( right_min_width )
self.updateGeometry()
self._gauge.SetRange( bytes_to_read )
self._gauge.SetValue( bytes_read )
if can_cancel:
if self._cancel_button.isEnabled() != can_cancel:
if not self._cancel_button.isEnabled():
self._cancel_button.setEnabled( True )
else:
if self._cancel_button.isEnabled():
self._cancel_button.setEnabled( False )
self._cancel_button.setEnabled( can_cancel )

View File

@ -857,13 +857,13 @@ class ManagementPanel( QW.QScrollArea ):
def _MakeCurrentSelectionTagsBox( self, sizer ):
tags_box = ClientGUIListBoxes.StaticBoxSorterForListBoxTags( self, 'selection tags' )
self._current_selection_tags_box = ClientGUIListBoxes.StaticBoxSorterForListBoxTags( self, 'selection tags' )
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( tags_box, self._management_controller, self._page_key )
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( self._current_selection_tags_box, self._management_controller, self._page_key )
tags_box.SetTagsBox( self._current_selection_tags_list )
self._current_selection_tags_box.SetTagsBox( self._current_selection_tags_list )
QP.AddToLayout( sizer, tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )
QP.AddToLayout( sizer, self._current_selection_tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )
def CheckAbleToClose( self ):
@ -4773,11 +4773,20 @@ class ManagementPanelQuery( ManagementPanel ):
def _MakeCurrentSelectionTagsBox( self, sizer ):
tags_box = ClientGUIListBoxes.StaticBoxSorterForListBoxTags( self, 'selection tags' )
self._current_selection_tags_box = ClientGUIListBoxes.StaticBoxSorterForListBoxTags( self, 'selection tags' )
if self._search_enabled:
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( tags_box, self._management_controller, self._page_key, tag_autocomplete = self._tag_autocomplete )
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( self._current_selection_tags_box, self._management_controller, self._page_key, tag_autocomplete = self._tag_autocomplete )
else:
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( self._current_selection_tags_box, self._management_controller, self._page_key )
self._current_selection_tags_box.SetTagsBox( self._current_selection_tags_list )
if self._search_enabled:
file_search_context = self._management_controller.GetVariable( 'file_search_context' )
@ -4785,18 +4794,12 @@ class ManagementPanelQuery( ManagementPanel ):
tag_service_key = file_search_context.GetTagSearchContext().service_key
self._current_selection_tags_list.SetTagServiceKey( tag_service_key )
self._current_selection_tags_box.SetTagServiceKey( tag_service_key )
self._tag_autocomplete.tagServiceChanged.connect( self._current_selection_tags_list.SetTagServiceKey )
else:
self._current_selection_tags_list = ListBoxTagsMediaManagementPanel( tags_box, self._management_controller, self._page_key )
self._tag_autocomplete.tagServiceChanged.connect( self._current_selection_tags_box.SetTagServiceKey )
tags_box.SetTagsBox( self._current_selection_tags_list )
QP.AddToLayout( sizer, tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )
QP.AddToLayout( sizer, self._current_selection_tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )
def _RefreshQuery( self ):

View File

@ -2342,7 +2342,9 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
for page in self._GetPages():
page_info_dict = page.GetSessionAPIInfoDict( is_selected = is_selected )
page_is_selected = is_selected and page == current_page
page_info_dict = page.GetSessionAPIInfoDict( is_selected = page_is_selected )
my_pages_list.append( page_info_dict )

View File

@ -1663,10 +1663,10 @@ class StaticBox( QW.QFrame ):
title_font = QG.QFont( normal_font_family, int( normal_font_size ), QG.QFont.Bold )
title_text = QW.QLabel( title, self )
title_text.setFont( title_font )
self._title_st = BetterStaticText( self, label = title )
self._title_st.setFont( title_font )
QP.AddToLayout( self._sizer, title_text, CC.FLAGS_CENTER )
QP.AddToLayout( self._sizer, self._title_st, CC.FLAGS_CENTER )
self.setLayout( self._sizer )
@ -1682,6 +1682,11 @@ class StaticBox( QW.QFrame ):
self.layout().addSpacerItem( self._spacer )
def SetTitle( self, title ):
self._title_st.setText( title )
class RadioBox( StaticBox ):
def __init__( self, parent, title, choice_pairs, initial_index = None ):

View File

@ -387,7 +387,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._gallery_seed_log.NotifyGallerySeedsUpdated( ( gallery_seed, ) )
return True
return
def CanRetryFailed( self ):
@ -804,7 +804,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
def REPEATINGWorkOnFiles( self ):
def CanDoFileWork( self ):
with self._lock:
@ -812,21 +812,61 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job.Cancel()
return
return False
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_pending = self._file_seed_cache.WorkToDo() and not files_paused
if files_paused:
return False
work_pending = self._file_seed_cache.WorkToDo()
if not work_pending:
return False
return self.CanDoNetworkWork()
def CanDoNetworkWork( self ):
with self._lock:
no_delays = HydrusData.TimeHasPassed( self._no_work_until )
if not no_delays:
return False
page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
if not page_shown:
return False
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = work_pending and no_delays and page_shown and network_engine_good
if not network_engine_good:
return False
return True
def REPEATINGWorkOnFiles( self ):
try:
while ok_to_work:
while self.CanDoFileWork():
try:
@ -839,24 +879,6 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
self._files_repeating_job.Cancel()
return
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_pending = self._file_seed_cache.WorkToDo() and not files_paused
no_delays = HydrusData.TimeHasPassed( self._no_work_until )
page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = work_pending and no_delays and page_shown and network_engine_good
finally:
@ -867,7 +889,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
def REPEATINGWorkOnGallery( self ):
def CanDoGalleryWork( self ):
with self._lock:
@ -875,22 +897,32 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
self._gallery_repeating_job.Cancel()
return
return False
gallery_paused = self._gallery_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
work_pending = self._gallery_seed_log.WorkToDo() and not gallery_paused
no_delays = HydrusData.TimeHasPassed( self._no_work_until )
page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
if gallery_paused:
return False
ok_to_work = work_pending and no_delays and page_shown and network_engine_good
work_pending = self._gallery_seed_log.WorkToDo()
if not work_pending:
return False
return self.CanDoNetworkWork()
def REPEATINGWorkOnGallery( self ):
try:
while ok_to_work:
while self.CanDoGalleryWork():
try:
@ -905,25 +937,6 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
self._gallery_repeating_job.Cancel()
return
gallery_paused = self._gallery_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
work_pending = self._gallery_seed_log.WorkToDo() and not gallery_paused
no_delays = HydrusData.TimeHasPassed( self._no_work_until )
page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = work_pending and no_delays and page_shown and network_engine_good
finally:

View File

@ -302,7 +302,7 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job.SetThreadSlotType( 'misc' )
def REPEATINGWorkOnFiles( self, page_key ):
def CanDoFileWork( self, page_key ):
with self._lock:
@ -310,15 +310,37 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job.Cancel()
return
return False
paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_to_do = self._file_seed_cache.WorkToDo() and not ( paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
if paused:
return False
work_to_do = self._file_seed_cache.WorkToDo()
if not work_to_do:
return False
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
if not page_shown:
return False
while work_to_do:
return True
def REPEATINGWorkOnFiles( self, page_key ):
while self.CanDoFileWork( page_key ):
try:
@ -331,20 +353,6 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
self._files_repeating_job.Cancel()
return
paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_to_do = self._file_seed_cache.WorkToDo() and not ( paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_HDD_IMPORT ] = HDDImport

View File

@ -572,7 +572,7 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
def REPEATINGWorkOnFiles( self, page_key ):
def CanDoFileWork( self, page_key ):
with self._lock:
@ -580,18 +580,52 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job.Cancel()
return
return False
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_to_do = self._file_seed_cache.WorkToDo() and not ( files_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
if files_paused:
return False
ok_to_work = work_to_do and network_engine_good
work_to_do = self._file_seed_cache.WorkToDo()
if not work_to_do:
return False
while ok_to_work:
return self.CanDoNetworkWork( page_key )
def CanDoNetworkWork( self, page_key ):
with self._lock:
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
if not page_shown:
return False
network_engine_good = not HG.client_controller.network_engine.IsBusy()
if not network_engine_good:
return False
return True
def REPEATINGWorkOnFiles( self, page_key ):
while self.CanDoFileWork( page_key ):
try:
@ -604,26 +638,9 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
self._files_repeating_job.Cancel()
return
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
work_to_do = self._file_seed_cache.WorkToDo() and not ( files_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = work_to_do and network_engine_good
def REPEATINGWorkOnQueue( self, page_key ):
def CanDoQueueWork( self, page_key ):
with self._lock:
@ -631,19 +648,23 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
self._queue_repeating_job.Cancel()
return
return False
queue_paused = self._queue_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
queue_good = not queue_paused
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = queue_good and page_shown and network_engine_good
if queue_paused:
return False
while ok_to_work:
return self.CanDoNetworkWork( page_key )
def REPEATINGWorkOnQueue( self, page_key ):
while self.CanDoQueueWork( page_key ):
try:
@ -665,24 +686,6 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
with self._lock:
if ClientImporting.PageImporterShouldStopWorking( page_key ):
self._queue_repeating_job.Cancel()
return
queue_paused = self._queue_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
queue_good = not queue_paused
page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
network_engine_good = not HG.client_controller.network_engine.IsBusy()
ok_to_work = queue_good and page_shown and network_engine_good
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SIMPLE_DOWNLOADER_IMPORT ] = SimpleDownloaderImport
@@ -1104,7 +1107,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
- def REPEATINGWorkOnFiles( self, page_key ):
+ def CanDoFileWork( self, page_key ):
with self._lock:
@@ -1112,18 +1115,52 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
self._files_repeating_job.Cancel()
- return
+ return False
files_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
- work_to_do = self._file_seed_cache.WorkToDo() and not ( files_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
+ if files_paused:
+ return False
- ok_to_work = work_to_do and network_engine_good
+ work_to_do = self._file_seed_cache.WorkToDo()
+ if not work_to_do:
+ return False
- while ok_to_work:
+ return self.CanDoNetworkWork( page_key )
+ def CanDoNetworkWork( self, page_key ):
+ with self._lock:
+ page_shown = not HG.client_controller.PageClosedButNotDestroyed( page_key )
+ if not page_shown:
+ return False
+ network_engine_good = not HG.client_controller.network_engine.IsBusy()
+ if not network_engine_good:
+ return False
+ return True
+ def REPEATINGWorkOnFiles( self, page_key ):
+ while self.CanDoFileWork( page_key ):
try:
@@ -1136,26 +1173,9 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
- with self._lock:
- if ClientImporting.PageImporterShouldStopWorking( page_key ):
- self._files_repeating_job.Cancel()
- return
- files_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
- work_to_do = self._file_seed_cache.WorkToDo() and not ( files_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
- ok_to_work = work_to_do and network_engine_good
- def REPEATINGWorkOnGallery( self, page_key ):
+ def CanDoGalleryWork( self, page_key ):
with self._lock:
@@ -1163,18 +1183,30 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
self._gallery_repeating_job.Cancel()
- return
+ return False
gallery_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
- work_to_do = self._gallery_seed_log.WorkToDo() and not ( gallery_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
+ if gallery_paused:
+ return False
- ok_to_work = work_to_do and network_engine_good
+ work_to_do = self._gallery_seed_log.WorkToDo()
+ if not work_to_do:
+ return False
- while ok_to_work:
+ return self.CanDoNetworkWork( page_key )
+ def REPEATINGWorkOnGallery( self, page_key ):
+ while self.CanDoGalleryWork( page_key ):
try:
@@ -1187,23 +1219,6 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
- with self._lock:
- if ClientImporting.PageImporterShouldStopWorking( page_key ):
- self._gallery_repeating_job.Cancel()
- return
- gallery_paused = self._paused or HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' )
- work_to_do = self._gallery_seed_log.WorkToDo() and not ( gallery_paused or HG.client_controller.PageClosedButNotDestroyed( page_key ) )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
- ok_to_work = work_to_do and network_engine_good
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_URLS_IMPORT ] = URLsImport

View File

@@ -1553,7 +1553,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
- def REPEATINGWorkOnFiles( self ):
+ def CanDoFileWork( self ):
with self._lock:
@@ -1565,15 +1565,55 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
- work_pending = self._file_seed_cache.WorkToDo() and not files_paused
- no_delays = HydrusData.TimeHasPassed( self._no_work_until )
- page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
- ok_to_work = work_pending and no_delays and page_shown and network_engine_good
+ if files_paused:
+ return False
+ work_to_do = self._file_seed_cache.WorkToDo()
+ if not work_to_do:
+ return False
- while ok_to_work:
+ return self.CanDoNetworkWork()
+ def CanDoNetworkWork( self ):
+ with self._lock:
+ no_delays = HydrusData.TimeHasPassed( self._no_work_until )
+ if not no_delays:
+ return False
+ page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
+ if not page_shown:
+ return False
+ network_engine_good = not HG.client_controller.network_engine.IsBusy()
+ if not network_engine_good:
+ return False
+ return True
+ def REPEATINGWorkOnFiles( self ):
+ while self.CanDoFileWork():
try:
@@ -1586,27 +1626,9 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
HydrusData.ShowException( e )
- with self._lock:
- if ClientImporting.PageImporterShouldStopWorking( self._page_key ):
- self._files_repeating_job.Cancel()
- return
- files_paused = self._files_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' )
- work_pending = self._file_seed_cache.WorkToDo() and not files_paused
- no_delays = HydrusData.TimeHasPassed( self._no_work_until )
- page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
- ok_to_work = work_pending and no_delays and page_shown and network_engine_good
- def REPEATINGWorkOnChecker( self ):
+ def CanDoCheckerWork( self ):
with self._lock:
@@ -1629,16 +1651,32 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
checking_paused = self._checking_paused or HG.client_controller.new_options.GetBoolean( 'pause_all_watcher_checkers' )
- able_to_check = self._checking_status == ClientImporting.CHECKER_STATUS_OK and self._HasURL() and not checking_paused
- check_due = HydrusData.TimeHasPassed( self._next_check_time )
- no_delays = HydrusData.TimeHasPassed( self._no_work_until )
- page_shown = not HG.client_controller.PageClosedButNotDestroyed( self._page_key )
- network_engine_good = not HG.client_controller.network_engine.IsBusy()
+ if checking_paused:
+ return False
- time_to_check = able_to_check and check_due and no_delays and page_shown and network_engine_good
+ able_to_check = self._checking_status == ClientImporting.CHECKER_STATUS_OK and self._HasURL()
+ if not able_to_check:
+ return False
+ check_due = HydrusData.TimeHasPassed( self._next_check_time )
+ if not check_due:
+ return False
- if time_to_check:
+ return self.CanDoNetworkWork()
+ def REPEATINGWorkOnChecker( self ):
+ if self.CanDoCheckerWork():
try:
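Across SimpleDownloaderImport, URLsImport and WatcherImport above, the refactor is the same: the monolithic `ok_to_work = a and b and c` recomputation at the top and tail of each REPEATINGWorkOn* loop is split into small CanDo*Work predicates that bail out early, with the shared page/network checks collected in CanDoNetworkWork. A minimal runnable sketch of that shape; the class, its attributes and the print calls are illustrative stand-ins, not the real importers:

# Sketch of the CanDo*Work guard-chain pattern these hunks introduce.
# 'SketchImporter' and its state are invented for illustration.
import threading

class SketchImporter( object ):
    
    def __init__( self ):
        
        self._lock = threading.Lock()
        self._paused = False
        self._work_queue = [ 'file_a', 'file_b' ]
        
    def CanDoNetworkWork( self ):
        
        # the real version also tests page visibility and network engine load
        return True
        
    def CanDoFileWork( self ):
        
        with self._lock:
            
            if self._paused:
                
                return False # cheapest local check first
                
            if len( self._work_queue ) == 0:
                
                return False # nothing to do
                
        return self.CanDoNetworkWork() # shared checks last
        
    def REPEATINGWorkOnFiles( self ):
        
        # the whole chain is re-tested before every unit of work
        while self.CanDoFileWork():
            
            print( 'working on {}'.format( self._work_queue.pop( 0 ) ) )

SketchImporter().REPEATINGWorkOnFiles()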

View File

@@ -43,6 +43,32 @@ CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain', 'file_service_name', 'tag_
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple', 'file_sort_asc' }
CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' }
+ def CheckHashLength( hashes, hash_type = 'sha256' ):
+ hash_types_to_length = {
+ 'sha256' : 32,
+ 'md5' : 16,
+ 'sha1' : 20,
+ 'sha512' : 64
+ }
+ hash_length = hash_types_to_length[ hash_type ]
+ for hash in hashes:
+ if len( hash ) != hash_length:
+ raise HydrusExceptions.BadRequestException(
+ 'Sorry, one of the given hashes was the wrong length! {} hashes should be {} bytes long, but {} is {} bytes long!'.format(
+ hash_type,
+ hash_length,
+ hash.hex(),
+ len( hash )
+ )
+ )
def ParseLocalBooruGETArgs( requests_args ):
args = HydrusNetworkVariableHandling.ParseTwistedRequestGETArgs( requests_args, LOCAL_BOORU_INT_PARAMS, LOCAL_BOORU_BYTE_PARAMS, LOCAL_BOORU_STRING_PARAMS, LOCAL_BOORU_JSON_PARAMS, LOCAL_BOORU_JSON_BYTE_LIST_PARAMS )
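The new CheckHashLength helper above is what turns a malformed hash into a clean 400 at the API boundary instead of a confusing downstream error, as the 'borked hashes' unit test later in this commit confirms. A standalone sketch of its behaviour; ValueError stands in for HydrusExceptions.BadRequestException so it runs outside the client:

# Standalone sketch of the new hash length validation; ValueError stands in
# for HydrusExceptions.BadRequestException here.
def check_hash_length( hashes, hash_type = 'sha256' ):
    
    hash_types_to_length = { 'sha256' : 32, 'md5' : 16, 'sha1' : 20, 'sha512' : 64 }
    
    hash_length = hash_types_to_length[ hash_type ]
    
    for hash_bytes in hashes:
        
        if len( hash_bytes ) != hash_length:
            
            raise ValueError( '{} hashes should be {} bytes long, but {} is {} bytes long!'.format( hash_type, hash_length, hash_bytes.hex(), len( hash_bytes ) ) )

check_hash_length( [ bytes( 32 ) ] ) # fine
check_hash_length( [ bytes.fromhex( 'deadbeef' ) ] ) # raises: 4 bytes, not 32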
@@ -199,12 +225,43 @@ def ParseClientAPISearchPredicates( request ):
- tags = request.parsed_request_args[ 'tags' ]
system_inbox = request.parsed_request_args[ 'system_inbox' ]
system_archive = request.parsed_request_args[ 'system_archive' ]
- system_predicate_strings = [ tag for tag in tags if tag.startswith( 'system:' ) ]
- tags = [ tag for tag in tags if not tag.startswith( 'system:' ) ]
+ tags = request.parsed_request_args[ 'tags' ]
+ predicates = ConvertTagListToPredicates( request, tags )
+ if len( predicates ) == 0:
+ try:
+ request.client_api_permissions.CheckCanSeeAllFiles()
+ except HydrusExceptions.InsufficientCredentialsException:
+ raise HydrusExceptions.InsufficientCredentialsException( 'Sorry, you do not have permission to see all files on this client. Please add a regular tag to your search.' )
+ if system_inbox:
+ predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_INBOX ) )
+ elif system_archive:
+ predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_ARCHIVE ) )
+ return predicates
+ def ConvertTagListToPredicates( request, tag_list, do_permission_check = True ) -> list:
+ or_tag_lists = [ tag for tag in tag_list if isinstance( tag, list ) ]
+ tag_strings = [ tag for tag in tag_list if isinstance( tag, str ) ]
+ system_predicate_strings = [ tag for tag in tag_strings if tag.startswith( 'system:' ) ]
+ tags = [ tag for tag in tag_strings if not tag.startswith( 'system:' ) ]
negated_tags = [ tag for tag in tags if tag.startswith( '-' ) ]
tags = [ tag for tag in tags if not tag.startswith( '-' ) ]
@@ -212,16 +269,67 @@ def ParseClientAPISearchPredicates( request ):
negated_tags = HydrusTags.CleanTags( negated_tags )
tags = HydrusTags.CleanTags( tags )
- # check positive tags, not negative!
- request.client_api_permissions.CheckCanSearchTags( tags )
- search_tags = [ ( True, tag ) for tag in tags ]
- search_tags.extend( ( ( False, tag ) for tag in negated_tags ) )
+ if do_permission_check:
+ if len( tags ) == 0:
+ if len( negated_tags ) > 0:
+ try:
+ request.client_api_permissions.CheckCanSeeAllFiles()
+ except HydrusExceptions.InsufficientCredentialsException:
+ raise HydrusExceptions.InsufficientCredentialsException( 'Sorry, if you want to search negated tags without regular tags, you need permission to search everything!' )
+ if len( system_predicate_strings ) > 0:
+ try:
+ request.client_api_permissions.CheckCanSeeAllFiles()
+ except HydrusExceptions.InsufficientCredentialsException:
+ raise HydrusExceptions.InsufficientCredentialsException( 'Sorry, if you want to search system predicates without regular tags, you need permission to search everything!' )
+ if len( or_tag_lists ) > 0:
+ try:
+ request.client_api_permissions.CheckCanSeeAllFiles()
+ except HydrusExceptions.InsufficientCredentialsException:
+ raise HydrusExceptions.InsufficientCredentialsException( 'Sorry, if you want to search OR predicates without regular tags, you need permission to search everything!' )
+ else:
+ # check positive tags, not negative!
+ request.client_api_permissions.CheckCanSearchTags( tags )
predicates = []
+ for or_tag_list in or_tag_lists:
+ or_preds = ConvertTagListToPredicates( request, or_tag_list, do_permission_check = False )
+ predicates.append( ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_OR_CONTAINER, or_preds ) )
predicates.extend( ClientSearchParseSystemPredicates.ParseSystemPredicateStringsToPredicates( system_predicate_strings ) )
+ search_tags = [ ( True, tag ) for tag in tags ]
+ search_tags.extend( ( ( False, tag ) for tag in negated_tags ) )
for ( inclusive, tag ) in search_tags:
( namespace, subtag ) = HydrusTags.SplitTag( tag )
@@ -246,15 +354,6 @@ def ParseClientAPISearchPredicates( request ):
predicates.append( ClientSearch.Predicate( predicate_type = predicate_type, value = tag, inclusive = inclusive ) )
- if system_inbox:
- predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_INBOX ) )
- elif system_archive:
- predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_ARCHIVE ) )
return predicates
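The practical upshot of ConvertTagListToPredicates recursing into sublists is that the Client API's 'tags' parameter can now nest lists as OR groups alongside plain, negated and system tags, as the unit tests later in this commit exercise. A hedged sketch of a search using it; the endpoint and default port are the usual Client API ones, the access key is a placeholder, and note that per the new checks above, OR groups or system predicates without any regular tag require the 'see all files' permission:

# Sketch of a Client API search using the new nested-list OR syntax.
# Assumes a client listening on the default port with a suitable access key.
import json
import urllib.parse
import urllib.request

# green AND ( red OR blue ) AND system:archive, excluding 'video'
tags = [ 'green', [ 'red', 'blue' ], 'system:archive', '-video' ]

query = urllib.parse.urlencode( { 'tags' : json.dumps( tags ) } )

req = urllib.request.Request(
    'http://127.0.0.1:45869/get_files/search_files?' + query,
    headers = { 'Hydrus-Client-API-Access-Key' : 'YOUR_ACCESS_KEY_HERE' }
)

with urllib.request.urlopen( req ) as response:
    
    print( json.loads( response.read() ) ) # a dict holding the matching file_ids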
class HydrusResourceBooru( HydrusServerResources.HydrusResource ):
@@ -932,6 +1031,8 @@ class HydrusResourceClientAPIRestrictedAddFilesArchiveFiles( HydrusResourceClien
hashes.update( more_hashes )
+ CheckHashLength( hashes )
content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ARCHIVE, hashes )
service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ content_update ] }
@@ -966,6 +1067,8 @@ class HydrusResourceClientAPIRestrictedAddFilesDeleteFiles( HydrusResourceClient
hashes.update( more_hashes )
+ CheckHashLength( hashes )
# expand this to take file service and reason
content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, hashes )
@@ -1002,6 +1105,8 @@ class HydrusResourceClientAPIRestrictedAddFilesUnarchiveFiles( HydrusResourceCli
hashes.update( more_hashes )
+ CheckHashLength( hashes )
content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_INBOX, hashes )
service_keys_to_content_updates = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : [ content_update ] }
@@ -1036,6 +1141,8 @@ class HydrusResourceClientAPIRestrictedAddFilesUndeleteFiles( HydrusResourceClie
hashes.update( more_hashes )
+ CheckHashLength( hashes )
# expand this to take file service, if and when we move to multiple trashes or whatever
content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_UNDELETE, hashes )
@@ -1079,6 +1186,8 @@ class HydrusResourceClientAPIRestrictedAddTagsAddTags( HydrusResourceClientAPIRe
hashes.update( more_hashes )
+ CheckHashLength( hashes )
if len( hashes ) == 0:
raise HydrusExceptions.BadRequestException( 'There were no hashes given!' )
@@ -1364,6 +1473,8 @@ class HydrusResourceClientAPIRestrictedAddURLsAssociateURL( HydrusResourceClient
applicable_hashes.extend( hashes )
+ CheckHashLength( applicable_hashes )
if len( applicable_hashes ) == 0:
raise HydrusExceptions.BadRequestException( 'Did not find any hashes to apply the urls to!' )
@@ -1793,6 +1904,8 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien
hashes = request.parsed_request_args.GetValue( 'hashes', list, expected_list_type = bytes )
+ CheckHashLength( hashes )
if only_return_identifiers:
file_ids_to_hashes = HG.client_controller.Read( 'hash_ids_to_hashes', hashes = hashes )
@@ -2246,6 +2359,8 @@ class HydrusResourceClientAPIRestrictedManagePagesAddFiles( HydrusResourceClient
hashes = request.parsed_request_args.GetValue( 'hashes', list, expected_list_type = bytes )
+ CheckHashLength( hashes )
media_results = HG.client_controller.Read( 'media_results', hashes )
elif 'file_ids' in request.parsed_request_args:

View File

@@ -18,7 +18,7 @@ job_status_str_lookup = {}
job_status_str_lookup[ JOB_STATUS_AWAITING_VALIDITY ] = 'waiting for validation'
job_status_str_lookup[ JOB_STATUS_AWAITING_BANDWIDTH ] = 'waiting for bandwidth'
job_status_str_lookup[ JOB_STATUS_AWAITING_LOGIN ] = 'waiting for login'
- job_status_str_lookup[ JOB_STATUS_AWAITING_SLOT ] = 'waiting for slot'
+ job_status_str_lookup[ JOB_STATUS_AWAITING_SLOT ] = 'waiting for free work slot'
job_status_str_lookup[ JOB_STATUS_RUNNING ] = 'running'
class NetworkEngine( object ):
@@ -370,7 +370,7 @@ class NetworkEngine( object ):
elif self._active_domains_counter[ job.GetSecondLevelDomain() ] >= self.MAX_JOBS_PER_DOMAIN:
- job.SetStatus( 'waiting for a slot on this domain' )
+ job.SetStatus( 'waiting for other jobs on this domain to finish' )
job.Sleep( 2 )
@@ -402,7 +402,7 @@ class NetworkEngine( object ):
else:
- job.SetStatus( 'waiting for a slot\u2026' )
+ job.SetStatus( 'waiting for other jobs to finish\u2026' )
return True

View File

@@ -1339,19 +1339,20 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):
return True
- number_of_errors = HG.client_controller.new_options.GetInteger( 'domain_network_infrastructure_error_number' )
- error_time_delta = HG.client_controller.new_options.GetInteger( 'domain_network_infrastructure_error_time_delta' )
- if number_of_errors == 0:
- return True
+ # this will become flexible and customisable when I have domain profiles/status/ui
+ # also should extend it to 'global', so if multiple domains are having trouble, we maybe assume the whole connection is down? it would really be nicer to have a better sockets-level check there
if domain in self._second_level_domains_to_network_infrastructure_errors:
+ number_of_errors = HG.client_controller.new_options.GetInteger( 'domain_network_infrastructure_error_number' )
+ if number_of_errors == 0:
+ return True
+ error_time_delta = HG.client_controller.new_options.GetInteger( 'domain_network_infrastructure_error_time_delta' )
network_infrastructure_errors = self._second_level_domains_to_network_infrastructure_errors[ domain ]
network_infrastructure_errors = [ timestamp for timestamp in network_infrastructure_errors if not HydrusData.TimeHasPassed( timestamp + error_time_delta ) ]
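The comprehension above keeps only the error timestamps still inside the configured window, so a domain 'recovers' automatically once its recent failures age out. A standalone sketch of the same filter; time.time() stands in for HydrusData.GetNow, and the numbers are illustrative:

# Sketch of the sliding error window used for domain infrastructure errors.
import time

def filter_recent_errors( error_timestamps, error_time_delta ):
    
    now = time.time()
    
    # an error still counts while timestamp + delta lies in the future
    return [ t for t in error_timestamps if t + error_time_delta > now ]

errors = [ time.time() - 5000, time.time() - 50 ]

print( len( filter_recent_errors( errors, 3600 ) ) ) # 1, the older error aged out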

View File

@@ -182,7 +182,7 @@ class NetworkJob( object ):
self._connection_error_wake_time = 0
self._serverside_bandwidth_wake_time = 0
- self._wake_time = 0
+ self._wake_time_float = 0.0
self._content_type = None
@@ -203,8 +203,10 @@ class NetworkJob( object ):
self._gallery_token_name = None
self._gallery_token_consumed = False
+ self._last_gallery_token_estimate = 0
self._bandwidth_manual_override = False
self._bandwidth_manual_override_delayed_timestamp = None
+ self._last_bandwidth_time_estimate = 0
self._last_time_ongoing_bandwidth_failed = 0
@@ -556,9 +558,9 @@ class NetworkJob( object ):
self._is_done_event.set()
- def _Sleep( self, seconds ):
+ def _Sleep( self, seconds_float ):
- self._wake_time = HydrusData.GetNow() + seconds
+ self._wake_time_float = HydrusData.GetNowFloat() + seconds_float
def _SolveCloudFlare( self, response ):
@@ -981,7 +983,7 @@ class NetworkJob( object ):
with self._lock:
- return not HydrusData.TimeHasPassed( self._wake_time )
+ return not HydrusData.TimeHasPassedFloat( self._wake_time_float )
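The int-to-float switch matters because the sleeps later in this file drop from 1s to 0.8s: with integer-second GetNow()/TimeHasPassed, a sub-second wake time loses its fractional part. A sketch of the semantics, assuming GetNowFloat and TimeHasPassedFloat are thin wrappers over time.time():

# Assumed shims for HydrusData.GetNowFloat / TimeHasPassedFloat.
import time

def get_now_float():
    
    return time.time()

def time_has_passed_float( timestamp_float ):
    
    return get_now_float() > timestamp_float

wake_time_float = get_now_float() + 0.8 # a 0.8s sleep keeps its precision

print( time_has_passed_float( wake_time_float ) ) # False for the next ~0.8s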
@@ -1055,13 +1057,13 @@ class NetworkJob( object ):
self._bandwidth_manual_override = True
- self._wake_time = 0
+ self._wake_time_float = 0.0
else:
self._bandwidth_manual_override_delayed_timestamp = HydrusData.GetNow() + delay
- self._wake_time = min( self._wake_time, self._bandwidth_manual_override_delayed_timestamp + 1 )
+ self._wake_time_float = min( self._wake_time_float, self._bandwidth_manual_override_delayed_timestamp + 1.0 )
@@ -1088,7 +1090,7 @@ class NetworkJob( object ):
self._gallery_token_consumed = True
- self._wake_time = 0
+ self._wake_time_float = 0.0
@@ -1417,15 +1419,24 @@ class NetworkJob( object ):
if consumed:
- self._status_text = 'slot consumed, starting soon'
+ self._status_text = 'starting soon'
self._gallery_token_consumed = True
else:
- self._status_text = 'waiting for a ' + self._gallery_token_name + ' slot: next ' + ClientData.TimestampToPrettyTimeDelta( next_timestamp, just_now_threshold = 1 )
+ if HydrusData.TimeHasPassed( self._last_gallery_token_estimate ) and not HydrusData.TimeHasPassed( self._last_gallery_token_estimate + 3 ):
+ self._status_text = 'a different {} got the chance to work'.format( self._gallery_token_name )
+ else:
+ self._status_text = 'waiting to start: {}'.format( ClientData.TimestampToPrettyTimeDelta( next_timestamp, just_now_threshold = 2, just_now_string = 'checking', no_prefix = True ) )
+ self._last_gallery_token_estimate = next_timestamp
- self._Sleep( 1 )
+ self._Sleep( 0.8 )
return False
@@ -1474,7 +1485,18 @@ class NetworkJob( object ):
waiting_duration = bandwidth_waiting_duration
- waiting_str = 'bandwidth free ' + ClientData.TimestampToPrettyTimeDelta( HydrusData.GetNow() + waiting_duration, just_now_string = 'imminently', just_now_threshold = just_now_threshold )
+ bandwidth_time_estimate = HydrusData.GetNow() + waiting_duration
+ if HydrusData.TimeHasPassed( self._last_bandwidth_time_estimate ) and not HydrusData.TimeHasPassed( self._last_bandwidth_time_estimate + 3 ):
+ waiting_str = 'a different network job got the bandwidth'
+ else:
+ waiting_str = 'bandwidth free ' + ClientData.TimestampToPrettyTimeDelta( bandwidth_time_estimate, just_now_string = 'imminently', just_now_threshold = just_now_threshold )
+ self._last_bandwidth_time_estimate = bandwidth_time_estimate
waiting_str += '\u2026 (' + bandwidth_network_context.ToHumanString() + ')'
@@ -1491,7 +1513,7 @@ class NetworkJob( object ):
elif waiting_duration > 10:
- self._Sleep( 1 )
+ self._Sleep( 0.8 )

View File

@@ -81,8 +81,8 @@ options = {}
# Misc
NETWORK_VERSION = 20
- SOFTWARE_VERSION = 452
- CLIENT_API_VERSION = 19
+ SOFTWARE_VERSION = 453
+ CLIENT_API_VERSION = 20
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@@ -914,16 +914,12 @@ class HydrusController( object ):
def WaitUntilPubSubsEmpty( self ):
- while True:
+ while self.CurrentlyPubSubbing():
if HG.model_shutdown:
raise HydrusExceptions.ShutdownException( 'Application shutting down!' )
- elif not self.CurrentlyPubSubbing():
- return
else:
time.sleep( 0.00001 )

View File

@@ -609,55 +609,62 @@ class HydrusResource( Resource ):
try:
+ e = failure.value
+ if isinstance( e, HydrusExceptions.DBException ):
+ e = e.db_e # could well be a DataException
try: self._CleanUpTempFile( request )
except: pass
default_mime = HC.TEXT_HTML
default_encoding = str
- if failure.type == HydrusExceptions.BadRequestException:
+ if isinstance( e, HydrusExceptions.BadRequestException ):
- response_context = ResponseContext( 400, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 400, mime = default_mime, body = default_encoding( e ) )
- elif failure.type in ( HydrusExceptions.MissingCredentialsException, HydrusExceptions.DoesNotSupportCORSException ):
+ elif isinstance( e, ( HydrusExceptions.MissingCredentialsException, HydrusExceptions.DoesNotSupportCORSException ) ):
- response_context = ResponseContext( 401, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 401, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.InsufficientCredentialsException:
+ elif isinstance( e, HydrusExceptions.InsufficientCredentialsException ):
- response_context = ResponseContext( 403, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 403, mime = default_mime, body = default_encoding( e ) )
- elif failure.type in ( HydrusExceptions.NotFoundException, HydrusExceptions.DataMissing, HydrusExceptions.FileMissingException ):
+ elif isinstance( e, ( HydrusExceptions.NotFoundException, HydrusExceptions.DataMissing, HydrusExceptions.FileMissingException ) ):
- response_context = ResponseContext( 404, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 404, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.ConflictException:
+ elif isinstance( e, HydrusExceptions.ConflictException ):
- response_context = ResponseContext( 409, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 409, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.RangeNotSatisfiableException:
+ elif isinstance( e, HydrusExceptions.RangeNotSatisfiableException ):
- response_context = ResponseContext( 416, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 416, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.SessionException:
+ elif isinstance( e, HydrusExceptions.SessionException ):
- response_context = ResponseContext( 419, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 419, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.NetworkVersionException:
+ elif isinstance( e, HydrusExceptions.NetworkVersionException ):
- response_context = ResponseContext( 426, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 426, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.ServerBusyException:
+ elif isinstance( e, HydrusExceptions.ServerBusyException ):
- response_context = ResponseContext( 503, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 503, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.BandwidthException:
+ elif isinstance( e, HydrusExceptions.BandwidthException ):
- response_context = ResponseContext( 509, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 509, mime = default_mime, body = default_encoding( e ) )
- elif failure.type == HydrusExceptions.ServerException:
+ elif isinstance( e, HydrusExceptions.ServerException ):
- response_context = ResponseContext( 500, mime = default_mime, body = default_encoding( failure.value ) )
+ response_context = ResponseContext( 500, mime = default_mime, body = default_encoding( e ) )
else:

View File

@@ -143,7 +143,7 @@ SYSTEM_PREDICATES = {
'has duration': (Predicate.HAS_DURATION, None, None, None),
'no duration': (Predicate.NO_DURATION, None, None, None),
'(is the )?best quality( file)? of( its)?( duplicate)? group': (Predicate.BEST_QUALITY_OF_GROUP, None, None, None),
- '((is )?not)|(isn\'t)( the)? best quality( file)? of( its)?( duplicate)? group': (Predicate.NOT_BEST_QUALITY_OF_GROUP, None, None, None),
+ '(((is )?not)|(isn\'t))( the)? best quality( file)? of( its)?( duplicate)? group': (Predicate.NOT_BEST_QUALITY_OF_GROUP, None, None, None),
'has audio': (Predicate.HAS_AUDIO, None, None, None),
'no audio': (Predicate.NO_AUDIO, None, None, None),
'has tags': (Predicate.HAS_TAGS, None, None, None),
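The extra pair of parentheses above is an alternation-precedence fix: in the old pattern the top-level '|' split the whole expression, so the '( the)? best quality…' tail only ever attached to the isn't branch, and 'is not the best quality…' failed to parse. A quick sketch of the difference, assuming the parser anchors its patterns in the style of re.fullmatch:

# Sketch of the precedence bug; fullmatch-style anchoring is an assumption.
import re

old = r"((is )?not)|(isn't)( the)? best quality( file)? of( its)?( duplicate)? group"
new = r"(((is )?not)|(isn't))( the)? best quality( file)? of( its)?( duplicate)? group"

s = 'is not the best quality file of its duplicate group'

print( re.fullmatch( old, s ) is None ) # True, the old pattern misses it
print( re.fullmatch( new, s ) is not None ) # True, the outer group fixes it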

View File

@@ -2167,6 +2167,34 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( set( predicates ), set( expected_predicates ) )
#
+ pretend_request = PretendRequest()
+ pretend_request.parsed_request_args = { 'tags' : [ 'green', [ 'red', 'blue' ], 'system:archive' ] }
+ pretend_request.client_api_permissions = set_up_permissions[ 'search_green_files' ]
+ predicates = ClientLocalServerResources.ParseClientAPISearchPredicates( pretend_request )
+ expected_predicates = []
+ expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_TAG, value = 'green' ) )
+ expected_predicates.append(
+ ClientSearch.Predicate(
+ predicate_type = ClientSearch.PREDICATE_TYPE_OR_CONTAINER,
+ value = [
+ ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_TAG, value = 'red' ),
+ ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_TAG, value = 'blue' )
+ ]
+ )
+ )
+ expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_ARCHIVE ) )
+ self.assertEqual( { pred for pred in predicates if pred.GetType() != ClientSearch.PREDICATE_TYPE_OR_CONTAINER }, { pred for pred in expected_predicates if pred.GetType() != ClientSearch.PREDICATE_TYPE_OR_CONTAINER } )
+ self.assertEqual( { frozenset( pred.GetValue() ) for pred in predicates if pred.GetType() == ClientSearch.PREDICATE_TYPE_OR_CONTAINER }, { frozenset( pred.GetValue() ) for pred in expected_predicates if pred.GetType() == ClientSearch.PREDICATE_TYPE_OR_CONTAINER } )
def _test_file_metadata( self, connection, set_up_permissions ):
@@ -2417,6 +2445,18 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( d, expected_metadata_result )
+ # fails on borked hashes
+ path = '/get_files/file_metadata?hashes={}'.format( urllib.parse.quote( json.dumps( [ 'deadbeef' ] ) ) )
+ connection.request( 'GET', path, headers = headers )
+ response = connection.getresponse()
+ data = response.read()
+ self.assertEqual( response.status, 400 )
# metadata from hashes with detailed url info
path = '/get_files/file_metadata?hashes={}&detailed_url_information=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
@@ -2435,6 +2475,29 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( d, expected_detailed_known_urls_metadata_result )
+ # failure on missing file_ids
+ HG.test_controller.SetRead( 'media_results_from_ids', HydrusExceptions.DataMissing( 'test missing' ) )
+ api_permissions = set_up_permissions[ 'everything' ]
+ access_key_hex = api_permissions.GetAccessKey().hex()
+ headers = { 'Hydrus-Client-API-Access-Key' : access_key_hex }
+ path = '/get_files/file_metadata?file_ids={}'.format( urllib.parse.quote( json.dumps( [ 123456 ] ) ) )
+ connection.request( 'GET', path, headers = headers )
+ response = connection.getresponse()
+ data = response.read()
+ text = str( data, 'utf-8' )
+ self.assertEqual( response.status, 404 )
+ self.assertIn( 'test missing', text )
def _test_get_files( self, connection, set_up_permissions ):

View File

@@ -271,6 +271,11 @@ class TestClientDB( unittest.TestCase ):
file_query_ids = self._read( 'file_query_ids', search_context )
+ for file_query_id in file_query_ids:
+ self.assertEqual( type( file_query_id ), int )
self.assertEqual( len( file_query_ids ), result )
@@ -287,6 +292,11 @@ class TestClientDB( unittest.TestCase ):
file_query_ids = self._read( 'file_query_ids', search_context )
+ for file_query_id in file_query_ids:
+ self.assertEqual( type( file_query_id ), int )
self.assertEqual( len( file_query_ids ), result )
@@ -303,6 +313,11 @@ class TestClientDB( unittest.TestCase ):
file_query_ids = self._read( 'file_query_ids', search_context )
+ for file_query_id in file_query_ids:
+ self.assertEqual( type( file_query_id ), int )
self.assertEqual( len( file_query_ids ), result )
@@ -317,6 +332,11 @@ class TestClientDB( unittest.TestCase ):
file_query_ids = self._read( 'file_query_ids', search_context )
+ for file_query_id in file_query_ids:
+ self.assertEqual( type( file_query_id ), int )
self.assertEqual( len( file_query_ids ), result )

View File

@@ -551,9 +551,9 @@ class TestNetworkingJob( unittest.TestCase ):
self.assertTrue( job.IsAsleep() )
- five_secs_from_now = HydrusData.GetNow() + 5
+ five_secs_from_now = HydrusData.GetNowFloat() + 5
- with patch.object( HydrusData, 'GetNow', return_value = five_secs_from_now ):
+ with patch.object( HydrusData, 'GetNowFloat', return_value = five_secs_from_now ):
self.assertFalse( job.IsAsleep() )

View File

@@ -1827,6 +1827,7 @@ class TestTagObjects( unittest.TestCase ):
( 'system:no duration', "system:no duration" ),
( 'system:is the best quality file of its duplicate group', "system:is the best quality file of its group" ),
( 'system:is not the best quality file of its duplicate group', "system:isn't the best quality file of its duplicate group" ),
+ ( 'system:is not the best quality file of its duplicate group', 'system:is not the best quality file of its duplicate group' ),
( 'system:has audio', "system:has_audio" ),
( 'system:no audio', "system:no audio" ),
( 'system:has tags', "system:has tags" ),

View File

@@ -693,7 +693,14 @@ class Controller( object ):
pass
- return self._name_read_responses[ name ]
+ result = self._name_read_responses[ name ]
+ if isinstance( result, Exception ):
+ raise HydrusExceptions.DBException( result, str( result ), 'test trace' )
+ return result
def RegisterUIUpdateWindow( self, window ):

View File

@@ -1,3 +1,3 @@
- PyInstaller==3.5
+ PyInstaller
mock>=4.0.0
httmock>=1.4.0

static/qss/OledBlack.qss Normal file
View File

@@ -0,0 +1,397 @@
/* OledBlack.qss based on CutieDuck Hydrus.qss */
QToolTip
{
color: black;
border: 1px solid black;
background-color: #9B1800;
padding: 1px;
}
QWidget
{
color: white;
background-color: #101010;
alternate-background-color: #101010;
}
QWidget:disabled
{
background-color: #101010;
}
QWidget:item:hover
{
background-color: #FF2800;
color: black;
}
QWidget:item:selected
{
background-color: #9B1800;
}
QMenuBar::item
{
background: transparent;
}
QMenuBar::item:selected
{
background: transparent;
border: 1px solid #9B1800;
}
QMenu
{
border: 1px solid #000;
}
QMenu::item
{
padding: 2px 20px 2px 20px;
}
QMenu::item:selected
{
color: black;
}
QAbstractItemView
{
background-color: #242424;
}
QLineEdit
{
color: white;
background-color: #242424;
padding: 1px;
border-style: solid;
border: 1px solid #000000;
}
QPushButton
{
color: white;
background-color: #242424;
border-width: 1px;
border-color: #000000;
border-style: solid;
padding: 3px;
padding-left: 5px;
padding-right: 5px;
}
QPushButton:pressed
{
background-color: #323232;
}
QComboBox
{
selection-background-color: #9B1800;
background-color: #242424;
border-style: solid;
border: 1.0px solid #000000;
}
QComboBox:hover,QPushButton:hover
{
border: 1px solid #FF2800;
}
QComboBox:on
{
padding-top: 3px;
padding-left: 4px;
background-color: #2d2d2d;
selection-background-color: #9B1800;
}
QComboBox QAbstractItemView
{
border: 1px solid darkgray;
selection-background-color: #9B1800;
}
QComboBox::drop-down
{
subcontrol-origin: padding;
subcontrol-position: top right;
width: 15px;
border-left-width: 0px;
border-left-color: darkgray;
border-left-style: solid; /* just a single line */
}
QComboBox::down-arrow
{
image: url(:/down_arrow.png);
}
QGroupBox:focus
{
border: 1px solid #FF2800;
}
QTextEdit:focus
{
border: 1px solid #2d2d2d;
}
QScrollBar:horizontal {
border: 1px solid #000000;
background: black;
height: 7px;
margin: 0px 16px 0 16px;
}
QScrollBar::handle:horizontal
{
background: #9B1800;
min-height: 20px;
border-radius: 2px;
}
QScrollBar::add-line:horizontal {
border: 1px solid #000000;
border-radius: 2px;
background: #FF2800;
width: 14px;
subcontrol-position: right;
subcontrol-origin: margin;
}
QScrollBar::sub-line:horizontal {
border: 1px solid #000000;
border-radius: 2px;
background: #FF2800;
width: 14px;
subcontrol-position: left;
subcontrol-origin: margin;
}
QScrollBar::right-arrow:horizontal, QScrollBar::left-arrow:horizontal
{
border: 1px solid black;
width: 1px;
height: 1px;
background: white;
}
QScrollBar::add-page:horizontal, QScrollBar::sub-page:horizontal
{
background: none;
}
QScrollBar:vertical
{
background: #484848;
width: 7px;
margin: 16px 0 16px 0;
border: 1px solid #000000;
}
QScrollBar::handle:vertical
{
background: #9B1800;
min-height: 20px;
}
QScrollBar::add-line:vertical
{
border: 1px solid #000000;
background: #9B1800;
height: 14px;
subcontrol-position: bottom;
subcontrol-origin: margin;
}
QScrollBar::sub-line:vertical
{
border: 1px solid #000000;
background: #9B1800;
height: 14px;
subcontrol-position: top;
subcontrol-origin: margin;
}
QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical
{
border: 1px solid black;
width: 1px;
height: 1px;
background: white;
}
QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical
{
background: none;
}
QTextEdit
{
background-color: #000000;
}
QPlainTextEdit
{
background-color: #000000;
}
QHeaderView::section
{
background-color: #616161;
color: white;
padding-left: 4px;
border: 1px solid #6c6c6c;
}
QProgressBar
{
border: 1px solid grey;
text-align: center;
}
QProgressBar::chunk
{
background-color: #FF2800;
}
QTabBar::tab {
color: #b1b1b1;
background-color: #323232;
padding-left: 10px;
padding-right: 10px;
padding-top: 3px;
padding-bottom: 2px;
}
QTabWidget::pane {
border: 1px solid #444;
top: 1px;
}
QTabBar::tab:last
{
margin-right: 0; /* the last selected tab has nothing to overlap with on the right */
border-top-right-radius: 3px;
}
QTabBar::tab:first:!selected
{
margin-left: 0px; /* the first tab has nothing to overlap with on the left */
border-top-left-radius: 3px;
}
QTabBar::tab:selected
{
color: black;
border-bottom-style: solid;
background-color: #9B1800;
}
QTabBar::tab:hover:!selected
{
color: black;
background-color: #FF2800;
}
/*
DUCK
*/
/*
Default QSS for hydrus. This is prepended to any stylesheet loaded in hydrus.
Copying these entries in your own stylesheets should override these settings.
This will get more work in future.
*/
/*
Here are some text and background colours
*/
/* Example: This regex is valid */
QLabel#HydrusValid
{
color: #2ed42e;
}
QLineEdit#HydrusValid
{
background-color: #80ff80;
}
QTextEdit#HydrusValid
{
background-color: #80ff80;
}
/* Example: This regex is invalid */
QLabel#HydrusInvalid
{
color: #ff7171;
}
QLineEdit#HydrusInvalid
{
background-color: #ff8080;
}
QTextEdit#HydrusInvalid
{
background-color: #ff8080;
}
/* Example: Your files are going to be deleted! */
QLabel#HydrusWarning
{
color: #ff7171;
}
QCheckBox#HydrusWarning
{
color: #ff7171;
}
/*
Buttons on dialogs
*/
QPushButton#HydrusAccept
{
color: #2ed42e;
}
QPushButton#HydrusCancel
{
color: #ff7171;
}
/*
This is the green/red button that switches 'include current tags' and similar states on/off
*/
QPushButton#HydrusOnOffButton[hydrus_on=true]
{
color: #2ed42e;
}
QPushButton#HydrusOnOffButton[hydrus_on=false]
{
color: #ff7171;
}