Version 514

Hydrus Network Developer 2023-01-25 16:59:39 -06:00
parent b7e69d0e02
commit bac0421831
70 changed files with 2431 additions and 3461 deletions

docs/advanced_sidecars.md Normal file

@ -0,0 +1,134 @@
---
title: Sidecars
---
# sidecars
Sidecars are files that provide additional metadata about a master file. They typically share the same basic filename--if the master is 'Image_123456.jpg', the sidecar will be something like 'Image_123456.txt' or 'Image_123456.jpg.json'. This obviously makes it easy to figure out which sidecar goes with which file.
Hydrus does not use sidecars in its own storage, but it can import data from them and export data to them. It currently supports raw data in .txt files and encoded data in .json files, and that data can be either tags or URLs. I expect to extend this system in future to support XML and other metadata types such as ratings, timestamps, and inbox/archive status.
We'll start with .txt, since they are simpler.
## Importing Sidecars
Imagine you have some jpegs you downloaded with another program. That program grabbed the files' tags somehow, and you want to import the files with their tags without messing around with the Client API.
If your extra program can export the tags to a simple format--let's say newline-separated .txt files with the same basic filename as the jpegs, or you can, with some very simple scripting, convert to that format--then importing them to hydrus is easy!
Put the jpegs and the .txt files in the same directory and then drag and drop the directory onto the client, as you would for a normal import. The .txt files should not be added to the list. Then click 'add tags/urls with the import'. The sidecars are managed on one of the tabs:
[![](images/sidecars_example_manual_import.png)](images/sidecars_example_manual_import.png)
This system can get quite complicated, but the essential idea is that you are selecting one or more sidecar `sources`, parsing their text, and sending that list of data to one hydrus service `destination`. Most of the time you will be pulling from just one sidecar at a time.
### The Source Dialog
The `source` is a description of a sidecar to load and how to read what it contains.
In this example, the texts are like so:
``` title="4e01850417d1978e6328d4f40c3b550ef582f8558539b4ad46a1cb7650a2e10b.jpg.txt"
flowers
landscape
blue sky
```
``` title="5e390f043321de57cb40fd7ca7cf0cfca29831670bd4ad71622226bc0a057876.jpg.txt"
fast car
anime girl
night sky
```
Since our sidecars in this example are named (filename.ext).txt, and use newlines as the separator character, we can leave things mostly as default.
If you do not have newline-separated tags, for instance comma-separated tags (`flowers, landscape, blue sky`), then you can set that here. Be careful if you are making your own sidecars, since any separator character obviously cannot be used in tag text!
If your sidecars are named (filename).txt instead of (filename.ext).txt, then just hit the checkbox, but if the conversion is more complicated, then play around with the filename string converter and the test boxes.
If you need to, you can further process the texts that are loaded. They'll be trimmed of extra whitespace and so on automatically, so no need to worry about that, but if you need to, let's say, add the `creator:` prefix to everything, or filter out some mis-parsed garbage, this is the place.
### The Router Dialog
A 'Router' is a single set of orders to grab from one or more sidecars and send to a destination. You can have several routers in a single import or export context.
You can do more string processing here, and it will apply to everything loaded from every sidecar.
The destination is either a tag service (adding the loaded strings as tags), or your known URLs store.
### Previewing
Once you have something set up, the results are live-loaded in the dialog. Make sure everything looks correct, and then start the import as normal and you should see the tags or URLs being added as the import works.
It is good to try out some simple situations with one or two files just to get a feel for the system.
### Import Folders
If you have a constant flow of sidecar-attached media, then you can add sidecars to Import Folders too. Do a trial-run of anything you want to parse with a manual import before setting up the automatic system.
## Exporting Sidecars
The rules for exporting are similar, but now you are pulling from one or more hydrus service `sources` and sending to a single `destination` sidecar every time. Let's look at the UI:
[![](images/sidecars_example_manual_export.png)](images/sidecars_example_manual_export.png)
I have chosen to select these files' URLs and send them to newline-separated .urls.txt files. If I wanted to get the tags too, I could pull from one or more tag services, filter and convert the tags as needed, and then output to a .tags.txt file.
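The resulting sidecar is just the URLs, one per line. Something like this--a hypothetical example, reusing the example URLs from the JSON section below:
``` title="4e01850417d1978e6328d4f40c3b550ef582f8558539b4ad46a1cb7650a2e10b.jpg.urls.txt"
http://example.com/123456
https://site.org/post/45678
```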
The best way to learn with this is just to experiment. The UI may seem intimidating, but most jobs don't need you to work with multiple sidecars or string processing or clever filenames.
## JSON Files
JSON is more complicated than .txt. You might have multiple metadata types all together in one file, so you may end up setting up multiple routers that parse the same file for different content, or for an export you might want to populate the same export file with multiple kinds of content. Hydrus can do it!
### Importing
Since JSON files are richly structured, we will have to dip into the Hydrus parsing system:
[![](images/sidecars_example_json_import.png)](images/sidecars_example_json_import.png)
If you have made a downloader before, you will be familiar with this. If not, then you can brave [the help](downloader_parsers_formulae.md#json_formula) or just have a play around with the UI. In this example, I am getting the URL(s) of each JSON file, which are stored in a list under the `file_info_urls` key.
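For reference, a sidecar with that structure might look something like this--a hypothetical example, not the exact file in the screenshot:
```json title="Example .json sidecar (hypothetical)"
{
    "file_info_urls" : [
        "http://example.com/123456",
        "https://site.org/post/45678"
    ]
}
```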
It is important to paste an example JSON file that you want to parse into the parsing testing area (click the paste button) so you can test on read data live.
Once you have the parsing set up, the rest of the sidecar UI is the same as for .txt. The JSON Parsing formula is just the replacement/equivalent for the .txt 'separator' setting.
_Note that you could set up a second Router to import the tags from this file!_
### Exporting
In Hydrus, the exported JSON is typically a nested Object with a similar format to the Import example. You set the names of the Object keys.
[![](images/sidecars_example_json_export.png)](images/sidecars_example_json_export.png)
Here I have set the URLs of each file to be stored under `metadata->urls`, which will make this sort of structure:
``` json
{
"metadata" : {
"urls" : [
"http://example.com/123456",
"https://site.org/post/45678"
]
}
}
```
The cool thing about JSON files is I can export multiple times to the same file and it will update it! Let's say I made a second Router that grabbed the tags, and it was set to export to the same filename but under `metadata->tags`. The final sidecar would look like this:
``` json
{
"metadata" : {
"tags" : [
"blonde hair",
"blue eyes",
"skirt"
],
"urls" : [
"http://example.com/123456",
"https://site.org/post/45678"
]
}
}
```
You should be careful that the location you are exporting to does not have any old JSON files with conflicting filenames in it--hydrus will update them, not overwrite them! This may be an issue if you have a synchronising Export Folder that exports random files with the same filenames.


@ -7,6 +7,95 @@ title: Changelog
!!! note
This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).
## [Version 514](https://github.com/hydrusnetwork/hydrus/releases/tag/v514)
### downloaders
* twitter took down the API we were using, breaking all our nice twitter downloaders! argh!
* a user has figured out a basic new downloader that grabs the tweets amongst the first twenty tweets-and-retweets of an account. yes, only the first twenty max, and usually fewer. because this is a big change, the client will ask about it when you update. if you have some complicated situation where you are working on the old default twitter downloaders and don't want them deleted, you can select 'no' on the dialog it throws up, but everyone else wants to say 'yes'. then check your twitter subs: make sure they moved to the new downloader, and you probably want to make them check more frequently too.
* given the rate of changes at twitter, I think we can expect more changes and blocks in future. I don't know whether nitter will be a viable alternative, so if the artists you like end up on a nice simple booru _anywhere_, I strongly recommend just moving there. twitter appears to be explicitly moving in a non-third-party-friendly direction
* thanks to a user's work, the 'danbooru - get webm ugoira' parser is fixed!
* thanks to a user's work, the deviant art parser is updated to get the highest res image in more situations!
* thanks to a user's work, the pixiv downloader now gets the artist note, in japanese (and translated, if there is one), and a 'medium:ai generated' tag!
### sidecars
* I wrote some sidecar help here! https://hydrusnetwork.github.io/hydrus/advanced_sidecars.html
* when the client parses files for import, the 'does this look like a sidecar?' test now also checks that the base component of the base filename (e.g. 'Image123' from 'Image123.jpg.txt') actually appears in the list of non-txt/json/xml ext files. a random yo.txt file out of nowhere will now be inspected in case it is secretly a jpeg again, for good or ill
* when you drop some files on the client, the number of files skipped because they looked like sidecars is now stated in the status label
* fixed a typo bug that meant tags imported from sidecars were not being properly cleaned, despite the preview appearing otherwise--for instance ':)', which in hydrus needs to be secretly stored as '::)', was being imported as ')'
* as a special case, tags that in hydrus are secretly '::)' will be converted to ':)' on export to sidecar too, the inverse of the above problem. there may be some other tag cleaning quirks to undo here, so let me know what you run into
### related tags overhaul
* the 'related tags' suggestion system, turned on under _options->tag suggestions_, has several changes, including some prototype tech I'd love feedback on
* first off, there are two new search buttons, 'new 1' and 'new 2' ('2' is available on repositories only). these use an upgraded statistical search and scoring system that a user worked on and sent in. I have butchered his specific namespace searching system to something more general/flexible and easy for me to maintain, but it works better and more comprehensibly than my old method! give it a go and let me know how each button does--the first one will be fast but less useful on the PTR, the second will be slower but generally give richer results (although it cannot do tags with too-high count)
* the new search routine works on multiple files, so 'related tags' now shows on tag dialogs launched from a selection of thumbnails!
* also, all the related search buttons now search any selection of tags you make!!! so if you can't remember that character's name, just click on the series or another character they are often with and hit the search, and you should get a whole bunch appear
* I am going to keep working on this in the future. the new buttons will become the only buttons, I'll try and mitigate the prototype search limitations, add some cancel tech, move to a time-based search length like the current buttons, and I'll add more settings, including for filtering so we aren't looking up related tags for 'page:x' and so on. I'm interested in knowing how you get on with IRL data. are there too many recommendations (is the tolerance too high?)? is the sorting good (is the stuff at the top relevant or often just noise?)?
### misc
* all users can now copy their service keys (which are a technical non-changing hex identifier for your client's services) from the review services window--advanced mode is no longer needed. this may be useful as the client api transitions to service keys
* when a job in the downloader search log generates new jobs (e.g. fetches the next page), the new job(s) are now inserted after the parent. previously, they were appended to the end of the list. this changes how ngugs operate, converting their searches from interleaved to sequential!
* restarting search log jobs now also places the new job after the restarted job
* when you create a new export folder, if you have default metadata export sidecar settings from a previous manual file export, the program now asks if you want those for the new export folder or an empty list. previously, it just assigned the saved default, which could be jarring if it was saved from ages ago
* added a migration guide to the running from source help. also brushed up some language and fixed a bunch of borked title weights in that document
* the max initial and periodic file limits in subscriptions are now 50k when in advanced mode. I can't promise that would be nice though!
* the file history chart no longer says that inbox and delete time tracking are new
### misc fixes
* fixed a cursor type detection test that was stopping the cursor from hiding immediately when you do a media viewer drag in Qt6
* fixed an issue where 'clear deletion record' calls were not deleting from the newer 'all my files' domain. the erroneous extra records will be searched for and scrubbed on update
* fixed the issue where if you had the new 'unnamespaced input gives (any namespace) wildcard results' search option on, you couldn't add any novel tags in WRITE autocomplete contexts like 'manage tags'!!! it could only offer the automatically converted wildcard tags as suggested input, which of course aren't appropriate for a WRITE context. the way I ultimately fixed this was horrible; the whole thing needs more work to deal with clever logic like this better, so let me know if you get any more trouble here
* I think I fixed an infinite hang when trying to add certain siblings in manage tag siblings. I believe this was occurring when the dialog was testing if the new pair would create a loop when the sibling structure already contains a loop. now it throws up a message and breaks the test
* fixed an issue where certain system:filetype predicates would spawn apparent duplicates of themselves instead of removing on double-click. images+audio+video+swf+pdf was one example. it was a 'all the image types' vs 'list of (all the) image types' conversion/comparison/sorting issue
### client api
* **this is later than I expected, but as was planned last year, I am clearing up several obsolete parameters and data structures this week. mostly it is bad service name-identification that seemed simple or flexible to support but just added maintenance debt, induced bad implementation practices, and hindered future expansions. if you have a custom api script, please read on--and if you have not yet moved to the alternatives, do so before updating!**
* **all `...service_name...` parameters are officially obsolete! they will still work via some legacy hacks, so old scripts shouldn't break, but they are no longer documented. please move to the `...service_key...` alternates as soon as reasonably possible (check out `/get_services` if you need to learn about service keys)**
* **`/add_tags/get_tag_services` is removed! use `/get_services` instead!**
* **`hide_service_names_tags`, previously made default true, is removed and its data structures `service_names_to_statuses_to_...` are also gone! move to the new `tags` structure.**
* **`hide_service_keys_tags` is now default true. it will be removed in 4 weeks or so. same deal as with `service_names_to_statuses_to_...`--move to `tags`**
* **`system_inbox` and `system_archive` are removed from `/get_files/search_files`! just use 'system:inbox/archive' in the tags list**
* **the 'set_file_relationships' command from last week has been reworked to have a nicer Object parameter with a new name. please check the updated help!** normally I wouldn't change something so quick, but we are still in early prototype, so I'm ok shifting it (and the old method still works lmao, but I'll clear that code out in a few weeks, so please move over--the Object will be much nicer to expand in future, which I forgot about in v513)
* many Client API commands now support modern file domain objects, meaning you can search a UNION of file services and 'deleted-from' file services. the affected commands are:
    * /add_files/delete_files
    * /add_files/undelete_files
    * /add_tags/search_tags
    * /get_files/search_files
    * /manage_file_relationships/get_everything
* a new `/get_service` call now lets you ask about an individual service by service name or service key, basically a parameterised /get_services
* the `/manage_pages/get_pages` and `/manage_pages/get_page_info` calls now give the `page_state`, a new enum that says if the page is ready, initialised, searching, or search-cancelled
* to reduce duplicate argument spam, the client api help now splits the complicated 'files' and the new 'file domain' arguments into their own sub-sections, and the commands that use them just point to those sub-sections. check it out--it makes sense when you look at it.
* `/add_tags/add_tags` now raises 400 if you give an invalid content action (e.g. pending to a local tag service). previously it skipped these rows silently
* added and updated unit tests and help for the above changes
* client api version is now 41
### boring optimisation
* when you are looking at a search log or file log, if entries are added, removed, or moved around, all the log entries that have a changed row # now update (previously it just sent a redraw signal for the new rows, not the second-order affected rows that were shuffled up/down). many access routines for these logs are also sped up
* file log status checking is completely rewritten. the ways it searches, caches and optimises the 'which is the next item with x status' queues is faster and requires far less maintenance. large import queues have less overhead, so the in and outs of general download work should scale up much better now
* the main data cache that stores rendered images, image tiles, and thumbnails now maintains itself far more efficiently. there was a hellish O(n) overhead when adding or removing an item which has been reduced to constant time. this gonk was being spammed every few minutes during normal memory maintenance, when hundreds of thumbs can be purged at once. clients with tens of thousands of thumbnails in memory will maintain that list far more smoothly
* physical file delete is now more efficient, requiring far fewer hard drive hits to delete a media file. it is also far less aggressive, with a new setting in _options->files and trash_ that sets how long to wait between individual file deletes, default 250ms. before, it was full LFG mode with minor delays every hundred/thousand jobs, and since it takes a write lock, it was lagging out thumbnail load when hitting a lot of work. the daemon here also shuts down faster if caught working during program shut down
### boring code cleanup
* refactored some parsing routines to be more flexible
* added some more dictionary and enum type testing to the client api parameter parsing routines. error messages should be better!
* improved how `/add_tags/add_tags` parsing works, ensuring both access methods check all types and report nicer errors
* cleaned up the `/search_files/file_metadata` call's parsing, moving to the new generalised method and smoothing out some old code flow. it now checks hashes against the last search, too
* cleaned up `/manage_pages/add_files` similarly
* cleaned up how tag services are parsed and their errors reported in the client api
* the client api is better about processing the file identifiers you give it in the same order you gave them
* fixed bad 'potentials_search_type'/'search_type' inconsistency in the client api help examples
* obviously a bunch of client api unit test and help cleanup to account for the obsolete stuff and various other changes here
* updated a bunch of the client api unit tests to handle some of the new parsing
* fixed the remaining 'randomly fail due to complex counting logic' potential count unit tests. turns out there were like seven more of them
## [Version 513](https://github.com/hydrusnetwork/hydrus/releases/tag/v513)
### client api
@ -430,52 +519,3 @@ title: Changelog
* cleaned up some edge cases in the 'which account added this file/mapping to the server?' tech, where it might have been possible, when looking up deleted content, to get another janitor account (i.e. who deleted the content), although I am pretty sure this situation was never possible to actually start in UI. if I add 'who deleted this?' tech in future, it'll be a separate specific call
* cleaned up some specifically 'Qt6' references in the build script. the build requirements.txts and spec files are also collapsed down, with old Qt5 versions removed
* filled out some incomplete abstract class definitions
## [Version 504](https://github.com/hydrusnetwork/hydrus/releases/tag/v504)
### Qt5
* as a reminder, I am no longer supporting Qt5 with the official builds. if you are on Windows 7 (and I have heard at least one version of Win 8.1), or a similarly old OS, you likely cannot run the official builds now. if this is you, please check the 'running from source' guide in the help, which will allow you to keep updating the program. this process is now easy in Windows and should be similarly easy on other platforms soon
### misc
* if you run from source in windows, the program _should_ now have its own taskbar group and use the correct hydrus icon. if you try and pin it to taskbar, it will revert to the 'python' icon, but you can give a shortcut to a batch file an icon and pin that to start
* unfortunately, I have to remove the 'deviant art tag search' downloader this week. they killed the old API we were using, and what remaining open date-paginated search results the site offers is obfuscated and tokenised (no permanent links), more than I could quickly unravel. other downloader creators are welcome to give it a go. if you have a subscription for a da tag search, it will likely complain on its next run. please pause it and try to capture the best artists from that search (until DA kill their free artist api, then who knows what will happen). the oauth/phone app menace marches on
* focus on the thumbnail panel is now preserved whenever it swaps out for another (like when you refresh the search)
* fixed an issue where cancelling service selection on database->c&r->repopulate truncated would create an empty modal message
* fixed a stupid typo in the recently changed server petition counting auto-fixing code
### importer/exporter sidecar expansion
* when you import or export files from/to disk, either manually or automatically, the option to pull or send tags to .txt files is now expanded:
    * you can now import or export URLs
    * you can now read or write .json files
    * you can now import from or export to multiple sidecars, and have multiple separate pipelines
    * you can now give sidecar files suffixes, for ".tags.txt" and similar
    * you can now filter and transform all the strings in this pipeline using the powerful String Processor just like in the parsing system
* this affects manual imports, manual exports, import folders, and export folders. instead of smart .txt checkboxes, there's now a button leading to some nested dialogs to customise your 'routers' and, in manual imports, a new page tab in the 'add tags before import' window
* the bones of this system were already working in the background when I introduced it earlier this year, but now all components are exposed
* new export folders now start with the same default metadata migration as set in the last manual file export dialog
* this system will expand in future. most important is to add a 'favourites' system so you can easily save/load your different setups. then adding more content types (e.g. ratings) and .xml. I'd also like to add purely internal file-to-itself datatype transformation (e.g. pulling url:(url) tags and converting them to actual known urls, and vice versa)
### importer/exporter sidecar expansion (boring stuff)
* split the importer/exporter objects into separate importers and exporters. existing router objects will update and split their internal objects safely
* all objects in this system can now describe themselves
* all import/export nodes now produce appropriate example texts for string processing and parsing UI test panels
* Filename Tagging Options objects no longer track neighbouring .txt file importing, and their UI removes it too. Import Folders will suck their old data on update and convert to metadata routers
* wrote a json sidecar importer that takes a parsing formula
* wrote a json sidecar exporter that takes a list of dictionary names to export to. it will edit an existing file
* wrote some ui panels to edit single file metadata migration routers
* wrote some ui panels to edit single file metadata migration importers
* wrote some ui panels to edit single file metadata migration exporters
* updated edit export folder panel to use the new UI. it was already using a full static version of the system behind the scenes; now this is exposed and editable
* updated the manual file export panel to use the new UI. it was using a half version of the system before--now the default options are updated to the new router object and you can create multiple exports
* updated import folders to use the new UI. the filename tagging options no longer handles .txt, it is now on a separate button on the import folder
* updated manual file imports to use the new UI. the 'add tags before import' window now has a 'sidecars' page tab, which lets you edit metadata routers. it updates a path preview list live with what it expects to parse
* a full suite of new unit tests now checks the router, the four import nodes, and the four export nodes thoroughly
* renamed ClientExportingMetadata to ClientMetadataMigration and moved to the metadata module. refactored the importers, exporters, and shared methods to their own files in the same module
* created a gui.metadata module for the new router and metadata import/export widgets and panels
* created a gui.exporting module for the existing export folder and manual export gui code
* reworked some of the core importer/exporter objects and inheritance in clientmetadatamigration
* updated the HDDImport object and creation pipeline to handle metadata routers (as piped from the new sidecars tab)
* when the hdd import or import folder is set to delete original files, now all defined sidecars are deleted along with the media file
* cleaned up a bunch of related metadata importer/exporter code
* cleaned import folder code
* cleaned hdd importer code


@ -106,6 +106,68 @@ Session keys will expire if they are not used within 24 hours, or if the client
Bear in mind the Client API is still under construction. Setting up the Client API to be accessible across the internet requires technical experience to be convenient. HTTPS is available for encrypted comms, but the default certificate is self-signed (which basically means an eavesdropper can't see through it, but your ISP/government could if they decided to target you). If you have your own domain and SSL cert, you can replace them though (check the db directory for client.crt and client.key). Otherwise, be careful about transmitting sensitive content outside of your localhost/network.
## Common Complex Parameters
### **files** { id="parameters_files" }
If you need to refer to some files, you can use any of the following:
Arguments:
:
* `file_id`: (selective, a numerical file id)
* `file_ids`: (selective, a list of numerical file ids)
* `hash`: (selective, a hexadecimal SHA256 hash)
* `hashes`: (selective, a list of hexadecimal SHA256 hashes)
In GET requests, make sure any list is percent-encoded.
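For instance, asking about two files by id in a GET request would encode the list like this (the same encoding as the /get_files/file_metadata example later in this document):
``` title="Example request"
/get_files/file_metadata?file_ids=%5B123%2C%204567%5D
```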
### **file domain** { id="parameters_file_domain" }
When you are searching, you may want to specify a particular file domain. Most of the time, you'll want to just set `file_service_key`, but this can get complex:
Arguments:
:
* `file_service_key`: (optional, selective A, hexadecimal, the file domain on which to search)
* `file_service_keys`: (optional, selective A, list of hexadecimals, the union of file domains on which to search)
* `deleted_file_service_key`: (optional, selective B, hexadecimal, the 'deleted from this file domain' on which to search)
* `deleted_file_service_keys`: (optional, selective B, list of hexadecimals, the union of 'deleted from this file domain' on which to search)
The service keys are as in [/get\_services](#get_services).
Hydrus supports two concepts here:
* Searching over a UNION of subdomains. If the user has several local file domains, e.g. 'favourites', 'personal', 'sfw', and 'nsfw', they might like to search two of them at once.
* Searching deleted files of subdomains. You can specifically, and quickly, search the files that have been deleted from somewhere.
You can play around with this yourself by clicking 'multiple locations' in the client with _help->advanced mode_ on.
In extreme edge cases, these two can be mixed by populating both A and B selective, making a larger union of both current and deleted file records.
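As an illustration, a file domain that unions two of those hypothetical local file domains would populate selective A like this (the values here are placeholders--use the real hex service keys from /get\_services):
```json title="Hypothetical file domain union"
{
    "file_service_keys" : [
        "(hex service key of the 'sfw' domain)",
        "(hex service key of the 'nsfw' domain)"
    ]
}
```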
Please note that unions can be very very computationally expensive. If you can achieve what you want with a single file_service_key, two queries in a row with different service keys, or an umbrella like `all my files` or `all local files`, please do. Otherwise, let me know what is running slow and I'll have a look at it.
'deleted from all local files' includes all files that have been physically deleted (i.e. deleted from the trash) and not available any more for fetch file/thumbnail requests. 'deleted from all my files' includes all of those physically deleted files _and_ the trash. If a file is deleted with the special 'do not leave a deletion record' command, then it won't show up in a 'deleted from file domain' search!
'all known files' is a tricky domain. It converts much of the search tech to ignore where files actually are and look at the accompanying tag domain (e.g. all the files that have been tagged), and can sometimes be very expensive.
Also, if you have the option to set both file and tag domains, you cannot enter 'all known files'/'all known tags'. It is too complicated to support, sorry!
### **legacy service_name parameters** { id="legacy_service_name_parameters" }
The Client API used to respond to name-based service identifiers, for instance using 'my tags' instead of something like '6c6f63616c2074616773'. Service names can change, and they aren't _strictly_ unique either, so I have moved away from them, but there is some soft legacy support.
The client will attempt to convert any of these to their 'service_key(s)' equivalents:
* file_service_name
* tag_service_name
* service_names_to_tags
* service_names_to_actions_to_tags
* service_names_to_additional_tags
But I strongly encourage you to move away from them as soon as reasonably possible. Look up the service keys you need with [/get\_service](#get_service) or [/get\_services](#get_services).
If you have a clever script/program that does many things, then hit up [/get\_services](#get_services) on session initialisation and cache an internal map of key_to_name for the labels to use when you present services to the user.
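For instance, a cached key-to-name map built from /get\_services might be as simple as this (using the example services from elsewhere in this document):
```json title="Example key_to_name cache"
{
    "6c6f63616c2074616773" : "my tags",
    "616c6c206b6e6f776e2074616773" : "all known tags"
}
```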
Also, note that all users can now copy their service keys from _review services_.
## Access Management
@ -209,6 +271,44 @@ Response:
```
### **GET `/get_service`** { id="get_service" }
_Ask the client about a specific service._
Restricted access:
: YES. At least one of Add Files, Add Tags, Manage Pages, or Search Files permission needed.
Required Headers: n/a
Arguments:
:
* `service_name`: (selective, string, the name of the service)
* `service_key`: (selective, hex string, the service key of the service)
Example requests:
:
```title="Example requests"
/get_service?service_name=my%20tags
/get_service?service_key=6c6f63616c2074616773
```
Response:
: Some JSON about the service. The same basic format as [/get\_services](#get_services)
```json title="Example response"
{
"service" : {
"name" : "my tags",
"service_key" : "6c6f63616c2074616773",
"type" : 5,
"type_pretty" : "local tag service"
}
}
```
If the service does not exist, this gives 404. It is very unlikely but edge-case possible that two services will have the same name; in this case, you will get one of them pseudorandomly.
It will only respond to services in the /get_services list. I will expand the available types in future as we add ratings etc... to the Client API.
### **GET `/get_services`** { id="get_services" }
_Ask the client about its file and tag services._
@ -312,11 +412,14 @@ Response:
]
}
```
These services may be referred to in various metadata responses or required in request parameters (e.g. where to add tag mappings). Note that a user can rename their services. The older parts of the Client API use the renameable 'service name' as service identifier, but wish to move away from this. Please use the hex 'service_key', which is a non-mutable ID specific to each client. The hardcoded services have shorter service key strings (it is usually just 'all known files' etc.. ASCII-converted to hex), but user-made stuff will have 64-character hex.
Note that a user can rename their services, so while they will recognise `name`, it is not an excellent identifier, and definitely not something to save to any permanent config file.
Now that I state `type` and `type_pretty` here, I may rearrange this call, probably to make the `service_key` the Object key, rather than the arbitrary 'all_known_tags' strings.
`service_key` is non-mutable and is the main service identifier. The hardcoded/initial services have shorter fixed service key strings (it is usually just 'all known files' etc.. ASCII-converted to hex), but user-created services will have random 64-character hex.
You won't see all these, and you'll only ever need some, but `type` is:
Now that I state `type` and `type_pretty` here, I may rearrange this call, probably into a flat list. The `all_known_files` Object keys here are arbitrary.
For service `type`, you won't see all these, and you'll only ever need some, but the enum is:
* 0 - tag repository
* 1 - file repository
@ -337,6 +440,7 @@ Response:
* 21 - all my files -- union of all local file domains
* 99 - server administration
`type_pretty` is something you can show users if you like. Hydrus uses the same labels in _manage services_ and so on.
## Importing and Deleting Files
@ -396,12 +500,8 @@ Required Headers:
Arguments (in JSON):
:
* `hash`: (an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* `file_service_name`: (optional, selective, string, the local file domain from which to delete, or all local files)
* `file_service_key`: (optional, selective, hexadecimal, the local file domain from which to delete, or all local files)
* [files](#parameters_files)
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `reason`: (optional, string, the reason attached to the delete action)
```json title="Example request body"
@ -411,9 +511,7 @@ Arguments (in JSON):
Response:
: 200 and no content.
You can use hash or hashes, whichever is more convenient.
If you specify a file service, the file will only be deleted from that location. Only local file domains are allowed (so you can't delete from a file repository or unpin from ipfs yet), but if you specific 'all local files', you should be able to trigger a physical delete if you wish.
If you specify a file service, the file will only be deleted from that location. Only local file domains are allowed (so you can't delete from a file repository or unpin from ipfs yet). It defaults to 'all my files', which will delete from all local services (i.e. force sending to trash). Sending 'all local files' on a file already in the trash will trigger a physical file delete.
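For illustration, a hypothetical delete that records a reason and falls back to the default 'all my files' domain might look like this:
```json title="Example request body"
{
    "hash" : "78f92ba4a786225ee2a1236efa6b7dc81dd729faf4af99f96f3e20bad6d8b538",
    "reason" : "accidental import"
}
```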
### **POST `/add_files/undelete_files`** { id="add_files_undelete_files" }
@ -428,12 +526,8 @@ Required Headers:
Arguments (in JSON):
:
* `hash`: (an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* `file_service_name`: (optional, selective, string, the local file domain to which to undelete)
* `file_service_key`: (optional, selective, hexadecimal, the local file domain to which to undelete)
* [files](#parameters_files)
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
```json title="Example request body"
{"hash" : "78f92ba4a786225ee2a1236efa6b7dc81dd729faf4af99f96f3e20bad6d8b538"}
@ -444,7 +538,7 @@ Response:
You can use hash or hashes, whichever is more convenient.
This is the reverse of a delete_files--removing files from trash and putting them back where they came from. If you specify a file service, the files will only be undeleted to there (if they have a delete record, otherwise this is nullipotent). If you do not specify a file service, they will be undeleted to all local file services for which there are deletion records. There is no error if any files do not currently exist in 'trash'.
This is the reverse of a delete_files--removing files from trash and putting them back where they came from. If you specify a file service, the files will only be undeleted to there (if they have a delete record, otherwise this is nullipotent). The default, 'all my files', undeletes to all local file services for which there are deletion records. There is no error if any of the files do not currently exist in 'trash'.
### **POST `/add_files/archive_files`** { id="add_files_archive_files" }
@ -460,10 +554,7 @@ Required Headers:
Arguments (in JSON):
:
* `hash`: (an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* [files](#parameters_files)
```json title="Example request body"
{"hash" : "78f92ba4a786225ee2a1236efa6b7dc81dd729faf4af99f96f3e20bad6d8b538"}
@ -472,8 +563,6 @@ Arguments (in JSON):
Response:
: 200 and no content.
You can use hash or hashes, whichever is more convenient.
This puts files in the 'archive', taking them out of the inbox. It only has meaning for files currently in 'my files' or 'trash'. There is no error if any files do not currently exist or are already in the archive.
@ -490,10 +579,7 @@ Required Headers:
Arguments (in JSON):
:
* `hash`: (an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* [files](#parameters_files)
```json title="Example request body"
{"hash" : "78f92ba4a786225ee2a1236efa6b7dc81dd729faf4af99f96f3e20bad6d8b538"}
@ -502,8 +588,6 @@ Arguments (in JSON):
Response:
: 200 and no content.
You can use hash or hashes, whichever is more convenient.
This puts files back in the inbox, taking them out of the archive. It only has meaning for files currently in 'my files' or 'trash'. There is no error if any files do not currently exist or are already in the inbox.
@ -616,10 +700,8 @@ Arguments (in JSON):
* `destination_page_key`: (optional page identifier for the page to receive the url)
* `destination_page_name`: (optional page name to receive the url)
* `show_destination_page`: (optional, defaulting to false, controls whether the UI will change pages on add)
* `service_names_to_additional_tags`: (optional, selective, tags to give to any files imported from this url)
* `service_keys_to_additional_tags`: (optional, selective, tags to give to any files imported from this url)
* `filterable_tags`: (optional tags to be filtered by any tag import options that applies to the URL)
* _`service_names_to_tags`: (obsolete, legacy synonym for service\_names\_to\_additional_tags)_
If you specify a `destination_page_name` and an appropriate importer page already exists with that name, that page will be used. Otherwise, a new page with that name will be created (and used by subsequent calls with that name). Make sure that page name is unique (e.g. '/b/ threads', not 'watcher') in your client, or it may not be found.
@ -627,7 +709,7 @@ Alternately, `destination_page_key` defines exactly which page should be used. B
`show_destination_page` defaults to False to reduce flicker when adding many URLs to different pages quickly. If you turn it on, the client will behave like a URL drag and drop and select the final page the URL ends up on.
`service_names_to_additional_tags` and `service_keys_to_additional_tags` use the same data structure as in /add\_tags/add\_tags--service ids to a list of tags to add. You will need 'add tags' permission or this will 403. These tags work exactly as 'additional' tags work in a _tag import options_. They are service specific, and always added unless some advanced tag import options checkbox (like 'only add tags to new files') is set.
`service_keys_to_additional_tags` uses the same data structure as in /add\_tags/add\_tags--service keys to a list of tags to add. You will need 'add tags' permission or this will 403. These tags work exactly as 'additional' tags work in a _tag import options_. They are service specific, and always added unless some advanced tag import options checkbox (like 'only add tags to new files') is set.
filterable_tags works like the tags parsed by a hydrus downloader. It is just a list of strings. They have no inherent service and will be sent to a _tag import options_, if one exists, to decide which tag services get what. This parameter is useful if you are pulling all of a URL's tags outside of hydrus and want to have them processed like any other downloader, rather than figuring out service names and namespace filtering on your end. Note that in order for a tag import options to kick in, I think you will have to have a Post URL URL Class hydrus-side set up for the URL so some tag import options (whether that is Class-specific or just the default) can be loaded at import time.
@ -635,8 +717,8 @@ filterable_tags works like the tags parsed by a hydrus downloader. It is just a
{
"url" : "https://8ch.net/tv/res/1846574.html",
"destination_page_name" : "kino zone",
"service_names_to_additional_tags" : {
"my tags" : ["as seen on /tv/"]
"service_keys_to_additional_tags" : {
"6c6f63616c2074616773" : ["as seen on /tv/"]
}
}
```
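If you would rather let a _tag import options_ decide where tags go, a hypothetical request using `filterable_tags` instead could look like this:
```json title="Example request body"
{
    "url" : "https://8ch.net/tv/res/1846574.html",
    "destination_page_name" : "kino zone",
    "filterable_tags" : ["as seen on /tv/"]
}
```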
@ -703,16 +785,13 @@ Required Headers:
Arguments (in JSON):
:
* `url_to_add`: (an url you want to associate with the file(s))
* `urls_to_add`: (a list of urls you want to associate with the file(s))
* `url_to_delete`: (an url you want to disassociate from the file(s))
* `urls_to_delete`: (a list of urls you want to disassociate from the file(s))
* `hash`: (an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* `url_to_add`: (optional, selective A, an url you want to associate with the file(s))
* `urls_to_add`: (optional, selective A, a list of urls you want to associate with the file(s))
* `url_to_delete`: (optional, selective B, an url you want to disassociate from the file(s))
* `urls_to_delete`: (optional, selective B, a list of urls you want to disassociate from the file(s))
* [files](#parameters_files)
All of these are optional, but you obviously need to have at least one of `url` arguments and one of the `hash` arguments. The single/multiple arguments work the same--just use whatever is convenient for you. Unless you really know what you are doing with URL Classes, I strongly recommend you stick to associating URLs with just one single 'hash' at a time. Multiple hashes pointing to the same URL is unusual and frequently unhelpful.
The single/multiple arguments work the same--just use whatever is convenient for you. Unless you really know what you are doing with URL Classes, I strongly recommend you stick to associating URLs with just one single 'hash' at a time. Multiple hashes pointing to the same URL is unusual and frequently unhelpful.
```json title="Example request body"
{
"url_to_add" : "https://rule34.xxx/index.php?id=2588418&page=post&s=view",
@ -756,32 +835,6 @@ Response:
Mostly, hydrus simply trims excess whitespace, but the other examples are rare issues you might run into. 'system' is an invalid namespace, tags cannot be prefixed with hyphens, and any tag starting with ':' is secretly dealt with internally as "\[no namespace\]:\[colon-prefixed-subtag\]". Again, you probably won't run into these, but if you see a mismatch somewhere and want to figure it out, or just want to sort some numbered tags, you might like to try this.
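As a rough illustration of those rules--hypothetical input-to-output pairs, not the endpoint's own example:
```json title="Hypothetical cleaning results"
{
    "  bikini " : "bikini",
    "-flower" : "flower",
    ":)" : "::)"
}
```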
### **GET `/add_tags/get_tag_services`** { id="add_tags_get_tag_services" }
!!! warning "Deprecated"
This is becoming obsolete and will be removed! Use [/get_services](#get_services) instead!
_Ask the client about its tag services._
Restricted access:
: YES. Add Tags permission needed.
Required Headers: n/a
Arguments: n/a
Response:
: Some JSON listing the client's 'local tags' and tag repository services by name.
```json title="Example response"
{
"local_tags" : ["my tags"],
"tag_repositories" : [ "public tag repository", "mlp fanfic tagging server" ]
}
```
!!! note
A user can rename their services. Don't assume the client's local tags service will be "my tags".
### **GET `/add_tags/search_tags`** { id="add_tags_search_tags" }
_Search the client for tags._
@ -793,15 +846,21 @@ Required Headers: n/a
Arguments:
:
* `search`: (the tag text to search for, enter exactly what you would in the client UI)
* `tag_service_key`: (optional, selective, hexadecimal, the tag domain on which to search)
* `tag_service_name`: (optional, selective, string, the tag domain on which to search)
* `tag_display_type`: (optional, string, to select whether to search raw or sibling-processed tags)
* `search`: (the tag text to search for, enter exactly what you would in the client UI)
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `tag_service_key`: (optional, hexadecimal, the tag domain on which to search, defaults to 'all known tags')
* `tag_display_type`: (optional, string, to select whether to search raw or sibling-processed tags, defaults to 'storage')
The `file domain` and `tag_service_key` perform the function of the file and tag domain buttons in the client UI.
The `tag_display_type` can be either `storage` (the default), which searches your file's stored tags, just as they appear in a 'manage tags' dialog, or `display`, which searches the sibling-processed tags, just as they appear in a normal file search page. In the example above, setting the `tag_display_type` to `display` could well combine the two kim possible tags and give a count of 3 or 4.
'all my files'/'all known tags' works fine for most cases, but a specific tag service or 'all known files'/'tag service' can work better for editing tag repository `storage` contexts, since it provides results just for that service, and for repositories, it gives tags for all the non-local files other users have tagged.
Example request:
:
```http title="Example request"
/add_tags/search_tags?search=kim
/add_tags/search_tags?search=kim&tag_display_type=display
```
Response:
@ -827,14 +886,10 @@ Response:
}
```
The `tags` list will be sorted by descending count. If you do not specify a tag service, it will default to 'all known tags'. The various rules in _tags->manage tag display and search_ (e.g. no pure `*` searches on certain services) will also be checked--and if violated, you will get 200 OK but an empty result.
The `tag_display_type` can be either `storage` (the default), which searches your file's stored tags, just as they appear in a 'manage tags' dialog, or `display`, which searches the sibling-processed tags, just as they appear in a normal file search page. In the example above, setting the `tag_display_type` to `display` could well combine the two kim possible tags and give a count of 3 or 4.
The `tags` list will be sorted by descending count. The various rules in _tags->manage tag display and search_ (e.g. no pure `*` searches on certain services) will also be checked--and if violated, you will get 200 OK but an empty result.
Note that if your client api access is only allowed to search certain tags, the results will be similarly filtered.
Also, for now, it gives you the 'storage' tags, which are the 'raw' ones you see in the manage tags dialog, without collapsed siblings, but more options will be added in future.
### **POST `/add_tags/add_tags`** { id="add_tags_add_tags" }
_Make changes to the tags that files have._
@ -846,21 +901,14 @@ Required Headers: n/a
Arguments (in JSON):
:
* `hash`: (selective A, an SHA256 hash for a file in 64 characters of hexadecimal)
* `hashes`: (selective A, a list of SHA256 hashes)
* `file_id`: (a numerical file id)
* `file_ids`: (a list of numerical file ids)
* `service_names_to_tags`: (selective B, an Object of service names to lists of tags to be 'added' to the files)
* [files](#parameters_files)
* `service_keys_to_tags`: (selective B, an Object of service keys to lists of tags to be 'added' to the files)
* `service_names_to_actions_to_tags`: (selective B, an Object of service names to content update actions to lists of tags)
* `service_keys_to_actions_to_tags`: (selective B, an Object of service keys to content update actions to lists of tags)
You can use either 'hash' or 'hashes'.
You can use either 'service\_names\_to...' or 'service\_keys\_to...', where names is simple and human-friendly "my tags" and similar (but may be renamed by a user), but keys is a little more complicated but accurate/unique. Since a client may have multiple tag services with non-default names and pseudo-random keys, if it is not your client you will need to check the [/get_services](#get_services) call to get the names or keys, and you may need some selection UI on your end so the user can pick what to do if there are multiple choices. I encourage using keys if you can.
In 'service\_keys\_to...', the keys are as in [/get\_services](#get_services). You may need some selection UI on your end so the user can pick what to do if there are multiple choices.
Also, you can use either '...to\_tags', which is simple and add-only, or '...to\_actions\_to\_tags', which is more complicated and allows you to remove/petition or rescind pending content.
The permitted 'actions' are:
* 0 - Add to a local tag service.
@ -877,8 +925,8 @@ Some example requests:
```json title="Adding some tags to a file"
{
"hash" : "df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56",
"service_names_to_tags" : {
"my tags" : ["character:supergirl", "rating:safe"]
"service_keys_to_tags" : {
"6c6f63616c2074616773" : ["character:supergirl", "rating:safe"]
}
}
```
@ -888,9 +936,9 @@ Some example requests:
"df2a7b286d21329fc496e3aa8b8a08b67bb1747ca32749acb3f5d544cbfc0f56",
"f2b022214e711e9a11e2fcec71bfd524f10f0be40c250737a7861a5ddd3faebf"
],
"service_names_to_tags" : {
"my tags" : ["process this"],
"public tag repository" : ["creator:dandon fuga"]
"service_keys_to_tags" : {
"6c6f63616c2074616773" : ["process this"],
"ccb0cf2f9e92c2eb5bd40986f72a339ef9497014a5fb8ce4cea6d6c9837877d9" : ["creator:dandon fuga"]
}
}
```
@ -914,7 +962,7 @@ Some example requests:
This last example is far more complicated than you will usually see. Pend rescinds and petition rescinds are not common. Petitions are also quite rare, and gathering a good petition reason for each tag is often a pain.
Note that the enumerated status keys in the service\_names\_to\_actions\_to_tags structure are strings, not ints (JSON does not support int keys for Objects).
Note that the enumerated status keys in the service\_keys\_to\_actions\_to_tags structure are strings, not ints (JSON does not support int keys for Objects).
Response description:
: 200 and no content.
@ -1028,16 +1076,12 @@ Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* `tags`: (a list of tags you wish to search for)
* `file_service_name`: (optional, selective, string, the file domain on which to search)
* `file_service_key`: (optional, selective, hexadecimal, the file domain on which to search)
* `tag_service_name`: (optional, selective, string, the tag domain on which to search)
* `tag_service_key`: (optional, selective, hexadecimal, the tag domain on which to search)
* `file_sort_type`: (optional, integer, the results sort method)
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `tag_service_key`: (optional, hexadecimal, the tag domain on which to search, defaults to 'all known tags')
* `file_sort_type`: (optional, integer, the results sort method)
* `file_sort_asc`: true or false (optional, the results sort order)
* `return_file_ids`: true or false (optional, default true, returns file id results)
* `return_hashes`: true or false (optional, default false, returns hex hash results)
* _`system_inbox`: true or false (obsolete, use tags)_
* _`system_archive`: true or false (obsolete, use tags)_
``` title='Example request for 16 files (system:limit=16) in the inbox with tags "blue eyes", "blonde hair", and "кино"'
/get_files/search_files?tags=%5B%22blue%20eyes%22%2C%20%22blonde%20hair%22%2C%20%22%5Cu043a%5Cu0438%5Cu043d%5Cu043e%22%2C%20%22system%3Ainbox%22%2C%20%22system%3Alimit%3D16%22%5D
@ -1046,8 +1090,6 @@ Arguments (in percent-encoded JSON):
If the access key's permissions only permit search for certain tags, at least one positive whitelisted/non-blacklisted tag must be in the "tags" list or this will 403. Tags can be prepended with a hyphen to make a negated tag (e.g. "-green eyes"), but these will not be checked against the permissions whitelist.
File searches occur in the `display` `tag_display_type`. If you want to pair autocomplete tag lookup from [/search_tags](#add_tags_search_tags) to this file search (e.g. for making a standard booru search interface), then make sure you are searching `display` tags there.
Wildcards and namespace searches are supported, so if you search for 'character:sam*' or 'series:*', this will be handled correctly clientside.
**Many system predicates are also supported using a text parser!** The parser was designed by a clever user for human input and allows for a certain amount of error (e.g. ~= instead of ≈, or "isn't" instead of "is not") or requires more information (e.g. the specific hashes for a hash lookup). **Here's a big list of examples that are supported:**
@ -1109,8 +1151,10 @@ Wildcards and namespace searches are supported, so if you search for 'character:
* system:file service currently in my files
* system:file service is not currently in my files
* system:file service is not pending to my files
* system:number of file relationships = 2 duplicates
* system:number of file relationships > 10 potential duplicates
* system:num file relationships < 3 alternates
* system:number of file relationships > 3 false positives
* system:num file relationships > 3 false positives
* system:ratio is wider than 16:9
* system:ratio is 16:9
* system:ratio taller than 1:1
@ -1153,7 +1197,9 @@ Makes:
* samus aran OR lara croft
* system:height > 1000
The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'my files' and 'all known tags', and you can use either key or name as in [GET /get_services](#get_services), whichever is easiest for your situation.
The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'all my files' and 'all known tags'.
File searches occur in the `display` `tag_display_type`. If you want to pair autocomplete tag lookup from [/search_tags](#add_tags_search_tags) to this file search (e.g. for making a standard booru search interface), then make sure you are searching `display` tags there.
file\_sort\_asc is 'true' for ascending, and 'false' for descending. The default is descending.
@ -1242,24 +1288,20 @@ _Get metadata about files in the client._
Restricted access:
: YES. Search for Files permission needed. Additional search permission limits may apply.
Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* `file_id`: (selective, a numerical file id)
* `file_ids`: (selective, a list of numerical file ids)
* `hash`: (selective, a hexadecimal SHA256 hash)
* `hashes`: (selective, a list of hexadecimal SHA256 hashes)
* [files](#parameters_files)
* `create_new_file_ids`: true or false (optional if asking with hash(es), defaulting to false)
* `only_return_identifiers`: true or false (optional, defaulting to false)
* `only_return_basic_information`: true or false (optional, defaulting to false)
* `detailed_url_information`: true or false (optional, defaulting to false)
* `include_notes`: true or false (optional, defaulting to false)
* `hide_service_keys_tags`: **Will be set default false and deprecated soon!** true or false (optional, defaulting to false)
* `hide_service_names_tags`: **Deprecated, will be deleted soon!** true or false (optional, defaulting to true)
* `hide_service_keys_tags`: **Deprecated, will be deleted soon!** true or false (optional, defaulting to true)
You need one of file_ids or hashes. If your access key is restricted by tag, you cannot search by hashes, and **the file_ids you search for must have been in the most recent search result**.
If your access key is restricted by tag, **the files you search for must have been in the most recent search result**.
``` title="Example request for two files with ids 123 and 4567"
/get_files/file_metadata?file_ids=%5B123%2C%204567%5D
@ -1309,8 +1351,6 @@ Response:
"has_human_readable_embedded_metadata" : true,
"has_icc_profile" : true,
"known_urls" : [],
"service_keys_to_statuses_to_tags" : {},
"service_keys_to_statuses_to_display_tags" : {},
"tags" : {
"6c6f63616c2074616773" : {
"name" : "local tags",
@ -1400,34 +1440,6 @@ Response:
"https://img2.gelbooru.com//images/80/c8/80c8646b4a49395fb36c805f316c49a9.jpg",
"http://origin-orig.deviantart.net/ed31/f/2019/210/7/8/beachqueen_samus_by_dandonfuga-ddcu1xg.jpg"
],
"service_keys_to_statuses_to_tags" : {
"6c6f63616c2074616773" : {
"0" : ["samus favourites"],
"2" : ["process this later"]
},
"37e3849bda234f53b0e9792a036d14d4f3a9a136d1cb939705dbcd5287941db4" : {
"0" : ["blonde_hair", "blue_eyes", "looking_at_viewer"],
"1" : ["bodysuit"]
},
"616c6c206b6e6f776e2074616773" : {
"0" : ["samus favourites", "blonde_hair", "blue_eyes", "looking_at_viewer"],
"1" : ["bodysuit"]
}
},
"service_keys_to_statuses_to_display_tags" : {
"6c6f63616c2074616773" : {
"0" : ["samus favourites", "favourites"],
"2" : ["process this later"]
},
"37e3849bda234f53b0e9792a036d14d4f3a9a136d1cb939705dbcd5287941db4" : {
"0" : ["blonde hair", "blue_eyes", "looking at viewer"],
"1" : ["bodysuit", "clothing"]
},
"616c6c206b6e6f776e2074616773" : {
"0" : ["samus favourites", "favourites", "blonde hair", "blue_eyes", "looking at viewer"],
"1" : ["bodysuit", "clothing"]
}
},
"tags" : {
"6c6f63616c2074616773" : {
"name" : "local tags",
@ -1530,17 +1542,15 @@ Size is in bytes. Duration is in milliseconds, and may be an int or a float.
`ipfs_multihashes` stores the ipfs service key to any known multihash for the file.
The `thumbnail_width` and `thumbnail_height` are a generally reliable prediction but aren't a promise. The actual thumbnail you get from [/get_files/thumbnail](#get_files_thumbnail) will be different if the user hasn't looked at it since changing their thumbnail options. You only get these rows for files that hydrus actually generates an actual thumbnail for. Things like pdf won't have it. You can use your own thumb, or ask the api and it'll give you a fixed fallback; those are mostly 200x200, but you can and should size them to whatever you want.
The `thumbnail_width` and `thumbnail_height` are a generally reliable prediction but aren't a promise. The actual thumbnail you get from [/get\_files/thumbnail](#get_files_thumbnail) will be different if the user hasn't looked at it since changing their thumbnail options. You only get these rows for files that hydrus actually generates a thumbnail for. Things like pdf won't have it. You can use your own thumb, or ask the api and it'll give you a fixed fallback; those are mostly 200x200, but you can and should size them to whatever you want.
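
If you just want the thumbnail itself, a minimal sketch under the same assumptions as the earlier example (local client, placeholder access key) looks like this. The bytes you get back may be a jpeg or a png, or the fixed fallback for filetypes without thumbnails.

```python title="Sketch: fetching a thumbnail"
import urllib.request

API = 'http://127.0.0.1:45869'
ACCESS_KEY = '0123456789abcdef...'  # placeholder

request = urllib.request.Request(
    f'{API}/get_files/thumbnail?file_id=123',
    headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY }
)

# the bytes may be a jpeg or a png; the filename here is just a convenient name
with urllib.request.urlopen( request ) as response:
    with open( 'thumb_123_preview', 'wb' ) as f:
        f.write( response.read() )
```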
#### tags
The 'tags' structures are undergoing transition. Previously, this was a mess of different Objects in different domains, all `service_xxx_to_xxx_tags`, but they are being transitioned to the combined `tags` Object.
`hide_service_names_tags` is deprecated and will be deleted soon. When set to `false`, it shows the old `service_names_to_statuses_to_tags` and `service_names_to_statuses_to_display_tags` Objects. The new `tags` structure now shows the service name--migrate to this asap.
`hide_service_keys_tags` is deprecated and will be deleted soon. When set to `false`, it shows the old `service_keys_to_statuses_to_tags` and `service_keys_to_statuses_to_display_tags` Objects.
`hide_service_keys_tags` will soon be set to default `false` and deprecated in the same way. Move to `tags` please!
The `tags` structures are similar to the [/add_tags/add_tags](#add_tags_add_tags) scheme, excepting that the status numbers are:
The `tags` structures are similar to the [/add\_tags/add\_tags](#add_tags_add_tags) scheme, excepting that the status numbers are:
* 0 - current
* 1 - pending
@ -1550,7 +1560,7 @@ The `tags` structures are similar to the [/add_tags/add_tags](#add_tags_add_tags
!!! note
Since JSON Object keys must be strings, these status numbers are strings, not ints.
To learn more about service names and keys on a client, use the [/get_services](#get_services) call.
To learn more about service names and keys on a client, use the [/get\_services](#get_services) call.
While the 'storage_tags' represent the actual tags stored on the database for a file, 'display_tags' reflect how tags appear in the UI, after siblings are collapsed and parents are added. If you want to edit a file's tags, refer to the storage tags. If you want to render to the user, use the display tags. The display tag calculation logic is very complicated; if the storage tags change, do not try to guess the new display tags yourself--just ask the API again.
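
To make that distinction concrete, here is a small sketch of pulling both lists out of one entry of the `metadata` response. The entry below is a pared-down stand-in shaped like the example response above; real entries carry many more fields.

```python title="Sketch: storage tags vs display tags"
# a pared-down stand-in for one Object from the 'metadata' list
metadata_entry = {
    'tags' : {
        '6c6f63616c2074616773' : {
            'name' : 'local tags',
            'storage_tags' : { '0' : [ 'samus favourites' ] },
            'display_tags' : { '0' : [ 'samus favourites', 'favourites' ] }
        }
    }
}

service_info = metadata_entry[ 'tags' ][ '6c6f63616c2074616773' ]

# status keys are strings because JSON Object keys must be strings; '0' is 'current'
current_storage_tags = service_info[ 'storage_tags' ].get( '0', [] )
current_display_tags = service_info[ 'display_tags' ].get( '0', [] )

print( 'edit these:', current_storage_tags )    # what is actually stored
print( 'render these:', current_display_tags )  # after siblings and parents
```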
@ -1665,7 +1675,7 @@ This refers to the File Relationships system, which includes 'potential duplicat
This system is pending significant rework and expansion, so please do not get too married to some of the routines here. I am mostly just exposing my internal commands, so things are a little ugly/hacked. I expect duplicate and alternate groups to get some form of official identifier in future, which may end up being the way to refer and edit things here.
Also, at least for now, 'Manage File Relationships' permission is not going to be bound by the search permission restrictions that normal file search does. Getting this permission allows you to search anything. I expect to add this permission filtering tech in future, particularly for file domains.
Also, at least for now, 'Manage File Relationships' permission is not going to be bound by the search permission restrictions that normal file search does. Getting this file relationship management permission allows you to search anything.
_There is more work to do here, including adding various 'dissolve'/'undo' commands to break groups apart._
@ -1680,10 +1690,8 @@ Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* `file_id`: (selective, a numerical file id)
* `file_ids`: (selective, a list of numerical file ids)
* `hash`: (selective, a hexadecimal SHA256 hash)
* `hashes`: (selective, a list of hexadecimal SHA256 hashes)
* [files](#parameters_files)
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
``` title="Example request"
/manage_file_relationships/get_file_relationships?hash=ac940bb9026c430ea9530b4f4f6980a12d9432c2af8d9d39dfc67b05d91df11d
@ -1714,7 +1722,9 @@ Response:
`is_king` and `king` relate to which file is the set best of a group. The king is usually the best representative of a group if you need to do comparisons between groups, and the 'get some pairs to filter'-style commands usually try to select the kings of the various to-be-compared duplicate groups.
**It is possible for the king to not be available, in which case `king` is null.** The king can be unavailable in several duplicate search contexts, generally when you have the option to search/filter and it is outside of that domain. For this request, the king will usually be available unless the user has deleted it. You have to deal with the king being unavailable--in this situation, your best bet is to just use the file itself as its own representative.
The relationships you get are filtered by the file domain. If you set the file domain to 'all known files', you will get every relationship a file has, including all deleted files, which is often less useful than you would think. The default, 'all my files' is usually most useful.
**It is possible for the king to not be available, in which case `king` is null.** The king can be unavailable in several duplicate search contexts, generally when it is outside of the set file domain. For the default domain, 'all my files', the king will be available unless the user has deleted it. You have to deal with the king being unavailable--in this situation, your best bet is to just use the file itself as its own representative.
A file that has no duplicates is considered to be in a duplicate group of size 1 and thus is always its own king.
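
As a rough sketch of the king fallback described above, using the same local-client and placeholder-key assumptions as the earlier examples, and assuming the response's top-level `file_relationships` Object:

```python title="Sketch: picking a usable group representative"
import json
import urllib.request

API = 'http://127.0.0.1:45869'
ACCESS_KEY = '0123456789abcdef...'  # placeholder

file_hash = 'ac940bb9026c430ea9530b4f4f6980a12d9432c2af8d9d39dfc67b05d91df11d'

request = urllib.request.Request(
    f'{API}/manage_file_relationships/get_file_relationships?hash={file_hash}',
    headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY }
)

with urllib.request.urlopen( request ) as response:
    relationships = json.loads( response.read() )[ 'file_relationships' ]

info = relationships[ file_hash ]

# if the king is unavailable, fall back to the file itself as its own representative
representative = info[ 'king' ] if info[ 'king' ] is not None else file_hash
```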
@ -1740,6 +1750,7 @@ Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `tag_service_key_1`: (optional, default 'all known tags', a hex tag service key)
* `tags_1`: (optional, default system:everything, a list of tags you wish to search for)
* `tag_service_key_2`: (optional, default 'all known tags', a hex tag service key)
@ -1749,10 +1760,10 @@ Arguments (in percent-encoded JSON):
* `max_hamming_distance`: (optional, integer, default 4, the max 'search distance' of the pairs)
``` title="Example request"
/manage_file_relationships/get_potentials_count?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&search_type=1&pixel_duplicates=2&max_hamming_distance=0&max_num_pairs=50
/manage_file_relationships/get_potentials_count?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&potentials_search_type=1&pixel_duplicates=2&max_hamming_distance=0&max_num_pairs=50
```
`tag_service_key` and `tags` work the same as [/get\_files/search\_files](#get_files_search_files). The `_2` variants are only useful if the `potentials_search_type` is 2. For now the file domain is locked to 'all my files'.
`tag_service_key_x` and `tags_x` work the same as [/get\_files/search\_files](#get_files_search_files). The `_2` variants are only useful if the `potentials_search_type` is 2.
`potentials_search_type` and `pixel_duplicates` are enums:
@ -1789,6 +1800,7 @@ Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `tag_service_key_1`: (optional, default 'all known tags', a hex tag service key)
* `tags_1`: (optional, default system:everything, a list of tags you wish to search for)
* `tag_service_key_2`: (optional, default 'all known tags', a hex tag service key)
@ -1799,7 +1811,7 @@ Arguments (in percent-encoded JSON):
* `max_num_pairs`: (optional, integer, defaults to client's option, how many pairs to get in a batch)
``` title="Example request"
/manage_file_relationships/get_potential_pairs?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&search_type=1&pixel_duplicates=2&max_hamming_distance=0&max_num_pairs=50
/manage_file_relationships/get_potential_pairs?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&potentials_search_type=1&pixel_duplicates=2&max_hamming_distance=0&max_num_pairs=50
```
The search arguments work the same as [/manage\_file\_relationships/get\_potentials\_count](#manage_file_relationships_get_potentials_count).
@ -1833,6 +1845,7 @@ Required Headers: n/a
Arguments (in percent-encoded JSON):
:
* [file domain](#parameters_file_domain) (optional, defaults to 'all my files')
* `tag_service_key_1`: (optional, default 'all known tags', a hex tag service key)
* `tags_1`: (optional, default system:everything, a list of tags you wish to search for)
* `tag_service_key_2`: (optional, default 'all known tags', a hex tag service key)
@ -1842,7 +1855,7 @@ Arguments (in percent-encoded JSON):
* `max_hamming_distance`: (optional, integer, default 4, the max 'search distance' of the files)
``` title="Example request"
/manage_file_relationships/get_random_potentials?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&search_type=1&pixel_duplicates=2&max_hamming_distance=0
/manage_file_relationships/get_random_potentials?tag_service_key_1=c1ba23c60cda1051349647a151321d43ef5894aacdfb4b4e333d6c4259d56c5f&tags_1=%5B%22dupes_to_process%22%2C%20%22system%3Awidth%3C400%22%5D&potentials_search_type=1&pixel_duplicates=2&max_hamming_distance=0
```
The arguments work the same as [/manage\_file\_relationships/get\_potentials\_count](#manage_file_relationships_get_potentials_count), with the caveat that `potentials_search_type` has special logic:
@ -1881,13 +1894,20 @@ Required Headers:
Arguments (in JSON):
:
* `pair_rows`: (a list of lists)
* `relationships`: (a list of Objects, one for each file-pair being set)
Each row is:
Each Object is:
* [ relationship, hash_a, hash_b, do_default_content_merge, delete_a, delete_b ]
* `hash_a`: (a hexadecimal SHA256 hash)
* `hash_b`: (a hexadecimal SHA256 hash)
* `relationship`: (integer enum for the relationship being set)
* `do_default_content_merge`: (bool)
* `delete_a`: (optional, bool, default false)
* `delete_b`: (optional, bool, default false)
Where `relationship` is one of this enum:
`hash_a` and `hash_b` are normal hex SHA256 hashes for your file pair.
`relationship` is one of this enum:
* 0 - set as potential duplicates
* 1 - set as false positives
@ -1898,18 +1918,33 @@ Where `relationship` is one of this enum:
2, 4, and 7 all make the files 'duplicates' (8 under `get_file_relationships`), which, specifically, merges the two files' duplicate groups. 'same quality' has different duplicate content merge options to the better/worse choices, but it ultimately sets A>B. You obviously don't have to use 'B is better' if you prefer just to swap the hashes. Do what works for you.
`hash_a` and `hash_b` are normal hex SHA256 hashes for your file pair.
`do_default_content_merge` sets whether the user's duplicate content merge options should be loaded and applied to the files along with the relationship. Most operations in the client do this automatically, so the user may expect it to apply, but if you want to do content merge yourself, set this to false.
`do_default_content_merge` is a boolean setting whether the user's duplicate content merge options should be loaded and applied to the files along with the duplicate status. Most operations in the client do this automatically, so the user may expect it to apply, but if you want to do content merge yourself, set this to false.
`delete_a` and `delete_b` are booleans that obviously select whether to delete A and/or B. You can also do this externally if you prefer.
`delete_a` and `delete_b` are booleans that select whether to delete A and/or B in the same operation as setting the relationship. You can also do this externally if you prefer.
```json title="Example request body"
{
"pair_rows" : [
[ 4, "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2", "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845", true, false, true ],
[ 4, "22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2", "65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423", true, false, true ],
[ 2, "0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec", "5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7", true, false, false ]
"relationships" : [
{
"hash_a" : "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2",
"hash_b" : "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845",
"relationship" : 4,
"do_default_content_merge" : true,
"delete_b" : true
},
{
"hash_a" : "22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2",
"hash_b" : "65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423",
"relationship" : 4,
"do_default_content_merge" : true,
"delete_b" : true
},
{
"hash_a" : "0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec",
"hash_b" : "5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7",
"relationship" : 2,
"do_default_content_merge" : true
}
]
}
```
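
For completeness, a minimal sketch of POSTing that body with Python's standard library, under the same local-client and placeholder-key assumptions and assuming the `/manage_file_relationships/set_file_relationships` endpoint path; note the JSON content type header.

```python title="Sketch: sending the request body above"
import json
import urllib.request

API = 'http://127.0.0.1:45869'
ACCESS_KEY = '0123456789abcdef...'  # placeholder

body = {
    'relationships' : [
        {
            'hash_a' : 'b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2',
            'hash_b' : 'bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845',
            'relationship' : 4,
            'do_default_content_merge' : True,
            'delete_b' : True
        }
    ]
}

request = urllib.request.Request(
    f'{API}/manage_file_relationships/set_file_relationships',
    data = json.dumps( body ).encode( 'utf-8' ),
    headers = {
        'Hydrus-Client-API-Access-Key' : ACCESS_KEY,
        'Content-Type' : 'application/json'
    },
    method = 'POST'
)

urllib.request.urlopen( request )  # expect 200 with no content
```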
@ -1917,7 +1952,7 @@ Where `relationship` is one of this enum:
Response:
: 200 with no content.
If you try to add an invalid or redundant relationship, for instance setting that files that are already duplicates are potential duplicates, no changes are made.
If you try to add an invalid or redundant relationship, for instance setting files that are already duplicates as potential duplicates, no changes are made.
This is the file relationships request that is probably most likely to change in future. I may implement content merge options. I may move from file pairs to group identifiers. When I expand alternates, those file groups are going to support more variables.
@ -1934,10 +1969,7 @@ Required Headers:
Arguments (in JSON):
:
* `file_id`: (selective, a numerical file id)
* `file_ids`: (selective, a list of numerical file ids)
* `hash`: (selective, a hexadecimal SHA256 hash)
* `hashes`: (selective, a list of hexadecimal SHA256 hashes)
* [files](#parameters_files)
```json title="Example request body"
{
@ -2061,36 +2093,42 @@ Response:
"pages" : {
"name" : "top pages notebook",
"page_key" : "3b28d8a59ec61834325eb6275d9df012860a1ecfd9e1246423059bc47fb6d5bd",
"page_state" : 0,
"page_type" : 10,
"selected" : true,
"pages" : [
{
"name" : "files",
"page_key" : "d436ff5109215199913705eb9a7669d8a6b67c52e41c3b42904db083255ca84d",
"page_state" : 0,
"page_type" : 6,
"selected" : false
},
{
"name" : "thread watcher",
"page_key" : "40887fa327edca01e1d69b533dddba4681b2c43e0b4ebee0576177852e8c32e7",
"page_state" : 0,
"page_type" : 9,
"selected" : false
},
{
"name" : "pages",
"page_key" : "2ee7fa4058e1e23f2bd9e915cdf9347ae90902a8622d6559ba019a83a785c4dc",
"page_state" : 0,
"page_type" : 10,
"selected" : true,
"pages" : [
{
"name" : "urls",
"page_key" : "9fe22cb760d9ee6de32575ed9f27b76b4c215179cf843d3f9044efeeca98411f",
"page_state" : 0,
"page_type" : 7,
"selected" : true
},
{
"name" : "files",
"page_key" : "2977d57fc9c588be783727bcd54225d577b44e8aa2f91e365a3eb3c3f580dc4e",
"page_state" : 0,
"page_type" : 6,
"selected" : false
}
@ -2101,7 +2139,11 @@ Response:
}
```
The page types are as follows:
`name` is the full text on the page tab.
`page_key` is a unique identifier for the page. It will stay the same for a particular page throughout the session, but new ones are generated on a session reload.
`page_type` is as follows:
* 1 - Gallery downloader
* 2 - Simple downloader
@ -2112,10 +2154,20 @@ Response:
* 8 - Duplicates
* 9 - Thread watcher
* 10 - Page of pages
The top page of pages will always be there, and always selected. 'selected' means which page is currently in view and will propagate down other page of pages until it terminates. It may terminate in an empty page of pages, so do not assume it will end on a 'media' page.
The 'page_key' is a unique identifier for the page. It will stay the same for a particular page throughout the session, but new ones are generated on a client restart or other session reload.
`page_state` is as follows:
* 0 - ready
* 1 - initialising
* 2 - searching/loading
* 3 - search cancelled
Most pages will be 0, normal/ready, at all times. Large pages will start in an 'initialising' state for a few seconds, which means their session-saved thumbnails aren't loaded yet. Search pages will enter 'searching' after a refresh or search change and will either return to 'ready' when the search is complete, or fall to 'search cancelled' if the search was interrupted (usually this means the user clicked the 'stop' button that appears after some time).
`selected` means which page is currently in view. It will propagate down the page of pages until it terminates. It may terminate in an empty page of pages, so do not assume it will end on a media page.
The top page of pages will always be there, and always selected.
### **GET `/manage_pages/get_page_info`** { id="manage_pages_get_page_info" }
@ -2145,6 +2197,7 @@ Response description
"page_info" : {
"name" : "threads",
"page_key" : "aebbf4b594e6986bddf1eeb0b5846a1e6bc4e07088e517aff166f1aeb1c3c9da",
"page_state" : 0,
"page_type" : 3,
"management" : {
"multiple_watcher_import" : {
@ -2206,6 +2259,8 @@ Response description
}
```
`name`, `page_key`, `page_state`, and `page_type` are as in [/manage\_pages/get\_pages](#manage_pages_get_pages).
As you can see, even the 'simple' mode can get very large. Imagine that response for a page watching 100 threads! Turning simple mode off will display every import item, gallery log entry, and all hashes in the media (thumbnail) panel.
For this first version, the five importer pages--hdd import, simple downloader, url downloader, gallery page, and watcher page--all give rich info based on their specific variables. The first three only have one importer/gallery log combo, but the latter two of course can have multiple. The "imports" and "gallery_log" entries are all in the same data format.
@ -2225,10 +2280,7 @@ Required Headers:
Arguments (in JSON):
:
* `page_key`: (the page key for the page you wish to add files to)
* `file_id`: (selective, a numerical file id)
* `file_ids`: (selective, a list of numerical file ids)
* `hash`: (selective, a hexadecimal SHA256 hash)
* `hashes`: (selective, a list of hexadecimal SHA256 hashes)
* [files](#parameters_files)
The files you set will be appended to the given page, just like a thumbnail drag and drop operation. The page key is the same as fetched in the [/manage\_pages/get\_pages](#manage_pages_get_pages) call.
@ -2295,6 +2347,8 @@ The page key is the same as fetched in the [/manage\_pages/get\_pages](#manage_p
Response:
: 200 with no content. If the page key is not found, this will 404.
Poll the `page_state` in [/manage\_pages/get\_pages](#manage_pages_get_pages) or [/manage\_pages/get\_page\_info](#manage_pages_get_page_info) to see when the search is complete.
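
A rough sketch of that polling loop, under the same local-client and placeholder-key assumptions as the earlier examples; it walks the nested `pages` structure until it finds the page key and waits for `page_state` to return to 0 (ready):

```python title="Sketch: waiting for a page search to finish"
import json
import time
import urllib.request

API = 'http://127.0.0.1:45869'
ACCESS_KEY = '0123456789abcdef...'  # placeholder
PAGE_KEY = '9fe22cb760d9ee6de32575ed9f27b76b4c215179cf843d3f9044efeeca98411f'

def find_page( page, page_key ):
    # recurse down pages of pages until we find the page we want
    if page[ 'page_key' ] == page_key:
        return page
    for sub_page in page.get( 'pages', [] ):
        result = find_page( sub_page, page_key )
        if result is not None:
            return result
    return None

while True:
    request = urllib.request.Request(
        f'{API}/manage_pages/get_pages',
        headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY }
    )
    with urllib.request.urlopen( request ) as response:
        top_page = json.loads( response.read() )[ 'pages' ]
    page = find_page( top_page, PAGE_KEY )
    if page is not None and page[ 'page_state' ] == 0:
        break
    time.sleep( 1 )
```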
## Managing the Database
### **POST `/manage_database/lock_on`** { id="manage_database_lock_on" }

File diff suppressed because it is too large Load Diff

View File

@ -19,7 +19,7 @@ Drag-and-drop one or more folders or files into Hydrus.
This will open the `import files` window. Here you can add files or folders, or delete files from the import queue. Let Hydrus parse what it will import and then look over the options. By default the option to delete original files after successful import (if a file is ignored for any reason or is already present in Hydrus, for example) is not checked--activate it at your own risk. In `file import options` you can find some settings for minimum and maximum file size, resolution, and whether to import previously deleted files or not.
From here there's two options: `import now` which will just import as is, and `add tags before import >>` which lets you set up some rules to add tags to files on import.
Examples are keeping filename as a tag, add folders as tag (useful if you have some sort of folder based organisation scheme), or load tags from an accompanying .txt file generated by some other program.
Examples are keeping the filename as a tag, adding folders as tags (useful if you have some sort of folder-based organisation scheme), or loading tags from an [accompanying text file](advanced_sidecars.md) generated by some other program.
Once you're done click apply (or `import now`) and Hydrus will start processing the files. Exact duplicates are not imported, so if you had dupes spread out you will end up with only one file in the end. If files *look* similar but Hydrus imports both, then that's a job for the [dupe filter](duplicates.md), as there is some difference even if you can't tell it by eye. A common one is compression giving files different file sizes but otherwise identical appearances, or files with extra metadata baked into them.
@ -41,7 +41,7 @@ If you use a drag and drop to open a file inside an image editing program, remem
You can also copy the files by right-clicking and going down `share -> copy -> files` and then pasting the files where you want them.
### Export
You can also export files with tags, either in filename or as a side-car .txt file by right-clicking and going down `share -> export -> files`. Have a look at the settings and then press `export`.
You can also export files with tags, either in filename or as a [sidecar file](advanced_sidecars.md) by right-clicking and going down `share -> export -> files`. Have a look at the settings and then press `export`.
You can create folders to export files into by using backslashes on Windows (`\`) and slashes on Linux (`/`) in the filename. This can be combined with the patterns listed in the pattern shortcut button dropdown. As example `[series]\{filehash}` will export files into folders named after the `series:` namespaced tags on the files, all files tagged with one series goes into one folder, files tagged with another series goes into another folder as seen in the image below.
![](images/export_files.png)
@ -56,12 +56,12 @@ Under `file -> import and export folders` you'll find options for setting up aut
### Import folders
![](images/import_folder.png)
To import tags you have to add a tag service under the `filename tagging` section.
Like with a manual import, if you wish you can import tags by parsing filenames or [loading sidecars](advanced_sidecars.md).
### Export folders
![](images/export_folder.png)
You can currently not export tags in a .txt file with export folders like you can when doing normal exports, unfortunately.
Like with manual export, you can set the filenames using a tag pattern, and you can [export to sidecars](advanced_sidecars.md) too.
## Importing and exporting tags
While you can import and export tags together with images sometimes you just don't want to deal with the files.

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 67 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 92 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 123 KiB

View File

@ -34,6 +34,81 @@
<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_514"><a href="#version_514">version 514</a></h2>
<ul>
<li><h3>downloaders</h3></li>
<li>twitter took down the API we were using, breaking all our nice twitter downloaders! argh!</li>
<li>a user has figured out a basic new downloader that grabs the tweets amongst the first twenty tweets-and-retweets of an account. yes, only the first twenty max, and usually fewer. because this is a big change, the client will ask about it when you update. if you have some complicated situation where you are working on the old default twitter downloaders and don't want them deleted, you can select 'no' on the dialog it throws up, but everyone else wants to say 'yes'. then check your twitter subs: make sure they moved to the new downloader, and you probably want to make them check more frequently too.</li>
<li>given the rate of changes at twitter, I think we can expect more changes and blocks in future. I don't know whether nitter will be a viable alternative, so if the artists you like end up on a nice simple booru _anywhere_, I strongly recommend just moving there. twitter appears to be moving in an explicitly non-third-party-friendly direction</li>
<li>thanks to a user's work, the 'danbooru - get webm ugoira' parser is fixed!</li>
<li>thanks to a user's work, the deviant art parser is updated to get the highest res image in more situations!</li>
<li>thanks to a user's work, the pixiv downloader now gets the artist note, in japanese (and translated, if there is one), and a 'medium:ai generated' tag!</li>
<li><h3>sidecars</h3></li>
<li>I wrote some sidecar help here! https://hydrusnetwork.github.io/hydrus/advanced_sidecars.html</li>
<li>when the client parses files for import, the 'does this look like a sidecar?' test now also checks that the base component of the base filename (e.g. 'Image123' from 'Image123.jpg.txt') actually appears in the list of non-txt/json/xml ext files. a random yo.txt file out of nowhere will now be inspected in case it is secretly a jpeg again, for good or ill</li>
<li>when you drop some files on the client, the number of files skipped because they looked like sidecars is now stated in the status label</li>
<li>fixed a typo bug that meant tags imported from sidecars were not being properly cleaned, despite preview appearance otherwise. for instance ':)', which in hydrus needs to be secretly stored as '::)', was being imported as ')'</li>
<li>as a special case, tags that in hydrus are secretly '::)' will be converted to ':)' on export to sidecar too, the inverse of the above problem. there may be some other tag cleaning quirks to undo here, so let me know what you run into</li>
<li><h3>related tags overhaul</h3></li>
<li>the 'related tags' suggestion system, turned on under _options->tag suggestions_, has several changes, including some prototype tech I'd love feedback on</li>
<li>first off, there are two new search buttons, 'new 1' and 'new 2' ('new 2' is available on repositories only). these use an upgraded statistical search and scoring system that a user worked on and sent in. I have butchered his specific namespace searching system to something more general/flexible and easy for me to maintain, but it works better and more comprehensibly than my old method! give it a go and let me know how each button does--the first one will be fast but less useful on the PTR, the second will be slower but generally give richer results (although it cannot do tags with too-high count)</li>
<li>the new search routine works on multiple files, so 'related tags' now shows on tag dialogs launched from a selection of thumbnails!</li>
<li>also, all the related search buttons now search any selection of tags you make!!! so if you can't remember that character's name, just click on the series or another character they are often with and hit the search, and you should get a whole bunch appear</li>
<li>I am going to keep working on this in the future. the new buttons will become the only buttons, I'll try and mitigate the prototype search limitations, add some cancel tech, move to a time-based search length like the current buttons, and I'll add more settings, including for filtering so we aren't looking up related tags for 'page:x' and so on. I'm interested in knowing how you get on with IRL data. are there too many recommendations (is the tolerance too high?)? is the sorting good (is the stuff at the top relevant or often just noise?)?</li>
<li><h3>misc</h3></li>
<li>all users can now copy their service keys (which are a technical non-changing hex identifier for your client's services) from the review services window--advanced mode is no longer needed. this may be useful as the client api transitions to service keys</li>
<li>when a job in the downloader search log generates new jobs (e.g. fetches the next page), the new job(s) are now inserted after the parent. previously, they were appended to the end of the list. this changes how ngugs operate, converting their searches from interleaved to sequential!</li>
<li>restarting search log jobs now also places the new job after the restarted job</li>
<li>when you create a new export folder, if you have default metadata export sidecar settings from a previous manual file export, the program now asks if you want those for the new export folder or an empty list. previously, it just assigned the saved default, which could be jarring if it was saved from ages ago</li>
<li>added a migration guide to the running from source help. also brushed up some language and fixed a bunch of borked title weights in that document</li>
<li>the max initial and periodic file limits in subscriptions is now 50k when in advanced mode. I can't promise that would be nice though!</li>
<li>the file history chart no longer says that inbox and delete time tracking are new</li>
<li><h3>misc fixes</h3></li>
<li>fixed a cursor type detection test that was stopping the cursor from hiding immediately when you do a media viewer drag in Qt6</li>
<li>fixed an issue where 'clear deletion record' calls were not deleting from the newer 'all my files' domain. the erroneous extra records will be searched for and scrubbed on update</li>
<li>fixed the issue where if you had the new 'unnamespaced input gives (any namespace) wildcard results' search option on, you couldn't add any novel tags in WRITE autocomplete contexts like 'manage tags'!!! it could only offer the automatically converted wildcard tags as suggested input, which of course aren't appropriate for a WRITE context. the way I ultimately fixed this was horrible; the whole thing needs more work to deal with clever logic like this better, so let me know if you get any more trouble here</li>
<li>I think I fixed an infinite hang when trying to add certain siblings in manage tag siblings. I believe this was occurring when the dialog was testing if the new pair would create a loop when the sibling structure already contains a loop. now it throws up a message and breaks the test</li>
<li>fixed an issue where certain system:filetype predicates would spawn apparent duplicates of themselves instead of removing on double-click. images+audio+video+swf+pdf was one example. it was a 'all the image types' vs 'list of (all the) image types' conversion/comparison/sorting issue</li>
<li><h3>client api</h3></li>
<li>**this is later than I expected, but as was planned last year, I am clearing up several obsolete parameters and data structures this week. mostly it is bad service name-identification that seemed simple or flexible to support but just added maintenance debt, induced bad implementation practices, and hindered future expansions. if you have a custom api script, please read on--and if you have not yet moved to the alternatives, do so before updating!**</li>
<li>**all `...service_name...` parameters are officially obsolete! they will still work via some legacy hacks, so old scripts shouldn't break, but they are no longer documented. please move to the `...service_key...` alternates as soon as reasonably possible (check out `/get_services` if you need to learn about service keys)**</li>
<li>**`/add_tags/get_tag_services` is removed! use `/get_services` instead!**</li>
<li>**`hide_service_names_tags`, previously made default true, is removed and its data structures `service_names_to_statuses_to_...` are also gone! move to the new `tags` structure.**</li>
<li>**`hide_service_keys_tags` is now default true. it will be removed in 4 weeks or so. same deal as with `service_names_to_statuses_to_...`--move to `tags`**</li>
<li>**`system_inbox` and `system_archive` are removed from `/get_files/search_files`! just use 'system:inbox/archive' in the tags list**</li>
<li>**the 'set_file_relationships' command from last week has been reworked to have a nicer Object parameter with a new name. please check the updated help!** normally I wouldn't change something so quick, but we are still in early prototype, so I'm ok shifting it (and the old method still works lmao, but I'll clear that code out in a few weeks, so please move over--the Object will be much nicer to expand in future, which I forgot about in v513)</li>
<li><h3>many Client API commands now support modern file domain objects, meaning you can search a UNION of file services and 'deleted-from' file services. affected commands are</h3></li>
<li>* /add_files/delete_files</li>
<li>* /add_files/undelete_files</li>
<li>* /add_tags/search_tags</li>
<li>* /get_files/search_files</li>
<li>* /manage_file_relationships/get_everything</li>
<li>a new `/get_service` call now lets you ask about an individual service by service name or service key, basically a parameterised /get_services</li>
<li>the `/manage_pages/get_pages` and `/manage_pages/get_page_info` calls now give the `page_state`, a new enum that says if the page is ready, initialised, searching, or search-cancelled</li>
<li>to reduce duplicate argument spam, the client api help now splits the complicated 'these files' and the new 'this file domain' arguments into their own sub-sections, and the commands that use them just point to those sub-sections. check it out--it makes sense when you look at it.</li>
<li>`/add_tags/add_tags` now raises 400 if you give an invalid content action (e.g. pending to a local tag service). previously it skipped these rows silently</li>
<li>added and updated unit tests and help for the above changes</li>
<li>client api version is now 41</li>
<li><h3>boring optimisation</h3></li>
<li>when you are looking at a search log or file log, if entries are added, removed, or moved around, all the log entries that have changed row # now update (previously it just sent a redraw signal for the new rows, not the second-order affected rows that were shuffled up/down). many access routines for these logs are sped up</li>
<li>file log status checking is completely rewritten. the ways it searches, caches and optimises the 'which is the next item with x status' queues is faster and requires far less maintenance. large import queues have less overhead, so the in and outs of general download work should scale up much better now</li>
<li>the main data cache that stores rendered images, image tiles, and thumbnails now maintains itself far more efficiently. there was a hellish O(n) overhead when adding or removing an item which has been reduced to constant time. this gonk was being spammed every few minutes during normal memory maintenance, when hundreds of thumbs can be purged at once. clients with tens of thousands of thumbnails in memory will maintain that list far more smoothly</li>
<li>physical file delete is now more efficient, requiring far fewer hard drive hits to delete a media file. it is also far less aggressive, with a new setting in _options->files and trash_ that sets how long to wait between individual file deletes, default 250ms. before, it was full LFG mode with minor delays every hundred/thousand jobs, and since it takes a write lock, it was lagging out thumbnail load when hitting a lot of work. the daemon here also shuts down faster if caught working during program shut down</li>
<li><h3>boring code cleanup</h3></li>
<li>refactored some parsing routines to be more flexible</li>
<li>added some more dictionary and enum type testing to the client api parameter parsing routines. error messages should be better!</li>
<li>improved how `/add_tags/add_tags` parsing works. ensuring both access methods check all types and report nicer errors</li>
<li>cleaned up the `/get_files/file_metadata` call's parsing, moving to the new generalised method and smoothing out some old code flow. it now checks hashes against the last search, too</li>
<li>cleaned up `/manage_pages/add_files` similarly</li>
<li>cleaned up how tag services are parsed and their errors reported in the client api</li>
<li>the client api is better about processing the file identifiers you give it in the same order you gave them</li>
<li>fixed bad 'potentials_search_type'/'search_type' inconsistency in the client api help examples</li>
<li>obviously a bunch of client api unit test and help cleanup to account for the obsolete stuff and various other changes here</li>
<li>updated a bunch of the client api unit tests to handle some of the new parsing</li>
<li>fixed the remaining 'randomly fail due to complex counting logic' potential count unit tests. turns out there were like seven more of them</li>
</ul>
</li>
<li>
<h2 id="version_513"><a href="#version_513">version 513</a></h2>
<ul>

View File

@ -186,7 +186,7 @@ The first start will take a little longer. It will operate just like a normal bu
If you want to redirect your database or use any other launch arguments, then copy 'client.command' to 'client-user.command' and edit it, inserting your desired db path. Run this instead of 'client.command'. New `git pull` commands will not affect 'client-user.command'.
## Simple Updating Guide
### Simple Updating Guide
To update, you do the same thing as for the extract builds.
@ -195,11 +195,33 @@ To update, you do the same thing as for the extract builds.
If you get a library version error when you try to boot, run the venv setup again. It is worth doing this anyway, every now and then, just to stay up to date.
## doing it manually { id="what_you_need" }
### Migrating from an Existing Install
Many users start out using one of the official built releases and decide to move to source. There is lots of information [here](database_migration.md) about how to migrate the database, but for your purposes, the simple method is this:
**If you never moved your database to another place and do not use the -d/--db_dir launch parameter**
1. Follow the above guide to get the source install working in a new folder on a fresh database
2. **MAKE A BACKUP OF EVERYTHING**
3. Delete everything from the source install's `db` directory.
4. Move your built release's entire `db` directory to the source.
5. Run your source release again--it should load your old db no problem!
6. Update your backup routine to point to the new source install location.
**If you moved your database to another location and use the -d/--db_dir launch parameter**
1. Follow the above guide to get the source install working in a new folder on a fresh database (without --db_dir)
2. **MAKE A BACKUP OF EVERYTHING**
3. Just to be neat, delete the .db files, .log files, and client_files folder from the source install's `db` directory.
4. Run the source install with --db_dir just as you would the built executable--it should load your old db no problem!
## Doing it Yourself { id="what_you_need" }
_This is for advanced users only._
Inside the extract should be client.py and server.py. You will be treating these basically the same as the 'client' and 'server' executables--you should be able to launch them the same way and they take the same launch parameters as the exes.
_If you have never used python before, do not try this. If the easy setup scripts failed for you and you don't know what happened, please contact hydev before trying this, as the thing that went wrong there will probably go much more wrong here._
You can also set up the environment yourself. Inside the extract should be client.py and server.py. You will be treating these basically the same as the 'client' and 'server' executables--with the right environment, you should be able to launch them the same way and they take the same launch parameters as the exes.
Hydrus needs a whole bunch of libraries, so let's now set your python up. I **strongly** recommend you create a virtual environment. It is easy and doesn't mess up your system python.
@ -210,14 +232,14 @@ To create a new venv environment:
* Open a terminal at your hydrus extract folder. If `python3` doesn't work, use `python`.
* `python3 -m pip install virtualenv` (if you need it)
* `python3 -m venv venv`
* `source venv/bin/activate`
* `source venv/bin/activate` (`CALL venv\Scripts\activate.bat` in Windows cmd)
* `python -m pip install --upgrade pip`
* `python -m pip install --upgrade wheel`
!!! info "venvs"
That `source venv/bin/activate` line turns on your venv. You should see your terminal note you are now in it. A venv is an isolated environment of python that you can install modules to without worrying about breaking something system-wide. **Ideally, you do not want to install python modules to your system python.**
That `source venv/bin/activate` line turns on your venv. You should see your terminal prompt note you are now in it. A venv is an isolated environment of python that you can install modules to without worrying about breaking something system-wide. **Ideally, you do not want to install python modules to your system python.**
This activate line will be needed every time you alter your venv or run the `client.py`/`server.py` files. You can easily tuck this venv activation line into a launch script--check the easy setup files for examples.
This activate line will be needed every time you alter your venv or run the `client.py`/`server.py` files. You can easily tuck this into a launch script--check the easy setup files for examples.
On Windows Powershell, the command is `.\venv\Scripts\activate`, but you may find the whole deal is done much easier in cmd than Powershell. When in Powershell, just type `cmd` to get an old fashioned command line. In cmd, the launch command is just `venv\scripts\activate.bat`, no leading period.
@ -227,9 +249,9 @@ To create a new venv environment:
python -m pip install -r requirements.txt
```
If you need different versions of libraries, check the cut-up requirements.txts the 'advanced' easy-setup uses in `install_dir/static/requirements/advanced`. Check and compare their contents to the main requirements.txt to see what is going on.
If you need different versions of libraries, check the cut-up requirements.txts the 'advanced' easy-setup uses in `install_dir/static/requirements/advanced`. Check and compare their contents to the main requirements.txt to see what is going on. You'll likely need the newer OpenCV on Python 3.10, for instance.
## Qt { id="qt" }
### Qt { id="qt" }
Qt is the UI library. You can run PySide2, PySide6, PyQt5, or PyQt6. A wrapper library called `qtpy` allows this. The default is PySide6, but if it is missing, qtpy will fall back to an available alternative. For PyQt5 or PyQt6, you need an extra Chart module, so go:
@ -245,17 +267,15 @@ If you want to set QT_API in a batch file, do this:
`set QT_API=pyqt6`
If you run Windows 8.1 or Ubuntu 18.04, you cannot run Qt6. Please try PySide2 or PyQt5.
If you run <= Windows 8.1 or Ubuntu 18.04, you cannot run Qt6. Try PySide2 or PyQt5.
## mpv support { id="mpv" }
### mpv { id="mpv" }
MPV is optional and complicated, but it is great, so it is worth the time to figure out!
As well as the python wrapper, 'python-mpv' (which is in the requirements.txt), you also need the underlying library. This is _not_ mpv the program, but 'libmpv', often called 'libmpv1'.
As well as the python wrapper, 'python-mpv' (which is in the requirements.txt), you also need the underlying dev library. This is _not_ mpv the program, but 'libmpv', often called 'libmpv1'.
For Windows, the dll builds are [here](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/), although getting a stable version can be difficult. Just put it in your hydrus base install directory. Check the links in the easy-setup guide above for good versions.
You can also just grab the 'mpv-1.dll'/'mpv-2.dll' I bundle in my extractable Windows release.
For Windows, the dll builds are [here](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/), although getting a stable version can be difficult. Just put it in your hydrus base install directory. Check the links in the easy-setup guide above for good versions. You can also just grab the 'mpv-1.dll'/'mpv-2.dll' I bundle in my extractable Windows release.
If you are on Linux, you can usually get 'libmpv1' like so:
@ -265,13 +285,11 @@ On macOS, you should be able to get it with `brew install mpv`, but you are like
Hit _help->about_ to see your mpv status. If you don't have it, it will present an error popup box with more info.
## SQLite { id="sqlite" }
### SQLite { id="sqlite" }
If you can, update python's SQLite--it'll improve performance.
If you can, update python's SQLite--it'll improve performance. The SQLite that comes with stock python is usually quite old, so you'll get a significant boost in speed. In some python deployments, the built-in SQLite is not compiled with neat features like Fast Text Search (FTS) that hydrus needs.
On Windows, get the 64-bit sqlite3.dll [here](https://www.sqlite.org/download.html), and just drop it in your base install directory.
You can also just grab the 'sqlite3.dll' I bundle in my extractable Windows release.
On Windows, get the 64-bit sqlite3.dll [here](https://www.sqlite.org/download.html), and just drop it in your base install directory. You can also just grab the 'sqlite3.dll' I bundle in my extractable Windows release.
You _may_ be able to update your SQLite on Linux or macOS with:
@ -279,27 +297,27 @@ You _may_ be able to update your SQLite on Linux or macOS with:
* (activate your venv)
* `python -m pip install pysqlite3`
But it isn't a big deal.
But as long as the program launches, it usually isn't a big deal.
!!! warning "Extremely safe no way it can go wrong"
If you want to update sqlite for your system python install, you can also drop it into `C:\Python38\DLLs` or wherever you have python installed. You'll be overwriting the old file, so make a backup if you want to (I have never had trouble updating like this, however).
A user who made a Windows venv with Anaconda reported they had to replace the sqlite3.dll in their conda env at `~/.conda/envs/<envname>/Library/bin/sqlite3.dll`.
## FFMPEG { id="ffmpeg" }
### FFMPEG { id="ffmpeg" }
If you don't have FFMPEG in your PATH and you want to import anything more fun than jpegs, you will need to put a static [FFMPEG](https://ffmpeg.org/) executable in your PATH or the `install_dir/bin` directory. [This](https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full.7z) should always point to a new build.
If you don't have FFMPEG in your PATH and you want to import anything more fun than jpegs, you will need to put a static [FFMPEG](https://ffmpeg.org/) executable in your PATH or the `install_dir/bin` directory. [This](https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full.7z) should always point to a new build for Windows. Alternately, you can just copy the exe from one of my extractable Windows releases.
Alternately, you can just copy the exe from one of my extractable Windows releases.
### Running It { id="running_it" }
## running it { id="running_it" }
Once you have everything set up, client.py and server.py should look for and run off client.db and server.db just like the executables. You are looking at entering something like this into the terminal:
Once you have everything set up, client.py and server.py should look for and run off client.db and server.db just like the executables. You can use the 'client.bat/sh/command' scripts in the install dir or use them as inspiration for your own. In any case, you are looking at entering something like this into the terminal:
```
source venv/bin/activate
python client.py
```
This will look in the 'db' directory by default, but you can use the [launch arguments](launch_arguments.md) just like for the executables. For example, this could be your client-user.sh file:
This will use the 'db' directory for your database by default, but you can use the [launch arguments](launch_arguments.md) just like for the executables. For example, this could be your client-user.sh file:
```
#!/bin/bash
@ -308,13 +326,11 @@ source venv/bin/activate
python client.py -d="/path/to/database"
```
I develop hydrus on and am most experienced with Windows, so the program is more stable and reasonable on that. I do not have as much experience with Linux or macOS, but I still appreciate and will work on your Linux/macOS bug reports.
### Building these Docs
## Building the docs
When running from source you may want to [build the hydrus help docs](about_docs.md) yourself. You can also check the `setup_help` scripts in the install directory.
When running from source you may want to [build the hydrus help docs](about_docs.md) yourself.
## building packages on windows { id="windows_build" }
### Building Packages on Windows { id="windows_build" }
Almost everything you get through pip is provided as pre-compiled 'wheels' these days, but if you get an error about Visual Studio C++ when you try to pip something, you have two choices:
@ -335,13 +351,15 @@ Trust me, just do this, it will save a ton of headaches!
_Update:_ On Windows 11, in 2023-01, I had trouble with the above. There's a couple '11' SDKs that installed ok, but the vcbuildtools stuff had unusual errors. I hadn't done this in years, so maybe they are broken for Windows 10 too! The good news is that a basic stock Win 11 install with Python 3.10 is fine getting everything on our requirements and even making a build without any extra compiler tech.
## additional windows info { id="additional_windows" }
### Additional Windows Info { id="additional_windows" }
This does not matter much any more, but in the old days, building modules like lz4 and lxml was a complete nightmare, and hooking up Visual Studio was even more difficult. [This page](http://www.lfd.uci.edu/~gohlke/pythonlibs/) has a lot of prebuilt binaries--I have found it very helpful many times.
I have a fair bit of experience with Windows python, so send me a mail if you need help.
## my code { id="my_code" }
## My Code { id="my_code" }
I develop hydrus on and am most experienced with Windows, so the program is more stable and reasonable on that. I do not have as much experience with Linux or macOS, but I still appreciate and will work on your Linux/macOS bug reports.
My coding style is unusual and unprofessional. Everything is pretty much hacked together. If you are interested in how things work, please do look through the source and ask me if you don't understand something.

View File

@ -42,15 +42,17 @@ class DataCache( object ):
if key not in self._keys_to_data:
return
( data, size_estimate ) = self._keys_to_data[ key ]
del self._keys_to_data[ key ]
self._RecalcMemoryUsage()
self._total_estimated_memory_footprint -= size_estimate
if HG.cache_report_mode:
HydrusData.ShowText( 'Cache "{}" removing "{}". Current size {}.'.format( self._name, key, HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size ) ) )
HydrusData.ShowText( 'Cache "{}" removing "{}", size "{}". Current size {}.'.format( self._name, key, HydrusData.ToHumanBytes( size_estimate ), HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size ) ) )
@ -61,9 +63,27 @@ class DataCache( object ):
self._Delete( deletee_key )
def _RecalcMemoryUsage( self ):
def _GetData( self, key ):
self._total_estimated_memory_footprint = sum( ( data.GetEstimatedMemoryFootprint() for data in self._keys_to_data.values() ) )
if key not in self._keys_to_data:
raise Exception( 'Cache error! Looking for {}, but it was missing.'.format( key ) )
self._TouchKey( key )
( data, size_estimate ) = self._keys_to_data[ key ]
new_estimate = data.GetEstimatedMemoryFootprint()
if new_estimate != size_estimate:
self._total_estimated_memory_footprint += new_estimate - size_estimate
self._keys_to_data[ key ] = ( data, new_estimate )
return data
def _TouchKey( self, key ):
@ -99,19 +119,21 @@ class DataCache( object ):
self._DeleteItem()
self._keys_to_data[ key ] = data
size_estimate = data.GetEstimatedMemoryFootprint()
self._keys_to_data[ key ] = ( data, size_estimate )
self._total_estimated_memory_footprint += size_estimate
self._TouchKey( key )
self._RecalcMemoryUsage()
if HG.cache_report_mode:
HydrusData.ShowText(
'Cache "{}" adding "{}" ({}). Current size {}.'.format(
self._name,
key,
HydrusData.ToHumanBytes( data.GetEstimatedMemoryFootprint() ),
HydrusData.ToHumanBytes( size_estimate ),
HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size )
)
)
@ -132,14 +154,7 @@ class DataCache( object ):
with self._lock:
if key not in self._keys_to_data:
raise Exception( 'Cache error! Looking for {}, but it was missing.'.format( key ) )
self._TouchKey( key )
return self._keys_to_data[ key ]
return self._GetData( key )
@ -149,9 +164,7 @@ class DataCache( object ):
if key in self._keys_to_data:
self._TouchKey( key )
return self._keys_to_data[ key ]
return self._GetData( key )
else:
@ -180,6 +193,11 @@ class DataCache( object ):
with self._lock:
while self._total_estimated_memory_footprint > self._cache_size:
self._DeleteItem()
while True:
if len( self._keys_fifo ) == 0:

View File

@ -303,6 +303,11 @@ page_file_count_display_string_lookup = {
PAGE_FILE_COUNT_DISPLAY_NONE : 'for no pages'
}
PAGE_STATE_NORMAL = 0
PAGE_STATE_INITIALISING = 1
PAGE_STATE_SEARCHING = 2
PAGE_STATE_SEARCHING_CANCELLED = 3
SHUTDOWN_TIMESTAMP_VACUUM = 0
SHUTDOWN_TIMESTAMP_FATTEN_AC_CACHE = 1
SHUTDOWN_TIMESTAMP_DELETE_ORPHANS = 2


@ -51,8 +51,8 @@ regen_file_enum_to_str_lookup = {
REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL : 'regenerate thumbnail if incorrect size',
REGENERATE_FILE_DATA_JOB_OTHER_HASHES : 'regenerate non-standard hashes',
REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES : 'delete duplicate neighbours with incorrect file extension',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : 'if file is missing, remove record (no delete record)',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_DELETE_RECORD : 'if file is missing, remove record (leave delete record)',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD : 'if file is missing, remove record (leave no delete record)',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_DELETE_RECORD : 'if file is missing, remove record (leave a delete record)',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL : 'if file is missing, then if has URL try to redownload',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD : 'if file is missing, then if has URL try to redownload, else remove record',
REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY : 'if file is missing, note it in log',
@ -185,6 +185,8 @@ def GetAllFilePaths( raw_paths, do_human_sort = True, clear_out_sidecars = True
file_paths = []
num_sidecars = 0
paths_to_process = list( raw_paths )
while len( paths_to_process ) > 0:
@ -230,24 +232,49 @@ def GetAllFilePaths( raw_paths, do_human_sort = True, clear_out_sidecars = True
HydrusData.HumanTextSort( file_paths )
num_files_with_sidecars = len( file_paths )
if clear_out_sidecars:
exts = [ '.txt', '.json', '.xml' ]
def not_a_sidecar( p ):
def has_sidecar_ext( p ):
if True in ( p.endswith( ext ) for ext in exts ):
return False
return True
return True
return False
file_paths = [ path for path in file_paths if not_a_sidecar( path ) ]
def get_base_prefix_component( p ):
base_prefix = os.path.basename( p )
if '.' in base_prefix:
base_prefix = base_prefix.split( '.', 1 )[0]
return base_prefix
# let's get all the 'Image123' in our 'path/to/Image123.jpg' list
all_non_ext_prefix_components = { get_base_prefix_component( file_path ) for file_path in file_paths if not has_sidecar_ext( file_path ) }
def looks_like_a_sidecar( p ):
# if we have Image123.txt, that's probably a sidecar!
return has_sidecar_ext( p ) and get_base_prefix_component( p ) in all_non_ext_prefix_components
file_paths = [ path for path in file_paths if not looks_like_a_sidecar( path ) ]
return file_paths
num_sidecars = num_files_with_sidecars - len( file_paths )
return ( file_paths, num_sidecars )
class ClientFilesManager( object ):
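The new sidecar filter in GetAllFilePaths treats a path as a sidecar when it has a txt/json/xml extension and its pre-extension prefix matches a non-sidecar file in the same batch. The same rule as a standalone sketch, with helper names invented for illustration:

```python
import os

SIDECAR_EXTS = ( '.txt', '.json', '.xml' )

def filter_out_sidecars( file_paths ):
    
    def has_sidecar_ext( p ):
        
        return p.endswith( SIDECAR_EXTS )
        
    
    def get_base_prefix( p ):
        
        base = os.path.basename( p )
        
        return base.split( '.', 1 )[0] if '.' in base else base
        
    
    # all the 'Image123' prefixes belonging to actual (non-sidecar) files
    master_prefixes = { get_base_prefix( p ) for p in file_paths if not has_sidecar_ext( p ) }
    
    def looks_like_a_sidecar( p ):
        
        # Image123.txt sitting next to Image123.jpg is almost certainly a sidecar
        return has_sidecar_ext( p ) and get_base_prefix( p ) in master_prefixes
        
    
    kept = [ p for p in file_paths if not looks_like_a_sidecar( p ) ]
    
    return ( kept, len( file_paths ) - len( kept ) )


print( filter_out_sidecars( [ 'dir/Image123.jpg', 'dir/Image123.jpg.txt', 'dir/notes.txt' ] ) )
# ( ['dir/Image123.jpg', 'dir/notes.txt'], 1 )
```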
@ -259,7 +286,7 @@ class ClientFilesManager( object ):
self._prefixes_to_locations = {}
self._new_physical_file_deletes = threading.Event()
self._physical_file_delete_wait = threading.Event()
self._locations_to_free_space = {}
@ -1172,11 +1199,11 @@ class ClientFilesManager( object ):
def DoDeferredPhysicalDeletes( self ):
wait_period = self._controller.new_options.GetInteger( 'ms_to_wait_between_physical_file_deletes' ) / 1000
num_files_deleted = 0
num_thumbnails_deleted = 0
pauser = HydrusData.BigJobPauser()
while not HG.started_shutdown:
with self._rwlock.write:
@ -1190,9 +1217,18 @@ class ClientFilesManager( object ):
if file_hash is not None:
media_result = self._controller.Read( 'media_result', file_hash )
expected_mime = media_result.GetMime()
try:
( path, mime ) = self._LookForFilePath( file_hash )
path = self._GenerateExpectedFilePath( file_hash, expected_mime )
if not os.path.exists( path ):
( path, actual_mime ) = self._LookForFilePath( file_hash )
ClientPaths.DeletePath( path )
@ -1200,7 +1236,7 @@ class ClientFilesManager( object ):
except HydrusExceptions.FileMissingException:
pass
HydrusData.Print( 'Wanted to physically delete the "{}" file, with expected mime "{}", but it was not found!'.format( file_hash.hex(), expected_mime ) )
@ -1214,6 +1250,10 @@ class ClientFilesManager( object ):
num_thumbnails_deleted += 1
else:
HydrusData.Print( 'Wanted to physically delete the "{}" thumbnail, but it was not found!'.format( file_hash.hex() ) )
self._controller.WriteSynchronous( 'clear_deferred_physical_delete', file_hash = file_hash, thumbnail_hash = thumbnail_hash )
@ -1224,7 +1264,9 @@ class ClientFilesManager( object ):
pauser.Pause()
self._physical_file_delete_wait.wait( wait_period )
self._physical_file_delete_wait.clear()
if num_files_deleted > 0 or num_thumbnails_deleted > 0:
@ -1335,11 +1377,6 @@ class ClientFilesManager( object ):
return os.path.exists( path )
def NotifyNewPhysicalFileDeletes( self ):
self._new_physical_file_deletes.set()
def Rebalance( self, job_key ):
try:
@ -1502,7 +1539,7 @@ class ClientFilesManager( object ):
def shutdown( self ):
self._new_physical_file_deletes.set()
self._physical_file_delete_wait.set()
class FilesMaintenanceManager( object ):
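DoDeferredPhysicalDeletes now reads the new 'ms_to_wait_between_physical_file_deletes' option and waits on an event between individual deletes, so a shutdown or a new delete notification can wake it early. A rough standalone sketch of that loop shape, with the callbacks and event names invented:

```python
import threading

shutdown_event = threading.Event()
wake_event = threading.Event()

def deferred_delete_loop( get_next_path, delete_path, ms_between_deletes = 250 ):
    
    # one physical delete per pass, then a cancellable wait, like DoDeferredPhysicalDeletes
    while not shutdown_event.is_set():
        
        path = get_next_path()
        
        if path is not None:
            
            delete_path( path )
            
        
        wake_event.wait( ms_between_deletes / 1000 )
        wake_event.clear()
```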


@ -482,6 +482,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
self._dictionary[ 'integers' ][ 'human_bytes_sig_figs' ] = 3
self._dictionary[ 'integers' ][ 'ms_to_wait_between_physical_file_deletes' ] = 250
#
self._dictionary[ 'keys' ] = {}


@ -1576,7 +1576,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
if predicate_type == PREDICATE_TYPE_SYSTEM_MIME and value is not None:
value = tuple( ConvertSpecificFiletypesToSummary( value ) )
value = tuple( sorted( ConvertSpecificFiletypesToSummary( value ) ) )
if predicate_type == PREDICATE_TYPE_OR_CONTAINER:
@ -1771,6 +1771,11 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
self._value = serialisable_value
if self._predicate_type == PREDICATE_TYPE_SYSTEM_MIME and self._value is not None:
self._value = tuple( sorted( ConvertSpecificFiletypesToSummary( self._value ) ) )
if isinstance( self._value, list ):
@ -1863,7 +1868,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
summary_mimes = ConvertSpecificFiletypesToSummary( specific_mimes )
serialisable_value = tuple( summary_mimes )
serialisable_value = tuple( sorted( summary_mimes ) )
new_serialisable_info = ( predicate_type, serialisable_value, inclusive )
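The predicate change above simply sorts the summarised filetype tuple before storing and serialising it, so logically identical system:filetype predicates compare and dedupe the same way regardless of the order the filetypes arrived in. The effect in miniature, with made-up summary ids:

```python
summary_a = ( 3, 1, 2 )  # made-up summary filetype ids
summary_b = ( 1, 2, 3 )

print( summary_a == summary_b )                                        # False: raw tuple order matters
print( tuple( sorted( summary_a ) ) == tuple( sorted( summary_b ) ) )  # True: the sorted form is canonical
```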
@ -3100,7 +3105,7 @@ class ParsedAutocompleteText( object ):
return 'AC Tag Text: {}'.format( self.raw_input )
def _GetSearchText( self, always_autocompleting: bool, force_do_not_collapse: bool = False, allow_unnamespaced_search_gives_any_namespace_wildcards: bool = True ) -> str:
def _GetSearchText( self, always_autocompleting: bool, force_do_not_collapse: bool = False, allow_auto_wildcard_conversion: bool = False ) -> str:
text = CollapseWildcardCharacters( self.raw_content )
@ -3114,7 +3119,7 @@ class ParsedAutocompleteText( object ):
text = ConvertTagToSearchable( text )
if allow_unnamespaced_search_gives_any_namespace_wildcards and self._tag_autocomplete_options.UnnamespacedSearchGivesAnyNamespaceWildcards():
if allow_auto_wildcard_conversion and self._tag_autocomplete_options.UnnamespacedSearchGivesAnyNamespaceWildcards():
if ':' not in text:
@ -3149,12 +3154,12 @@ class ParsedAutocompleteText( object ):
def GetAddTagPredicate( self ):
return Predicate( PREDICATE_TYPE_TAG, self.raw_content )
return Predicate( PREDICATE_TYPE_TAG, self.raw_content, self.inclusive )
def GetImmediateFileSearchPredicate( self ):
def GetImmediateFileSearchPredicate( self, allow_auto_wildcard_conversion ):
non_tag_predicates = self.GetNonTagFileSearchPredicates()
non_tag_predicates = self.GetNonTagFileSearchPredicates( allow_auto_wildcard_conversion )
if len( non_tag_predicates ) > 0:
@ -3166,7 +3171,7 @@ class ParsedAutocompleteText( object ):
return tag_search_predicate
def GetNonTagFileSearchPredicates( self ):
def GetNonTagFileSearchPredicates( self, allow_auto_wildcard_conversion ):
predicates = []
@ -3182,7 +3187,7 @@ class ParsedAutocompleteText( object ):
predicates.append( predicate )
elif self.IsExplicitWildcard():
elif self.IsExplicitWildcard( allow_auto_wildcard_conversion ):
search_texts = []
@ -3199,7 +3204,7 @@ class ParsedAutocompleteText( object ):
for always_autocompleting in always_autocompleting_values:
search_texts.append( self._GetSearchText( always_autocompleting, allow_unnamespaced_search_gives_any_namespace_wildcards = allow_unnamespaced_search_gives_any_namespace_wildcards, force_do_not_collapse = True ) )
search_texts.append( self._GetSearchText( always_autocompleting, allow_auto_wildcard_conversion = allow_unnamespaced_search_gives_any_namespace_wildcards, force_do_not_collapse = True ) )
@ -3220,9 +3225,9 @@ class ParsedAutocompleteText( object ):
return predicates
def GetSearchText( self, always_autocompleting: bool ):
def GetSearchText( self, always_autocompleting: bool, allow_auto_wildcard_conversion = True ):
return self._GetSearchText( always_autocompleting )
return self._GetSearchText( always_autocompleting, allow_auto_wildcard_conversion = allow_auto_wildcard_conversion )
def GetTagAutocompleteOptions( self ):
@ -3232,7 +3237,7 @@ class ParsedAutocompleteText( object ):
def IsAcceptableForTagSearches( self ):
search_text = self._GetSearchText( False )
search_text = self._GetSearchText( False, allow_auto_wildcard_conversion = True )
if search_text == '':
@ -3267,7 +3272,7 @@ class ParsedAutocompleteText( object ):
def IsAcceptableForFileSearches( self ):
search_text = self._GetSearchText( False )
search_text = self._GetSearchText( False, allow_auto_wildcard_conversion = True )
if len( search_text ) == 0:
@ -3287,10 +3292,10 @@ class ParsedAutocompleteText( object ):
return self.raw_input == ''
def IsExplicitWildcard( self ):
def IsExplicitWildcard( self, allow_auto_wildcard_conversion ):
# user has intentionally put a '*' in
return '*' in self.raw_content or self._GetSearchText( False ).startswith( '*:' )
return '*' in self.raw_content or self._GetSearchText( False, allow_auto_wildcard_conversion = allow_auto_wildcard_conversion ).startswith( '*:' )
def IsNamespaceSearch( self ):
@ -3300,9 +3305,9 @@ class ParsedAutocompleteText( object ):
return SearchTextIsNamespaceFetchAll( search_text ) or SearchTextIsNamespaceBareFetchAll( search_text )
def IsTagSearch( self ):
def IsTagSearch( self, allow_auto_wildcard_conversion ):
if self.IsEmpty() or self.IsExplicitWildcard() or self.IsNamespaceSearch():
if self.IsEmpty() or self.IsExplicitWildcard( allow_auto_wildcard_conversion ) or self.IsNamespaceSearch():
return False


@ -5158,6 +5158,246 @@ class DB( HydrusDB.HydrusDB ):
return predicates
def _GetRelatedTagsNewOneTag( self, tag_display_type, file_service_id, tag_service_id, search_tag_id ):
# a user provided the basic idea here
# we are saying get me all the tags for all the hashes this tag has
# specifying namespace is critical to increase search speed, otherwise we actually are searching all tags for tags
# we also call this with single specific file domains to keep things fast
# also this thing searches in fixed file domain to get fast
# this table selection is hacky as anything, but simpler than GetMappingAndTagTables for now
mappings_table_names = []
if file_service_id == self.modules_services.combined_file_service_id:
( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id )
mappings_table_names.extend( [ current_mappings_table_name, pending_mappings_table_name ] )
else:
if tag_display_type == ClientTags.TAG_DISPLAY_ACTUAL:
( cache_current_display_mappings_table_name, cache_pending_display_mappings_table_name ) = ClientDBMappingsStorage.GenerateSpecificDisplayMappingsCacheTableNames( file_service_id, tag_service_id )
mappings_table_names.extend( [ cache_current_display_mappings_table_name, cache_pending_display_mappings_table_name ] )
else:
statuses_to_table_names = self.modules_mappings_storage.GetFastestStorageMappingTableNames( file_service_id, tag_service_id )
mappings_table_names.extend( [ statuses_to_table_names[ HC.CONTENT_STATUS_CURRENT ], statuses_to_table_names[ HC.CONTENT_STATUS_PENDING ] ] )
results = collections.Counter()
tags_table_name = self.modules_tag_search.GetTagsTableName( file_service_id, tag_service_id )
# while this searches pending and current tags, it does not cross-reference current and pending on the same file, oh well!
for mappings_table_name in mappings_table_names:
search_predicate = 'hash_id IN ( SELECT hash_id FROM {} WHERE tag_id = {} )'.format( mappings_table_name, search_tag_id )
query = 'SELECT tag_id, COUNT( * ) FROM {} CROSS JOIN {} USING ( tag_id ) WHERE {} GROUP BY subtag_id;'.format( mappings_table_name, tags_table_name, search_predicate )
results.update( dict( self._Execute( query ).fetchall() ) )
return results
def _GetRelatedTagsNew( self, file_service_key, tag_service_key, search_tags, max_results = 100, concurrence_threshold = 0.04, total_search_tag_count_threshold = 500000 ):
# a user provided the basic idea here
if len( search_tags ) == 0:
return [ ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, value = 'no search tags to work with!' ) ]
tag_display_type = ClientTags.TAG_DISPLAY_ACTUAL
tag_service_id = self.modules_services.GetServiceId( tag_service_key )
file_service_id = self.modules_services.GetServiceId( file_service_key )
if tag_display_type == ClientTags.TAG_DISPLAY_ACTUAL:
search_tags = self.modules_tag_siblings.GetIdeals( tag_display_type, tag_service_key, search_tags )
# I had a thing here that added the parents, but it gave some whack results compared to what you expected
search_tag_ids_to_search_tags = self.modules_tags_local_cache.GetTagIdsToTags( tags = search_tags )
with self._MakeTemporaryIntegerTable( search_tag_ids_to_search_tags.keys(), 'tag_id' ) as temp_tag_id_table_name:
search_tag_ids_to_total_counts = collections.Counter( { tag_id : current_count + pending_count for ( tag_id, current_count, pending_count ) in self.modules_mappings_counts.GetCountsForTags( tag_display_type, file_service_id, tag_service_id, temp_tag_id_table_name ) } )
#
search_tags = set()
for ( search_tag_id, count ) in search_tag_ids_to_total_counts.items():
# pending only
if count == 0:
continue
search_tags.add( search_tag_ids_to_search_tags[ search_tag_id ] )
if len( search_tags ) == 0:
return [ ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, value = 'not enough data in search domain' ) ]
#
search_tag_ids_flat_sorted_ascending = sorted( search_tag_ids_to_total_counts.items(), key = lambda row: row[1] )
search_tags = set()
total_count = 0
# TODO: I think I would rather rework this into a time threshold thing like the old related tags stuff.
# so, should ditch the culling and instead make all the requests cancellable. just keep searching until we are out of time, then put results together
# TODO: Another option as I vacillate on 'all my files' vs 'all known files' would be to incorporate that into the search timer
# do all my files first, then replace that with all known files results until we run out of time (only do this for repositories)
# we don't really want to use '1girl' and friends as search tags here, since the search domain is so huge
# so, we go for the smallest count tags first. they have interesting suggestions
# searching all known files is gonkmode, so we curtail our max search size
if file_service_key == CC.COMBINED_FILE_SERVICE_KEY:
total_search_tag_count_threshold /= 25
for ( search_tag_id, count ) in search_tag_ids_flat_sorted_ascending:
# we don't want the total domain to be too large either. death by a thousand cuts
if total_count + count > total_search_tag_count_threshold:
break
total_count += count
search_tags.add( search_tag_ids_to_search_tags[ search_tag_id ] )
if len( search_tags ) == 0:
return [ ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, value = 'search domain too big' ) ]
#
search_tag_ids_to_tag_ids_to_matching_counts = {}
for search_tag in search_tags:
search_tag_id = self.modules_tags_local_cache.GetTagId( search_tag )
tag_ids_to_matching_counts = self._GetRelatedTagsNewOneTag( tag_display_type, file_service_id, tag_service_id, search_tag_id )
if search_tag_id in tag_ids_to_matching_counts:
del tag_ids_to_matching_counts[ search_tag_id ] # duh, don't recommend your 100% matching self
search_tag_ids_to_tag_ids_to_matching_counts[ search_tag_id ] = tag_ids_to_matching_counts
#
# ok we have a bunch of counts here for different search tags, so let's figure out some normalised scores and merge them all
#
# the master score is: number matching mappings found / square_root( suggestion_tag_count * search_tag_count )
#
# I don't really know what this *is*, but the user did it and it seems to make nice scores, so hooray
# the dude said it was arbitrary and could do with tuning, so we'll see how it goes
all_tag_ids = set()
for tag_ids_to_matching_counts in search_tag_ids_to_tag_ids_to_matching_counts.values():
all_tag_ids.update( tag_ids_to_matching_counts.keys() )
all_tag_ids.difference_update( search_tag_ids_to_search_tags.keys() )
with self._MakeTemporaryIntegerTable( all_tag_ids, 'tag_id' ) as temp_tag_id_table_name:
tag_ids_to_total_counts = { tag_id : current_count + pending_count for ( tag_id, current_count, pending_count ) in self.modules_mappings_counts.GetCountsForTags( tag_display_type, file_service_id, tag_service_id, temp_tag_id_table_name ) }
tag_ids_to_total_counts.update( search_tag_ids_to_total_counts )
tag_ids_to_scores = collections.Counter()
for ( search_tag_id, tag_ids_to_matching_counts ) in search_tag_ids_to_tag_ids_to_matching_counts.items():
if search_tag_id not in tag_ids_to_total_counts:
continue
search_tag_count = tag_ids_to_total_counts[ search_tag_id ]
search_tag_is_unnamespaced = HydrusTags.IsUnnamespaced( search_tag_ids_to_search_tags[ search_tag_id ] )
for ( tag_id, matching_count ) in tag_ids_to_matching_counts.items():
if matching_count / search_tag_count < concurrence_threshold:
continue
if tag_id not in tag_ids_to_total_counts:
continue
suggestion_tag_count = tag_ids_to_total_counts[ tag_id ]
score = matching_count / ( ( suggestion_tag_count * search_tag_count ) ** 0.5 )
# sophisticated hydev score-tuning
if search_tag_is_unnamespaced:
score /= 3
tag_ids_to_scores[ tag_id ] += score
results_flat_sorted_descending = sorted( tag_ids_to_scores.items(), key = lambda row: row[1], reverse = True )
tag_ids_to_scores = dict( results_flat_sorted_descending[ : max_results ] )
#
inclusive = True
pending_count = 0
tag_ids_to_full_counts = { tag_id : ( int( score * 1000 ), None, pending_count, None ) for ( tag_id, score ) in tag_ids_to_scores.items() }
predicates = self.modules_tag_display.GeneratePredicatesFromTagIdsAndCounts( tag_display_type, tag_service_id, tag_ids_to_full_counts, inclusive )
return predicates
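The score described in the comments above is the matching mapping count divided by the square root of the product of the suggestion tag's count and the search tag's count, gated by a concurrence threshold and dampened for unnamespaced search tags. A toy calculation of that formula, with all numbers and tags invented:

```python
import math

def related_score( matching_count, search_tag_count, suggestion_tag_count, search_tag_is_unnamespaced, concurrence_threshold = 0.04 ):
    
    # suggestions that co-occur on too small a fraction of the search tag's files are skipped
    if matching_count / search_tag_count < concurrence_threshold:
        
        return 0.0
        
    
    score = matching_count / math.sqrt( suggestion_tag_count * search_tag_count )
    
    # unnamespaced search tags tend to be broad, so their votes are dampened
    if search_tag_is_unnamespaced:
        
        score /= 3
        
    
    return score


# say 'character:samus aran' is on 400 files, 'metroid' on 900, and they share 300 files
print( related_score( 300, 400, 900, False ) )  # 0.5
```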
def _GetRepositoryThumbnailHashesIDoNotHave( self, service_key ):
service_id = self.modules_services.GetServiceId( service_key )
@ -7379,6 +7619,7 @@ class DB( HydrusDB.HydrusDB ):
elif action == 'services': result = self.modules_services.GetServices( *args, **kwargs )
elif action == 'similar_files_maintenance_status': result = self.modules_similar_files.GetMaintenanceStatus( *args, **kwargs )
elif action == 'related_tags': result = self._GetRelatedTags( *args, **kwargs )
elif action == 'related_tags_new': result = self._GetRelatedTagsNew( *args, **kwargs )
elif action == 'tag_display_application': result = self.modules_tag_display.GetApplication( *args, **kwargs )
elif action == 'tag_display_maintenance_status': result = self._CacheTagDisplayGetApplicationStatusNumbers( *args, **kwargs )
elif action == 'tag_parents': result = self.modules_tag_parents.GetTagParents( *args, **kwargs )
@ -10762,6 +11003,116 @@ class DB( HydrusDB.HydrusDB ):
if version == 513:
try:
self._controller.frame_splash_status.SetSubtext( 'cleaning some surplus records' )
# clear deletion record wasn't purging on 'all my files'
all_my_deleted_hash_ids = set( self.modules_files_storage.GetDeletedHashIdsList( self.modules_services.combined_local_media_service_id ) )
all_local_current_hash_ids = self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.combined_local_file_service_id )
all_local_deleted_hash_ids = self.modules_files_storage.GetDeletedHashIdsList( self.modules_services.combined_local_file_service_id )
erroneous_hash_ids = all_my_deleted_hash_ids.difference( all_local_current_hash_ids ).difference( all_local_deleted_hash_ids )
if len( erroneous_hash_ids ) > 0:
service_ids_to_nums_cleared = self.modules_files_storage.ClearLocalDeleteRecord( erroneous_hash_ids )
self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ( -num_cleared, clear_service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) for ( clear_service_id, num_cleared ) in service_ids_to_nums_cleared.items() ) )
except:
HydrusData.PrintException( e )
message = 'Trying to clean up some bad delete records failed! Please let hydrus dev know!'
self.pub_initial_message( message )
try:
def ask_what_to_do_twitter_stuff():
message = 'Twitter removed their old API that we were using, breaking all the old downloaders! I am going to delete your old twitter downloaders and add a new limited one that can only get the first ~20 tweets of a profile. Make sure to check your subscriptions are linked to it, and you might want to speed up their check times! OK?'
from hydrus.client.gui import ClientGUIDialogsQuick
result = ClientGUIDialogsQuick.GetYesNo( None, message, title = 'Swap to new twitter downloader?', yes_label = 'do it', no_label = 'do not do it, I need to keep the old broken stuff' )
return result == QW.QDialog.Accepted
do_twitter_stuff = self._controller.CallBlockingToQt( None, ask_what_to_do_twitter_stuff )
domain_manager = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
domain_manager.Initialise()
#
domain_manager.OverwriteDefaultParsers( [
'danbooru file page parser - get webm ugoira',
'deviant art file page parser',
'pixiv file page api parser'
] )
if do_twitter_stuff:
domain_manager.DeleteGUGs( [
'twitter collection lookup',
'twitter likes lookup',
'twitter list lookup'
] )
url_classes = domain_manager.GetURLClasses()
deletee_url_class_names = [ url_class.GetName() for url_class in url_classes if url_class.GetName() == 'twitter list' or url_class.GetName().startswith( 'twitter syndication api' ) ]
domain_manager.DeleteURLClasses( deletee_url_class_names )
# we're going to leave the one spare non-overwritten parser in place
#
domain_manager.OverwriteDefaultGUGs( [
'twitter profile lookup',
'twitter profile lookup (with replies)'
] )
domain_manager.OverwriteDefaultURLClasses( [
'twitter tweet (i/web/status)',
'twitter tweet',
'twitter syndication api tweet-result',
'twitter syndication api timeline-profile'
] )
domain_manager.OverwriteDefaultParsers( [
'twitter syndication api timeline-profile parser',
'twitter syndication api tweet parser'
] )
domain_manager.TryToLinkURLClassesAndParsers()
#
self.modules_serialisable.SetJSONDump( domain_manager )
except Exception as e:
HydrusData.PrintException( e )
message = 'Trying to update some downloader objects failed! Please let hydrus dev know!'
self.pub_initial_message( message )
self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )


@ -373,7 +373,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
service_ids_to_nums_cleared = {}
local_non_trash_service_ids = self.modules_services.GetServiceIds( ( HC.COMBINED_LOCAL_FILE, HC.LOCAL_FILE_DOMAIN ) )
local_non_trash_service_ids = self.modules_services.GetServiceIds( ( HC.COMBINED_LOCAL_FILE, HC.COMBINED_LOCAL_MEDIA, HC.LOCAL_FILE_DOMAIN ) )
if hash_ids is None:


@ -447,17 +447,14 @@ class EditGallerySeedLogPanel( ClientGUIScrolledPanels.EditPanel ):
def _TrySelectedAgain( self, can_generate_more_pages ):
new_gallery_seeds = []
gallery_seeds = self._list_ctrl.GetData( only_selected = True )
for gallery_seed in gallery_seeds:
new_gallery_seeds.append( gallery_seed.GenerateRestartedDuplicate( can_generate_more_pages ) )
restarted_gallery_seed = gallery_seed.GenerateRestartedDuplicate( can_generate_more_pages )
self._gallery_seed_log.AddGallerySeeds( ( restarted_gallery_seed, ), parent_gallery_seed = gallery_seed )
self._gallery_seed_log.AddGallerySeeds( new_gallery_seeds )
self._gallery_seed_log.NotifyGallerySeedsUpdated( new_gallery_seeds )
def _UpdateListCtrl( self, gallery_seeds ):
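'Try again' on selected gallery log entries now adds each restarted duplicate with its original passed as parent_gallery_seed, so, presumably, the retry lands next to the entry it came from rather than being appended at the end of the log. The same insert-next-to-parent idea with a plain list, names illustrative:

```python
def add_after_parent( seed_list, new_seed, parent_seed = None ):
    
    if parent_seed is not None and parent_seed in seed_list:
        
        seed_list.insert( seed_list.index( parent_seed ) + 1, new_seed )
        
    else:
        
        seed_list.append( new_seed )
        

gallery_log = [ 'page 1', 'page 2', 'page 3' ]

add_after_parent( gallery_log, 'page 2 (retry)', parent_seed = 'page 2' )

print( gallery_log )  # ['page 1', 'page 2', 'page 2 (retry)', 'page 3']
```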


@ -942,6 +942,10 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._delete_to_recycle_bin = QW.QCheckBox( self )
self._ms_to_wait_between_physical_file_deletes = ClientGUICommon.BetterSpinBox( self, min=20, max = 5000 )
tt = 'Deleting a file from a hard disk can be resource expensive, so when files leave the trash, the actual physical file delete happens later, in the background. The operation is spread out so as not to give you lag spikes.'
self._ms_to_wait_between_physical_file_deletes.setToolTip( tt )
self._confirm_trash = QW.QCheckBox( self )
self._confirm_archive = QW.QCheckBox( self )
@ -987,6 +991,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._delete_to_recycle_bin.setChecked( HC.options[ 'delete_to_recycle_bin' ] )
self._ms_to_wait_between_physical_file_deletes.setValue( self._new_options.GetInteger( 'ms_to_wait_between_physical_file_deletes' ) )
self._confirm_trash.setChecked( HC.options[ 'confirm_trash' ] )
self._confirm_archive.setChecked( HC.options[ 'confirm_archive' ] )
@ -1028,7 +1034,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows.append( ( 'Confirm sending more than one file to archive or inbox: ', self._confirm_archive ) )
rows.append( ( 'Confirm when copying files across local file services: ', self._confirm_multiple_local_file_services_copy ) )
rows.append( ( 'Confirm when moving files across local file services: ', self._confirm_multiple_local_file_services_move ) )
rows.append( ( 'When deleting files or folders, send them to the OS\'s recycle bin: ', self._delete_to_recycle_bin ) )
rows.append( ( 'When physically deleting files or folders, send them to the OS\'s recycle bin: ', self._delete_to_recycle_bin ) )
rows.append( ( 'When maintenance physically deletes files, wait this many ms between each delete: ', self._ms_to_wait_between_physical_file_deletes ) )
rows.append( ( 'Remove files from view when they are filtered: ', self._remove_filtered_files ) )
rows.append( ( 'Remove files from view when they are sent to the trash: ', self._remove_trashed_files ) )
rows.append( ( 'Number of hours a file can be in the trash before being deleted: ', self._trash_max_age ) )
@ -1120,6 +1127,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
HC.options[ 'trash_max_age' ] = self._trash_max_age.GetValue()
HC.options[ 'trash_max_size' ] = self._trash_max_size.GetValue()
self._new_options.SetInteger( 'ms_to_wait_between_physical_file_deletes', self._ms_to_wait_between_physical_file_deletes.value() )
self._new_options.SetBoolean( 'confirm_multiple_local_file_services_copy', self._confirm_multiple_local_file_services_copy.isChecked() )
self._new_options.SetBoolean( 'confirm_multiple_local_file_services_move', self._confirm_multiple_local_file_services_move.isChecked() )
@ -3861,14 +3870,14 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows = []
rows.append( ( 'Show related tags on single-file manage tags windows: ', self._show_related_tags ) )
rows.append( ( 'Show related tags: ', self._show_related_tags ) )
rows.append( ( 'Initial search duration (ms): ', self._related_tags_search_1_duration_ms ) )
rows.append( ( 'Medium search duration (ms): ', self._related_tags_search_2_duration_ms ) )
rows.append( ( 'Thorough search duration (ms): ', self._related_tags_search_3_duration_ms ) )
gridbox = ClientGUICommon.WrapInGrid( suggested_tags_related_panel, rows )
desc = 'This will search the database for statistically related tags based on what your focused file already has.'
desc = 'This will search the database for tags statistically related to what your files already have.'
QP.AddToLayout( panel_vbox, ClientGUICommon.BetterStaticText(suggested_tags_related_panel,desc), CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( panel_vbox, gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )


@ -2450,15 +2450,6 @@ class ReviewFileHistory( ClientGUIScrolledPanels.ReviewPanel ):
vbox = QP.VBoxLayout()
label = 'Please note that delete and inbox time tracking are new so you may not have full data for them.'
st = ClientGUICommon.BetterStaticText( self, label = label )
st.setWordWrap( True )
st.setAlignment( QC.Qt.AlignCenter )
QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )
flip_deleted = QW.QCheckBox( 'show deleted', self )
flip_deleted.setChecked( True )
@ -3424,6 +3415,8 @@ class ReviewLocalFileImports( ClientGUIScrolledPanels.ReviewPanel ):
num_unimportable_mime_files = 0
num_occupied_files = 0
num_sidecars = 0
while not HG.started_shutdown:
if not self or not QP.isValid( self ):
@ -3462,7 +3455,7 @@ class ReviewLocalFileImports( ClientGUIScrolledPanels.ReviewPanel ):
message = HydrusData.ConvertValueRangeToPrettyString( num_good_files, total_paths ) + ' files parsed successfully'
if num_empty_files + num_missing_files + num_unimportable_mime_files + num_occupied_files > 0:
if num_empty_files + num_missing_files + num_unimportable_mime_files + num_occupied_files + num_sidecars > 0:
if num_good_files == 0:
@ -3475,6 +3468,11 @@ class ReviewLocalFileImports( ClientGUIScrolledPanels.ReviewPanel ):
bad_comments = []
if num_sidecars > 0:
bad_comments.append( '{} looked like txt/json/xml sidecars'.format( HydrusData.ToHumanInt( num_sidecars ) ) )
if num_empty_files > 0:
bad_comments.append( HydrusData.ToHumanInt( num_empty_files ) + ' were empty' )
@ -3544,9 +3542,10 @@ class ReviewLocalFileImports( ClientGUIScrolledPanels.ReviewPanel ):
try:
raw_paths = unparsed_paths_queue.get( block = False )
( paths, num_sidecars_this_loop ) = ClientFiles.GetAllFilePaths( raw_paths, do_human_sort = do_human_sort ) # convert any dirs to subpaths
paths = ClientFiles.GetAllFilePaths( raw_paths, do_human_sort = do_human_sort ) # convert any dirs to subpaths
num_sidecars += num_sidecars_this_loop
unparsed_paths.extend( paths )
except queue.Empty:
@ -3562,9 +3561,10 @@ class ReviewLocalFileImports( ClientGUIScrolledPanels.ReviewPanel ):
try:
raw_paths = unparsed_paths_queue.get( timeout = 5 )
( paths, num_sidecars_this_loop ) = ClientFiles.GetAllFilePaths( raw_paths, do_human_sort = do_human_sort ) # convert any dirs to subpaths
paths = ClientFiles.GetAllFilePaths( raw_paths, do_human_sort = do_human_sort ) # convert any dirs to subpaths
num_sidecars += num_sidecars_this_loop
unparsed_paths.extend( paths )
except queue.Empty:


@ -189,7 +189,7 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
limits_max = 10000
limits_max = 50000
else:


@ -1,3 +1,4 @@
import os
import typing
from qtpy import QtCore as QC
@ -35,14 +36,18 @@ def FilterSuggestedPredicatesForMedia( predicates: typing.Sequence[ ClientSearch
def FilterSuggestedTagsForMedia( tags: typing.Sequence[ str ], medias: typing.Collection[ ClientMedia.Media ], service_key: bytes ) -> typing.List[ str ]:
# TODO: figure out a nice way to filter out siblings here
# maybe have to wait for when tags always know their siblings
# then we could also filter out worse/better siblings of the same count
num_media = len( medias )
tags_filtered_set = set( tags )
( current_tags_to_count, deleted_tags_to_count, pending_tags_to_count, petitioned_tags_to_count ) = ClientMedia.GetMediasTagCount( medias, service_key, ClientTags.TAG_DISPLAY_STORAGE )
current_tags_to_count.update( pending_tags_to_count )
num_media = len( medias )
for ( tag, count ) in current_tags_to_count.items():
if count == num_media:
@ -331,17 +336,46 @@ class RelatedTagsPanel( QW.QWidget ):
self._have_fetched = False
self._selected_tags = set()
self._new_options = HG.client_controller.new_options
vbox = QP.VBoxLayout()
tt = 'If you select some tags, this will search using only those as reference!'
self._button_2 = QW.QPushButton( 'medium', self )
self._button_2.clicked.connect( self.RefreshMedium )
self._button_2.setMinimumWidth( 30 )
self._button_2.setToolTip( tt )
self._button_3 = QW.QPushButton( 'thorough', self )
self._button_3.clicked.connect( self.RefreshThorough )
self._button_3.setMinimumWidth( 30 )
self._button_3.setToolTip( tt )
self._button_new = QW.QPushButton( 'new 1', self )
self._button_new.clicked.connect( self.RefreshNew )
self._button_new.setMinimumWidth( 30 )
tt = 'Please test this! This uses the new statistical method and searches your local files\' tags. Should be pretty fast, but its search domain is limited.' + os.linesep * 2 + 'Hydev thinks this mode sucks for the PTR, so let him know if it actually works ok there.'
self._button_new.setToolTip( tt )
self._button_new_2 = QW.QPushButton( 'new 2', self )
self._button_new_2.clicked.connect( self.RefreshNew2 )
self._button_new_2.setMinimumWidth( 30 )
tt = 'Please test this! This uses the new statistical method and searches all the service\'s tags. May search slow and will not get results from large-count tags.' + os.linesep * 2 + 'Hydev wants to use this in the end, so let him know if it is too laggy.'
self._button_new_2.setToolTip( tt )
if len( self._media ) > 1:
self._button_2.setVisible( False )
self._button_3.setVisible( False )
if HG.client_controller.services_manager.GetServiceType( self._service_key ) == HC.LOCAL_TAG:
self._button_new_2.setVisible( False )
self._related_tags = ListBoxTagsSuggestionsRelated( self, service_key, activate_callable )
@ -349,6 +383,8 @@ class RelatedTagsPanel( QW.QWidget ):
QP.AddToLayout( button_hbox, self._button_2, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
QP.AddToLayout( button_hbox, self._button_3, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
QP.AddToLayout( button_hbox, self._button_new, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
QP.AddToLayout( button_hbox, self._button_new_2, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
QP.AddToLayout( vbox, button_hbox, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._related_tags, CC.FLAGS_EXPAND_BOTH_WAYS )
@ -385,17 +421,77 @@ class RelatedTagsPanel( QW.QWidget ):
self._related_tags.SetPredicates( [] )
if len( self._media ) > 1:
return
( m, ) = self._media
hash = m.GetHash()
search_tags = ClientMedia.GetMediasTags( self._media, self._service_key, ClientTags.TAG_DISPLAY_STORAGE, ( HC.CONTENT_STATUS_CURRENT, HC.CONTENT_STATUS_PENDING ) )
# TODO: If user has some tags selected, use them instead
if len( self._selected_tags ) == 0:
search_tags = ClientMedia.GetMediasTags( self._media, self._service_key, ClientTags.TAG_DISPLAY_STORAGE, ( HC.CONTENT_STATUS_CURRENT, HC.CONTENT_STATUS_PENDING ) )
else:
search_tags = self._selected_tags
max_results = 100
HG.client_controller.CallToThread( do_it, self._service_key, hash, search_tags, max_results, max_time_to_take )
def _FetchRelatedTagsNew( self, file_service_key = None ):
def do_it( file_service_key, tag_service_key, search_tags ):
def qt_code( predicates ):
if not self or not QP.isValid( self ):
return
self._last_fetched_predicates = predicates
self._UpdateTagDisplay()
self._have_fetched = True
predicates = HG.client_controller.Read( 'related_tags_new', file_service_key, tag_service_key, search_tags )
predicates = ClientSearch.SortPredicates( predicates )
QP.CallAfter( qt_code, predicates )
self._related_tags.SetPredicates( [] )
if len( self._selected_tags ) == 0:
search_tags = ClientMedia.GetMediasTags( self._media, self._service_key, ClientTags.TAG_DISPLAY_STORAGE, ( HC.CONTENT_STATUS_CURRENT, HC.CONTENT_STATUS_PENDING ) )
else:
search_tags = self._selected_tags
if file_service_key is None:
file_service_key = CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY
tag_service_key = self._service_key
HG.client_controller.CallToThread( do_it, file_service_key, tag_service_key, search_tags )
def _QuickSuggestedRelatedTags( self ):
max_time_to_take = self._new_options.GetInteger( 'related_tags_search_1_duration_ms' ) / 1000.0
@ -424,6 +520,16 @@ class RelatedTagsPanel( QW.QWidget ):
self._FetchRelatedTags( max_time_to_take )
def RefreshNew( self ):
self._FetchRelatedTagsNew()
def RefreshNew2( self ):
self._FetchRelatedTagsNew( file_service_key = CC.COMBINED_FILE_SERVICE_KEY )
def MediaUpdated( self ):
self._UpdateTagDisplay()
@ -433,7 +539,19 @@ class RelatedTagsPanel( QW.QWidget ):
self._media = media
self._QuickSuggestedRelatedTags()
if len( self._media ) == 1:
self._QuickSuggestedRelatedTags()
else:
self._related_tags.SetPredicates( [] )
def SetSelectedTags( self, tags ):
self._selected_tags = tags
def TakeFocusForUser( self ):
@ -682,7 +800,7 @@ class SuggestedTagsPanel( QW.QWidget ):
self._related_tags = None
if self._new_options.GetBoolean( 'show_related_tags' ) and len( media ) == 1:
if self._new_options.GetBoolean( 'show_related_tags' ):
self._related_tags = RelatedTagsPanel( panel_parent, service_key, media, activate_callable )
@ -796,6 +914,14 @@ class SuggestedTagsPanel( QW.QWidget ):
def SetSelectedTags( self, tags ):
if self._related_tags is not None:
self._related_tags.SetSelectedTags( tags )
def TakeFocusForUser( self, command ):
if command == CAC.SIMPLE_SHOW_AND_FOCUS_MANAGE_TAGS_FAVOURITE_TAGS:


@ -2298,6 +2298,8 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel, CAC.ApplicationComma
self._suggested_tags.mouseActivationOccurred.connect( self.SetTagBoxFocus )
self._tags_box.tagsSelected.connect( self._suggested_tags.SetSelectedTags )
def _EnterTags( self, tags, only_add = False, only_remove = False, forced_reason = None ):
@ -4473,18 +4475,33 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
current_dict = dict( current_pairs )
current_olds = set( current_dict.keys() )
for ( potential_old, potential_new ) in pairs:
if potential_new in current_dict:
loop_new = potential_new
seen_tags = set()
while loop_new in current_dict:
seen_tags.add( loop_new )
next_new = current_dict[ loop_new ]
if next_new in seen_tags:
QW.QMessageBox.warning( self, 'Loop Problem!', 'While trying to test the new pair(s) for potential loops, I think I ran across an existing loop! Please review everything and see if you can break any loops yourself.' )
message = 'The pair you mean to add seems to connect to a sibling loop already in your database! Please undo this loop manually. The tags involved in the loop are:'
message += os.linesep * 2
message += ', '.join( seen_tags )
QW.QMessageBox.critical( self, 'Error', message )
break
if next_new == potential_old:
pairs_to_auto_petition = [ ( loop_new, next_new ) ]
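The new sibling check walks the existing old-to-new map from the proposed pair's 'new' side: revisiting a tag already seen means the database already contains a loop, while reaching the pair's 'old' side means the new pair itself would close one. A standalone sketch of that walk, with return values simplified for illustration; the dialog instead warns or auto-petitions as shown above:

```python
def check_new_sibling_pair( current_dict, potential_old, potential_new ):
    
    # current_dict maps old_tag -> new_tag for the pairs already in the database
    seen_tags = set()
    loop_new = potential_new
    
    while loop_new in current_dict:
        
        seen_tags.add( loop_new )
        
        next_new = current_dict[ loop_new ]
        
        if next_new in seen_tags:
            
            return ( 'existing loop', seen_tags )
            
        
        if next_new == potential_old:
            
            # adding ( potential_old, potential_new ) would close this chain into a loop
            return ( 'new pair would create a loop', seen_tags | { next_new } )
            
        
        loop_new = next_new
        
    
    return ( 'ok', seen_tags )


existing_siblings = { 'b' : 'c', 'c' : 'd' }
print( check_new_sibling_pair( existing_siblings, 'd', 'b' ) )  # ( 'new pair would create a loop', {'b', 'c', 'd'} )
```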


@ -2098,7 +2098,7 @@ class CanvasWithHovers( CanvasWithDetails ):
# due to the mouse setPos below, the event pos can get funky I think due to out of order coordinate setting events, so we'll poll current value directly
event_pos = self.mapFromGlobal( QG.QCursor.pos() )
mouse_currently_shown = self.cursor() == QG.QCursor( QC.Qt.ArrowCursor )
mouse_currently_shown = self.cursor().shape() == QC.Qt.ArrowCursor
show_mouse = mouse_currently_shown
is_dragging = event.buttons() & QC.Qt.LeftButton and self._last_drag_pos is not None


@ -83,6 +83,27 @@ class EditExportFoldersPanel( ClientGUIScrolledPanels.EditPanel ):
metadata_routers = new_options.GetDefaultExportFilesMetadataRouters()
if len( metadata_routers ) > 0:
message = 'You have some default metadata sidecar settings, most likely from a previous file export. They look like this:'
message += os.linesep * 2
message += os.linesep.join( [ router.ToString( pretty = True ) for router in metadata_routers ] )
message += os.linesep * 2
message += 'Do you want these in the new export folder?'
( result, cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, message, no_label = 'no, I want an empty sidecar list', check_for_cancelled = True )
if cancelled:
return
if result != QW.QDialog.DialogCode.Accepted:
metadata_routers = []
period = 15 * 60
export_folder = ClientExportingFiles.ExportFolder(


@ -1489,6 +1489,8 @@ class ListBox( QW.QScrollArea ):
self._SelectionChanged()
self.widget().update()
@ -1781,6 +1783,11 @@ class ListBox( QW.QScrollArea ):
self._selected_terms = set( self._terms_to_logical_indices.keys() )
def _SelectionChanged( self ):
pass
def _SetVirtualSize( self ):
self.setWidgetResizable( True )
@ -2169,6 +2176,7 @@ COPY_ALL_SUBTAGS_WITH_COUNTS = 7
class ListBoxTags( ListBox ):
tagsSelected = QC.Signal( set )
can_spawn_new_windows = True
def __init__( self, parent, *args, tag_display_type: int = ClientTags.TAG_DISPLAY_STORAGE, **kwargs ):
@ -2437,6 +2445,13 @@ class ListBoxTags( ListBox ):
pass
def _SelectionChanged( self ):
tags = set( self._GetTagsFromTerms( self._selected_terms ) )
self.tagsSelected.emit( tags )
def _UpdateBackgroundColour( self ):
new_options = HG.client_controller.new_options
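The ListBox gains a tagsSelected signal that fires from _SelectionChanged, which ManageTagsPanel (in the hunk further up) wires to the suggestions panel's SetSelectedTags so the related-tags search can use the current selection as its reference tags. A minimal qtpy sketch of the same signal pattern, not the actual hydrus ListBox:

```python
from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

class TinyTagList( QW.QListWidget ):
    
    tagsSelected = QC.Signal( set )
    
    def __init__( self, parent = None ):
        
        QW.QListWidget.__init__( self, parent )
        
        self.itemSelectionChanged.connect( self._SelectionChanged )
        
    
    def _SelectionChanged( self ):
        
        tags = { item.text() for item in self.selectedItems() }
        
        self.tagsSelected.emit( tags )
        

# a consumer would then do something like:
# tag_list.tagsSelected.connect( suggestions_panel.SetSelectedTags )
```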


@ -996,6 +996,8 @@ class ManagementPanel( QW.QScrollArea ):
self._page = page
self._page_key = self._management_controller.GetVariable( 'page_key' )
self._page_state = CC.PAGE_STATE_NORMAL
self._current_selection_tags_list = None
self._media_sort = ClientGUIResultsSortCollect.MediaSortControl( self, management_controller = self._management_controller )
@ -1097,6 +1099,11 @@ class ManagementPanel( QW.QScrollArea ):
return media_panel
def GetPageState( self ) -> int:
return self._page_state
def PageHidden( self ):
pass
@ -1620,16 +1627,20 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._page.SwapMediaPanel( panel )
self._page_state = CC.PAGE_STATE_NORMAL
def _ShowRandomPotentialDupes( self ):
( file_search_context_1, file_search_context_2, dupe_search_type, pixel_dupes_preference, max_hamming_distance ) = self._GetDuplicateFileSearchData()
self._page_state = CC.PAGE_STATE_SEARCHING
hashes = self._controller.Read( 'random_potential_duplicate_hashes', file_search_context_1, file_search_context_2, dupe_search_type, pixel_dupes_preference, max_hamming_distance )
if len( hashes ) == 0:
HydrusData.ShowText( 'No files were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' )
HydrusData.ShowText( 'No random potential duplicates were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' )
self._ShowPotentialDupes( hashes )
@ -5501,6 +5512,8 @@ class ManagementPanelQuery( ManagementPanel ):
self._page.SwapMediaPanel( panel )
self._page_state = CC.PAGE_STATE_SEARCHING_CANCELLED
self._UpdateCancelButton()
@ -5576,6 +5589,8 @@ class ManagementPanelQuery( ManagementPanel ):
panel = ClientGUIResults.MediaPanelLoading( self._page, self._page_key, location_context )
self._page_state = CC.PAGE_STATE_SEARCHING
else:
panel = ClientGUIResults.MediaPanelThumbnails( self._page, self._page_key, location_context, [] )
@ -5749,6 +5764,8 @@ class ManagementPanelQuery( ManagementPanel ):
self._page.SwapMediaPanel( panel )
self._page_state = CC.PAGE_STATE_NORMAL
def Start( self ):


@ -664,6 +664,7 @@ class Page( QW.QWidget ):
d[ 'name' ] = self._management_controller.GetPageName()
d[ 'page_key' ] = self._page_key.hex()
d[ 'page_state' ] = self.GetPageState()
d[ 'page_type' ] = self._management_controller.GetType()
management_info = self._management_controller.GetAPIInfoDict( simple )
@ -770,6 +771,18 @@ class Page( QW.QWidget ):
return { self._page_key }
def GetPageState( self ) -> int:
if self._initialised:
return self._management_panel.GetPageState()
else:
return CC.PAGE_STATE_INITIALISING
def GetParentNotebook( self ):
return self._parent_notebook
@ -819,6 +832,7 @@ class Page( QW.QWidget ):
root[ 'name' ] = self.GetName()
root[ 'page_key' ] = self._page_key.hex()
root[ 'page_state' ] = self.GetPageState()
root[ 'page_type' ] = self._management_controller.GetType()
root[ 'selected' ] = is_selected
@ -2286,7 +2300,12 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
def GetAPIInfoDict( self, simple ):
return {}
return {
'name' : self.GetName(),
'page_key' : self._page_key.hex(),
'page_state' : self.GetPageState(),
'page_type' : ClientGUIManagement.MANAGEMENT_TYPE_PAGE_OF_PAGES
}
def GetCurrentGUISession( self, name: str, only_changed_page_data: bool, about_to_save: bool ):
@ -2540,6 +2559,11 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
return self._GetPages()
def GetPageState( self ) -> int:
return CC.PAGE_STATE_NORMAL
def GetPrettyStatusForStatusBar( self ):
( num_files, ( num_value, num_range ) ) = self.GetNumFileSummary()
@ -2596,6 +2620,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
root[ 'name' ] = self.GetName()
root[ 'page_key' ] = self._page_key.hex()
root[ 'page_state' ] = self.GetPageState()
root[ 'page_type' ] = ClientGUIManagement.MANAGEMENT_TYPE_PAGE_OF_PAGES
root[ 'selected' ] = is_selected
root[ 'pages' ] = my_pages_list
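Both Page and PagesNotebook now include a 'page_state' field in the info dictionaries they build for the Client API and session snapshots, using the PAGE_STATE_ constants added earlier in this commit. An illustrative shape of such a dict; the values are made up, and 'page_type' stands in for whichever MANAGEMENT_TYPE_ constant applies:

```python
PAGE_STATE_NORMAL = 0
PAGE_STATE_INITIALISING = 1
PAGE_STATE_SEARCHING = 2
PAGE_STATE_SEARCHING_CANCELLED = 3

example_page_info = {
    'name' : 'my search page',
    'page_key' : '0123456789abcdef',  # hex page key, shortened here
    'page_state' : PAGE_STATE_NORMAL,
    'page_type' : 0,                  # stands in for a MANAGEMENT_TYPE_ constant
    'selected' : True
}

print( example_page_info )
```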


@ -43,7 +43,9 @@ def InsertOtherPredicatesForRead( predicates: list, parsed_autocomplete_text: Cl
if include_unusual_predicate_types:
non_tag_predicates = list( parsed_autocomplete_text.GetNonTagFileSearchPredicates() )
allow_auto_wildcard_conversion = True
non_tag_predicates = list( parsed_autocomplete_text.GetNonTagFileSearchPredicates( allow_auto_wildcard_conversion ) )
non_tag_predicates.reverse()
@ -58,11 +60,11 @@ def InsertOtherPredicatesForRead( predicates: list, parsed_autocomplete_text: Cl
PutAtTopOfMatches( predicates, under_construction_or_predicate )
def InsertTagPredicates( predicates: list, tag_service_key: bytes, parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText, insert_if_does_not_exist: bool = True ):
def InsertTagPredicates( predicates: list, tag_service_key: bytes, parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText, allow_auto_wildcard_conversion: bool, insert_if_does_not_exist: bool = True ):
if parsed_autocomplete_text.IsTagSearch():
if parsed_autocomplete_text.IsTagSearch( allow_auto_wildcard_conversion):
tag_predicate = parsed_autocomplete_text.GetImmediateFileSearchPredicate()
tag_predicate = parsed_autocomplete_text.GetImmediateFileSearchPredicate( allow_auto_wildcard_conversion )
actual_tag = tag_predicate.GetValue()
@ -185,7 +187,9 @@ def ReadFetch(
if fetch_from_db:
is_explicit_wildcard = parsed_autocomplete_text.IsExplicitWildcard()
allow_auto_wildcard_conversion = True
is_explicit_wildcard = parsed_autocomplete_text.IsExplicitWildcard( allow_auto_wildcard_conversion )
small_exact_match_search = ShouldDoExactSearch( parsed_autocomplete_text )
@ -340,7 +344,9 @@ def ReadFetch(
InsertTagPredicates( matches, tag_service_key, parsed_autocomplete_text, insert_if_does_not_exist = False )
allow_auto_wildcard_conversion = True
InsertTagPredicates( matches, tag_service_key, parsed_autocomplete_text, allow_auto_wildcard_conversion, insert_if_does_not_exist = False )
InsertOtherPredicatesForRead( matches, parsed_autocomplete_text, include_unusual_predicate_types, under_construction_or_predicate )
@ -376,7 +382,9 @@ def PutAtTopOfMatches( matches: list, predicate: ClientSearch.Predicate, insert_
def ShouldDoExactSearch( parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText ):
if parsed_autocomplete_text.IsExplicitWildcard():
allow_auto_wildcard_conversion = True
if parsed_autocomplete_text.IsExplicitWildcard( allow_auto_wildcard_conversion ):
return False
@ -418,9 +426,12 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
else:
is_explicit_wildcard = parsed_autocomplete_text.IsExplicitWildcard()
allow_auto_wildcard_conversion = False
strict_search_text = parsed_autocomplete_text.GetSearchText( False )
# TODO: This allow_auto_wildcard_conversion hack to handle allow_unnamespaced_search_gives_any_namespace_wildcards is hell. I should write IsImplicitWildcard or something!
is_explicit_wildcard = parsed_autocomplete_text.IsExplicitWildcard( allow_auto_wildcard_conversion )
strict_search_text = parsed_autocomplete_text.GetSearchText( False, allow_auto_wildcard_conversion = allow_auto_wildcard_conversion )
autocomplete_search_text = parsed_autocomplete_text.GetSearchText( True )
small_exact_match_search = ShouldDoExactSearch( parsed_autocomplete_text )
@ -490,7 +501,9 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
matches = ClientSearch.SortPredicates( matches )
InsertTagPredicates( matches, display_tag_service_key, parsed_autocomplete_text )
allow_auto_wildcard_conversion = False
InsertTagPredicates( matches, display_tag_service_key, parsed_autocomplete_text, allow_auto_wildcard_conversion )
HG.client_controller.CallAfterQtSafe( win, 'write a/c fetch', results_callable, job_key, parsed_autocomplete_text, results_cache, matches )
@ -1909,7 +1922,9 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):
if parsed_autocomplete_text.IsAcceptableForFileSearches():
return parsed_autocomplete_text.GetImmediateFileSearchPredicate()
allow_auto_wildcard_conversion = True
return parsed_autocomplete_text.GetImmediateFileSearchPredicate( allow_auto_wildcard_conversion )
else:
@ -2656,9 +2671,11 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
parsed_autocomplete_text = self._GetParsedAutocompleteText()
if parsed_autocomplete_text.IsTagSearch():
allow_auto_wildcard_conversion = False
if parsed_autocomplete_text.IsTagSearch( allow_auto_wildcard_conversion ):
return parsed_autocomplete_text.GetImmediateFileSearchPredicate()
return parsed_autocomplete_text.GetImmediateFileSearchPredicate( allow_auto_wildcard_conversion )
else:
@ -2765,7 +2782,9 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
stub_predicates = []
InsertTagPredicates( stub_predicates, self._display_tag_service_key, parsed_autocomplete_text )
allow_auto_wildcard_conversion = False
InsertTagPredicates( stub_predicates, self._display_tag_service_key, parsed_autocomplete_text, allow_auto_wildcard_conversion )
AppendLoadingPredicate( stub_predicates )


@ -1602,7 +1602,6 @@ class ReviewServicePanel( QW.QWidget ):
if not HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
self._id_button.hide()
self._service_key_button.hide()
vbox = QP.VBoxLayout()


@ -1,4 +1,3 @@
import bisect
import collections
import itertools
import os
@ -1921,8 +1920,61 @@ class FileSeedCacheStatus( HydrusSerialisable.SerialisableBase ):
self._latest_added_time = latest_added_time
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE_STATUS ] = FileSeedCacheStatus
def WalkToNextFileSeed( status, starting_file_seed, file_seeds, file_seeds_to_indices, statuses_to_file_seeds ):
# the file seed given is the starting point and can be the answer
# but if it is wrong, or no longer tracked, then let's walk through them until we get one, or None
# along the way, we'll note what's in the wrong place
wrong_file_seeds = set()
if starting_file_seed in file_seeds_to_indices:
index = file_seeds_to_indices[ starting_file_seed ]
else:
index = 0
file_seed = starting_file_seed
while True:
# no need to walk further, we are good
if file_seed.status == status:
result = file_seed
break
# this file seed has the wrong status, move on
if file_seed in statuses_to_file_seeds[ status ]:
wrong_file_seeds.add( file_seed )
index += 1
if index >= len( file_seeds ):
result = None
break
file_seed = file_seeds[ index ]
return ( result, wrong_file_seeds )
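WalkToNextFileSeed scans forward through the ordered seed list from a candidate until it finds one whose live status actually matches, collecting any seeds it passes whose cached status bucket has gone stale. A toy run of the same idea, with invented classes and statuses:

```python
STATUS_UNKNOWN = 0
STATUS_SUCCESSFUL = 1

class FakeSeed( object ):
    
    def __init__( self, name, status ):
        
        self.name = name
        self.status = status
        
    
    def __repr__( self ):
        
        return self.name
        

a = FakeSeed( 'a', STATUS_SUCCESSFUL )
b = FakeSeed( 'b', STATUS_SUCCESSFUL )  # the cached 'unknown' bucket is stale for this one
c = FakeSeed( 'c', STATUS_UNKNOWN )

file_seeds = [ a, b, c ]
file_seeds_to_indices = { seed : i for ( i, seed ) in enumerate( file_seeds ) }
statuses_to_file_seeds = { STATUS_UNKNOWN : { b, c }, STATUS_SUCCESSFUL : { a } }

def walk_to_next( status, starting_seed ):
    
    # scan forward from the candidate until the live status matches,
    # remembering seeds that sit in the wrong cached bucket along the way
    wrong = set()
    index = file_seeds_to_indices.get( starting_seed, 0 )
    seed = starting_seed
    
    while True:
        
        if seed.status == status:
            
            return ( seed, wrong )
            
        
        if seed in statuses_to_file_seeds[ status ]:
            
            wrong.add( seed )
            
        
        index += 1
        
        if index >= len( file_seeds ):
            
            return ( None, wrong )
            
        
        seed = file_seeds[ index ]
        

print( walk_to_next( STATUS_UNKNOWN, b ) )  # ( c, {b} )
```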
class FileSeedCache( HydrusSerialisable.SerialisableBase ):
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE
@ -1939,14 +1991,21 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
self._file_seeds_to_indices = {}
self._statuses_to_indexed_file_seeds = collections.defaultdict( list )
# if there's a file seed here, it is the earliest
# if there's none in here, we know the count is 0
# if status is absent, we don't know
self._observed_statuses_to_next_file_seeds = {}
self._file_seeds_to_observed_statuses = {}
self._statuses_to_file_seeds = collections.defaultdict( set )
self._file_seed_cache_key = HydrusData.GenerateKey()
self._status_cache = FileSeedCacheStatus()
self._status_dirty = True
self._statuses_to_indexed_file_seeds_dirty = True
self._statuses_to_file_seeds_dirty = True
self._file_seeds_to_indices_dirty = True
self._lock = threading.Lock()
@ -1956,56 +2015,88 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
return len( self._file_seeds )
def _FileSeedIndicesJustChanged( self ):
def _FixStatusesToFileSeeds( self, file_seeds: typing.Collection[ FileSeed ] ):
self._file_seeds_to_indices = { file_seed : index for ( index, file_seed ) in enumerate( self._file_seeds ) }
if self._statuses_to_file_seeds_dirty:
return
self._SetStatusesToFileSeedsDirty()
file_seeds_to_indices = self._GetFileSeedsToIndices()
statuses_to_file_seeds = self._GetStatusesToFileSeeds()
def _FixFileSeedsStatusPosition( self, file_seeds ):
if len( file_seeds ) == 0:
return
indices_and_file_seeds_affected = []
outstanding_others_to_fix = set()
for file_seed in file_seeds:
if file_seed in self._file_seeds_to_indices:
indices_and_file_seeds_affected.append( ( self._file_seeds_to_indices[ file_seed ], file_seed ) )
else:
self._SetStatusesToFileSeedsDirty()
return
want_a_record = file_seed in file_seeds_to_indices
record_exists = file_seed in self._file_seeds_to_observed_statuses
for row in indices_and_file_seeds_affected:
correct_status = row[1].status
if row in self._statuses_to_indexed_file_seeds[ correct_status ]:
if not want_a_record and not record_exists:
continue
for ( status, indices_and_file_seeds ) in self._statuses_to_indexed_file_seeds.items():
correct_status = file_seed.status
if record_exists:
if status == correct_status:
set_status = self._file_seeds_to_observed_statuses[ file_seed ]
if set_status != correct_status or not want_a_record:
continue
if set_status in self._observed_statuses_to_next_file_seeds:
if self._observed_statuses_to_next_file_seeds[ set_status ] == file_seed:
# this 'next' is now wrong, so fast forward to the correct one, or None
( result, wrong_file_seeds ) = WalkToNextFileSeed( set_status, file_seed, self._file_seeds, file_seeds_to_indices, statuses_to_file_seeds )
self._observed_statuses_to_next_file_seeds[ set_status ] = result
outstanding_others_to_fix.update( wrong_file_seeds )
statuses_to_file_seeds[ set_status ].discard( file_seed )
del self._file_seeds_to_observed_statuses[ file_seed ]
record_exists = False
if row in indices_and_file_seeds:
if want_a_record:
if not record_exists:
indices_and_file_seeds.remove( row )
statuses_to_file_seeds[ correct_status ].add( file_seed )
bisect.insort( self._statuses_to_indexed_file_seeds[ correct_status ], row )
break
self._file_seeds_to_observed_statuses[ file_seed ] = correct_status
if correct_status in self._observed_statuses_to_next_file_seeds:
current_next_file_seed = self._observed_statuses_to_next_file_seeds[ correct_status ]
if current_next_file_seed is None or file_seeds_to_indices[ file_seed ] < file_seeds_to_indices[ current_next_file_seed ]:
self._observed_statuses_to_next_file_seeds[ correct_status ] = file_seed
outstanding_others_to_fix.difference_update( file_seeds )
if len( outstanding_others_to_fix ) > 0:
self._FixStatusesToFileSeeds( outstanding_others_to_fix )
@ -2029,15 +2120,25 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
else:
if self._statuses_to_indexed_file_seeds_dirty:
self._RegenerateStatusesToFileSeeds()
statuses_to_file_seeds = self._GetStatusesToFileSeeds()
file_seeds_to_indices = self._GetFileSeedsToIndices()
return [ file_seed for ( index, file_seed ) in self._statuses_to_indexed_file_seeds[ status ] ]
return sorted( statuses_to_file_seeds[ status ], key = lambda f_s: file_seeds_to_indices[ f_s ] )
def _GetFileSeedsToIndices( self ) -> typing.Dict[ FileSeed, int ]:
if self._file_seeds_to_indices_dirty:
self._file_seeds_to_indices = { file_seed : index for ( index, file_seed ) in enumerate( self._file_seeds ) }
self._file_seeds_to_indices_dirty = False
return self._file_seeds_to_indices
def _GetLatestAddedTime( self ):
if len( self._file_seeds ) == 0:
@ -2069,36 +2170,41 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
def _GetNextFileSeed( self, status: int ) -> typing.Optional[ FileSeed ]:
statuses_to_file_seeds = self._GetStatusesToFileSeeds()
file_seeds_to_indices = self._GetFileSeedsToIndices()
# the problem with this is if a file seed recently changed but 'notifyupdated' hasn't had a chance to go yet
# there could be a FS in a list other than the one we are looking at that has the status we want
# _however_, it seems like I do not do any async calls to notifyupdated in the actual FSC, only from notifyupdated to GUI elements, so we _seem_ to be good
# _however_, it seems like I do not do any async calls to notifyupdated in the actual FSC, only from notifyupdated to GUI elements, so we _seem_ to be good to rely on this in this way
if self._statuses_to_indexed_file_seeds_dirty:
if status not in self._observed_statuses_to_next_file_seeds:
self._RegenerateStatusesToFileSeeds()
file_seeds = statuses_to_file_seeds[ status ]
indexed_file_seeds = self._statuses_to_indexed_file_seeds[ status ]
while len( indexed_file_seeds ) > 0:
row = indexed_file_seeds[ 0 ]
file_seed = row[1]
if file_seed.status == status:
if len( file_seeds ) == 0:
return file_seed
self._observed_statuses_to_next_file_seeds[ status ] = None
else:
self._FixFileSeedsStatusPosition( ( file_seed, ) )
self._observed_statuses_to_next_file_seeds[ status ] = min( file_seeds, key = lambda f_s: file_seeds_to_indices[ f_s ] )
indexed_file_seeds = self._statuses_to_indexed_file_seeds[ status ]
file_seed = self._observed_statuses_to_next_file_seeds[ status ]
if file_seed is None:
return None
return None
( result, wrong_file_seeds ) = WalkToNextFileSeed( status, file_seed, self._file_seeds, file_seeds_to_indices, statuses_to_file_seeds )
self._observed_statuses_to_next_file_seeds[ status ] = result
self._FixStatusesToFileSeeds( wrong_file_seeds )
return file_seed
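Editor's note: the `WalkToNextFileSeed` helper is called here and in `_FixStatusesToFileSeeds` above, but its body is not part of the hunks shown in this diff. A minimal sketch of what such a walker could look like, matching the call signature used above; the body below is an assumption, not the commit's actual implementation:

```python
# hypothetical sketch only: the signature matches the call sites above, the body is guessed
def WalkToNextFileSeed( status, file_seed, file_seeds, file_seeds_to_indices, statuses_to_file_seeds ):
    
    # walk forward from the given seed, looking for the next seed whose live status
    # still matches; collect any seeds whose cached status entry no longer matches
    
    wrong_file_seeds = set()
    
    start_index = file_seeds_to_indices[ file_seed ]
    
    for candidate in file_seeds[ start_index : ]:
        
        if candidate.status == status:
            
            return ( candidate, wrong_file_seeds )
            
        
        if candidate in statuses_to_file_seeds[ status ]:
            
            wrong_file_seeds.add( candidate )
            
        
    
    return ( None, wrong_file_seeds )
```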
def _GetSerialisableInfo( self ):
@ -2125,14 +2231,11 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
statuses_to_counts = collections.Counter()
if self._statuses_to_indexed_file_seeds_dirty:
self._RegenerateStatusesToFileSeeds()
statuses_to_file_seeds = self._GetStatusesToFileSeeds()
for ( status, indexed_file_seeds ) in self._statuses_to_indexed_file_seeds.items():
for ( status, file_seeds ) in statuses_to_file_seeds.items():
count = len( indexed_file_seeds )
count = len( file_seeds )
if count > 0:
@ -2143,11 +2246,51 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
return statuses_to_counts
def _GetStatusesToFileSeeds( self ) -> typing.Dict[ int, typing.Set[ FileSeed ] ]:
file_seeds_to_indices = self._GetFileSeedsToIndices()
if self._statuses_to_file_seeds_dirty:
self._file_seeds_to_observed_statuses = {}
self._statuses_to_file_seeds = collections.defaultdict( set )
self._observed_statuses_to_next_file_seeds = {}
for ( file_seed, index ) in file_seeds_to_indices.items():
status = file_seed.status
self._statuses_to_file_seeds[ status ].add( file_seed )
self._file_seeds_to_observed_statuses[ file_seed ] = status
if status not in self._observed_statuses_to_next_file_seeds:
self._observed_statuses_to_next_file_seeds[ status ] = file_seed
else:
current_next = self._observed_statuses_to_next_file_seeds[ status ]
if current_next is not None and index < file_seeds_to_indices[ current_next ]:
self._observed_statuses_to_next_file_seeds[ status ] = file_seed
self._statuses_to_file_seeds_dirty = False
return self._statuses_to_file_seeds
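Editor's note: to make the new bookkeeping concrete, here is a toy, self-contained illustration of the three structures the regeneration above builds. `FakeSeed` and the status constants are stand-ins invented for this sketch, not hydrus objects:

```python
import collections

# stand-ins for FileSeed and the CC status constants, invented for this illustration
STATUS_UNKNOWN = 0
STATUS_DONE = 1

class FakeSeed:
    
    def __init__( self, name, status ):
        
        self.name = name
        self.status = status
        
    

seeds = [ FakeSeed( 'a', STATUS_UNKNOWN ), FakeSeed( 'b', STATUS_DONE ), FakeSeed( 'c', STATUS_UNKNOWN ) ]

statuses_to_file_seeds = collections.defaultdict( set )
file_seeds_to_observed_statuses = {}
observed_statuses_to_next_file_seeds = {}

for ( index, seed ) in enumerate( seeds ):
    
    statuses_to_file_seeds[ seed.status ].add( seed )
    file_seeds_to_observed_statuses[ seed ] = seed.status
    
    if seed.status not in observed_statuses_to_next_file_seeds:
        
        observed_statuses_to_next_file_seeds[ seed.status ] = seed
        
    

# the 'next' cache ends up pointing at the lowest-index seed per status:
# { STATUS_UNKNOWN : seed 'a', STATUS_DONE : seed 'b' }
```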
def _HasFileSeed( self, file_seed: FileSeed ):
file_seeds_to_indices = self._GetFileSeedsToIndices()
search_file_seeds = file_seed.GetSearchFileSeeds()
has_file_seed = True in ( search_file_seed in self._file_seeds_to_indices for search_file_seed in search_file_seeds )
has_file_seed = True in ( search_file_seed in file_seeds_to_indices for search_file_seed in search_file_seeds )
return has_file_seed
@ -2158,30 +2301,30 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
self._file_seeds = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_info )
self._FileSeedIndicesJustChanged()
def _RegenerateStatusesToFileSeeds( self ):
def _NotifyFileSeedsUpdated( self, file_seeds: typing.Collection[ FileSeed ] ):
self._statuses_to_indexed_file_seeds = collections.defaultdict( list )
for ( file_seed, index ) in self._file_seeds_to_indices.items():
if len( file_seeds ) == 0:
self._statuses_to_indexed_file_seeds[ file_seed.status ].append( ( index, file_seed ) )
return
for indexed_file_seeds in self._statuses_to_indexed_file_seeds.values():
indexed_file_seeds.sort()
HG.client_controller.pub( 'file_seed_cache_file_seeds_updated', self._file_seed_cache_key, file_seeds )
self._statuses_to_indexed_file_seeds_dirty = False
def _SetFileSeedsToIndicesDirty( self ):
self._file_seeds_to_indices_dirty = True
self._observed_statuses_to_next_file_seeds = {}
def _SetStatusesToFileSeedsDirty( self ):
self._statuses_to_indexed_file_seeds_dirty = True
# this is never actually called, which is neat! I think we are 'perfect' on this thing maintaining itself after initial generation
self._statuses_to_file_seeds_dirty = True
def _SetStatusDirty( self ):
@ -2406,6 +2549,8 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
with self._lock:
file_seeds_to_indices = self._GetFileSeedsToIndices()
for file_seed in file_seeds:
if self._HasFileSeed( file_seed ):
@ -2445,42 +2590,50 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
index = len( self._file_seeds ) - 1
self._file_seeds_to_indices[ file_seed ] = index
if not self._statuses_to_indexed_file_seeds_dirty:
self._statuses_to_indexed_file_seeds[ file_seed.status ].append( ( index, file_seed ) )
file_seeds_to_indices[ file_seed ] = index
self._FixStatusesToFileSeeds( updated_or_new_file_seeds )
self._SetStatusDirty()
self.NotifyFileSeedsUpdated( updated_or_new_file_seeds )
self._NotifyFileSeedsUpdated( updated_or_new_file_seeds )
return len( updated_or_new_file_seeds )
def AdvanceFileSeed( self, file_seed: FileSeed ):
updated_file_seeds = []
with self._lock:
if file_seed in self._file_seeds_to_indices:
file_seeds_to_indices = self._GetFileSeedsToIndices()
if file_seed in file_seeds_to_indices:
index = self._file_seeds_to_indices[ file_seed ]
index = file_seeds_to_indices[ file_seed ]
if index > 0:
swapped_file_seed = self._file_seeds[ index - 1 ]
self._file_seeds.remove( file_seed )
self._file_seeds.insert( index - 1, file_seed )
self._FileSeedIndicesJustChanged()
file_seeds_to_indices[ file_seed ] = index - 1
file_seeds_to_indices[ swapped_file_seed ] = index
updated_file_seeds = [ file_seed, swapped_file_seed ]
self._FixStatusesToFileSeeds( updated_file_seeds )
self.NotifyFileSeedsUpdated( ( file_seed, ) )
self._NotifyFileSeedsUpdated( updated_file_seeds )
def CanCompact( self, compact_before_this_source_time: int ):
@ -2518,49 +2671,54 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
return
new_file_seeds = HydrusSerialisable.SerialisableList()
removee_file_seeds = set()
for file_seed in self._file_seeds[:-self.COMPACT_NUMBER]:
still_to_do = file_seed.status == CC.STATUS_UNKNOWN
still_relevant = self._GetSourceTimestampForVelocityCalculations( file_seed ) > compact_before_this_source_time
if still_to_do or still_relevant:
if not ( still_to_do or still_relevant ):
new_file_seeds.append( file_seed )
removee_file_seeds.add( file_seed )
new_file_seeds.extend( self._file_seeds[-self.COMPACT_NUMBER:] )
self._file_seeds = new_file_seeds
self._FileSeedIndicesJustChanged()
self._SetStatusDirty()
self.RemoveFileSeeds( removee_file_seeds )
def DelayFileSeed( self, file_seed: FileSeed ):
updated_file_seeds = []
with self._lock:
if file_seed in self._file_seeds_to_indices:
file_seeds_to_indices = self._GetFileSeedsToIndices()
if file_seed in file_seeds_to_indices:
index = self._file_seeds_to_indices[ file_seed ]
index = file_seeds_to_indices[ file_seed ]
if index < len( self._file_seeds ) - 1:
swapped_file_seed = self._file_seeds[ index + 1 ]
self._file_seeds.remove( file_seed )
self._file_seeds.insert( index + 1, file_seed )
self._FileSeedIndicesJustChanged()
file_seeds_to_indices[ swapped_file_seed ] = index
file_seeds_to_indices[ file_seed ] = index + 1
updated_file_seeds = [ file_seed, swapped_file_seed ]
self._FixStatusesToFileSeeds( updated_file_seeds )
self.NotifyFileSeedsUpdated( ( file_seed, ) )
self._NotifyFileSeedsUpdated( updated_file_seeds )
def GetAPIInfoDict( self, simple: bool ):
@ -2656,8 +2814,6 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
def GetFileSeedCount( self, status: int = None ):
result = 0
with self._lock:
if status is None:
@ -2666,12 +2822,9 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
else:
if self._statuses_to_indexed_file_seeds_dirty:
self._RegenerateStatusesToFileSeeds()
statuses_to_file_seeds = self._GetStatusesToFileSeeds()
return len( self._statuses_to_indexed_file_seeds[ status ] )
return len( statuses_to_file_seeds[ status ] )
@ -2690,7 +2843,9 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
with self._lock:
return self._file_seeds_to_indices[ file_seed ]
file_seeds_to_indices = self._GetFileSeedsToIndices()
return file_seeds_to_indices[ file_seed ]
@ -2793,74 +2948,100 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
def InsertFileSeeds( self, index: int, file_seeds: typing.Collection[ FileSeed ] ):
if len( file_seeds ) == 0:
return 0
new_file_seeds = set()
file_seeds = HydrusData.DedupeList( file_seeds )
with self._lock:
index = min( index, len( self._file_seeds ) )
new_file_seeds = []
for file_seed in file_seeds:
if self._HasFileSeed( file_seed ) or file_seed in new_file_seeds:
file_seed.Normalise()
if self._HasFileSeed( file_seed ):
continue
file_seed.Normalise()
new_file_seeds.append( file_seed )
new_file_seeds.add( file_seed )
if len( file_seeds ) == 0:
return 0
index = min( index, len( self._file_seeds ) )
original_insertion_index = index
for file_seed in new_file_seeds:
self._file_seeds.insert( index, file_seed )
index += 1
self._FileSeedIndicesJustChanged()
self._SetFileSeedsToIndicesDirty()
self._SetStatusDirty()
self._FixStatusesToFileSeeds( new_file_seeds )
updated_file_seeds = self._file_seeds[ original_insertion_index : ]
self.NotifyFileSeedsUpdated( new_file_seeds )
self._NotifyFileSeedsUpdated( updated_file_seeds )
return len( new_file_seeds )
def NotifyFileSeedsUpdated( self, file_seeds: typing.Collection[ FileSeed ] ):
if len( file_seeds ) == 0:
return
with self._lock:
if not self._statuses_to_indexed_file_seeds_dirty:
self._FixFileSeedsStatusPosition( file_seeds )
#
self._FixStatusesToFileSeeds( file_seeds )
self._SetStatusDirty()
HG.client_controller.pub( 'file_seed_cache_file_seeds_updated', self._file_seed_cache_key, file_seeds )
self._NotifyFileSeedsUpdated( file_seeds )
def RemoveFileSeeds( self, file_seeds: typing.Iterable[ FileSeed ] ):
def RemoveFileSeeds( self, file_seeds_to_delete: typing.Iterable[ FileSeed ] ):
with self._lock:
file_seeds_to_delete = set( file_seeds )
file_seeds_to_indices = self._GetFileSeedsToIndices()
file_seeds_to_delete = { file_seed for file_seed in file_seeds_to_delete if file_seed in file_seeds_to_indices }
if len( file_seeds_to_delete ) == 0:
return
earliest_affected_index = min( ( file_seeds_to_indices[ file_seed ] for file_seed in file_seeds_to_delete ) )
self._file_seeds = HydrusSerialisable.SerialisableList( [ file_seed for file_seed in self._file_seeds if file_seed not in file_seeds_to_delete ] )
self._FileSeedIndicesJustChanged()
self._SetFileSeedsToIndicesDirty()
self._SetStatusDirty()
self._FixStatusesToFileSeeds( file_seeds_to_delete )
index_shuffled_file_seeds = self._file_seeds[ earliest_affected_index : ]
self.NotifyFileSeedsUpdated( file_seeds_to_delete )
updated_file_seeds = file_seeds_to_delete.union( index_shuffled_file_seeds )
self._NotifyFileSeedsUpdated( updated_file_seeds )
def RemoveFileSeedsByStatus( self, statuses_to_remove: typing.Collection[ int ] ):
@ -2894,8 +3075,12 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
file_seed.SetStatus( CC.STATUS_UNKNOWN )
self._FixStatusesToFileSeeds( failed_file_seeds )
self._SetStatusDirty()
self.NotifyFileSeedsUpdated( failed_file_seeds )
self._NotifyFileSeedsUpdated( failed_file_seeds )
def RetryIgnored( self, ignored_regex = None ):
@ -2917,8 +3102,12 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
file_seed.SetStatus( CC.STATUS_UNKNOWN )
self._FixStatusesToFileSeeds( ignored_file_seeds )
self._SetStatusDirty()
self.NotifyFileSeedsUpdated( ignored_file_seeds )
self._NotifyFileSeedsUpdated( ignored_file_seeds )
def Reverse( self ):
@ -2927,10 +3116,12 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
self._file_seeds.reverse()
self._FileSeedIndicesJustChanged()
self._SetFileSeedsToIndicesDirty()
updated_file_seeds = list( self._file_seeds )
self.NotifyFileSeedsUpdated( list( self._file_seeds ) )
self._NotifyFileSeedsUpdated( updated_file_seeds )
def WorkToDo( self ):

View File

@ -298,7 +298,7 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
return False
def WorkOnURL( self, gallery_token_name, gallery_seed_log, file_seeds_callable, status_hook, title_hook, network_job_factory, network_job_presentation_context_factory, file_import_options, gallery_urls_seen_before = None ):
def WorkOnURL( self, gallery_token_name, gallery_seed_log: "GallerySeedLog", file_seeds_callable, status_hook, title_hook, network_job_factory, network_job_presentation_context_factory, file_import_options, gallery_urls_seen_before = None ):
if gallery_urls_seen_before is None:
@ -494,7 +494,7 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
sub_gallery_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
gallery_seed_log.AddGallerySeeds( sub_gallery_seeds )
gallery_seed_log.AddGallerySeeds( sub_gallery_seeds, parent_gallery_seed = self )
added_new_gallery_pages = True
@ -569,7 +569,7 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
next_gallery_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
gallery_seed_log.AddGallerySeeds( next_gallery_seeds )
gallery_seed_log.AddGallerySeeds( next_gallery_seeds, parent_gallery_seed = self )
added_new_gallery_pages = True
@ -760,7 +760,7 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
self._status_dirty = True
def AddGallerySeeds( self, gallery_seeds ):
def AddGallerySeeds( self, gallery_seeds, parent_gallery_seed: typing.Optional[ GallerySeed ] = None ) -> int:
if len( gallery_seeds ) == 0:
@ -786,23 +786,48 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
new_gallery_seeds.append( gallery_seed )
self._gallery_seeds.append( gallery_seed )
self._gallery_seeds_to_indices[ gallery_seed ] = len( self._gallery_seeds ) - 1
seen_urls.add( gallery_seed.url )
if len( new_gallery_seeds ) == 0:
return 0
if parent_gallery_seed is None or parent_gallery_seed not in self._gallery_seeds:
insertion_index = len( self._gallery_seeds )
else:
insertion_index = self._gallery_seeds.index( parent_gallery_seed ) + 1
original_insertion_index = insertion_index
for gallery_seed in new_gallery_seeds:
self._gallery_seeds.insert( insertion_index, gallery_seed )
insertion_index += 1
self._gallery_seeds_to_indices = { gallery_seed : index for ( index, gallery_seed ) in enumerate( self._gallery_seeds ) }
self._SetStatusDirty()
updated_gallery_seeds = self._gallery_seeds[ original_insertion_index : ]
self.NotifyGallerySeedsUpdated( new_gallery_seeds )
self.NotifyGallerySeedsUpdated( updated_gallery_seeds )
return len( new_gallery_seeds )
def AdvanceGallerySeed( self, gallery_seed ):
updated_gallery_seeds = []
with self._lock:
if gallery_seed in self._gallery_seeds_to_indices:
@ -811,16 +836,20 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
if index > 0:
swapped_gallery_seed = self._gallery_seeds[ index - 1 ]
self._gallery_seeds.remove( gallery_seed )
self._gallery_seeds.insert( index - 1, gallery_seed )
self._gallery_seeds_to_indices = { gallery_seed : index for ( index, gallery_seed ) in enumerate( self._gallery_seeds ) }
self._gallery_seeds_to_indices[ gallery_seed ] = index - 1
self._gallery_seeds_to_indices[ swapped_gallery_seed ] = index
updated_gallery_seeds = ( gallery_seed, swapped_gallery_seed )
self.NotifyGallerySeedsUpdated( ( gallery_seed, ) )
self.NotifyGallerySeedsUpdated( updated_gallery_seeds )
def CanCompact( self, compact_before_this_source_time ):
@ -900,6 +929,8 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
def DelayGallerySeed( self, gallery_seed ):
updated_gallery_seeds = []
with self._lock:
if gallery_seed in self._gallery_seeds_to_indices:
@ -908,16 +939,21 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
if index < len( self._gallery_seeds ) - 1:
swapped_gallery_seed = self._gallery_seeds[ index + 1 ]
self._gallery_seeds.remove( gallery_seed )
self._gallery_seeds.insert( index + 1, gallery_seed )
self._gallery_seeds_to_indices = { gallery_seed : index for ( index, gallery_seed ) in enumerate( self._gallery_seeds ) }
self._gallery_seeds_to_indices[ swapped_gallery_seed ] = index
self._gallery_seeds_to_indices[ gallery_seed ] = index + 1
updated_gallery_seeds = ( swapped_gallery_seed, gallery_seed )
self.NotifyGallerySeedsUpdated( ( gallery_seed, ) )
self.NotifyGallerySeedsUpdated( updated_gallery_seeds )
def GetExampleGallerySeed( self ):
@ -1062,6 +1098,11 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
def NotifyGallerySeedsUpdated( self, gallery_seeds ):
if len( gallery_seeds ) == 0:
return
with self._lock:
self._SetStatusDirty()
@ -1070,11 +1111,18 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
HG.client_controller.pub( 'gallery_seed_log_gallery_seeds_updated', self._gallery_seed_log_key, gallery_seeds )
def RemoveGallerySeeds( self, gallery_seeds ):
def RemoveGallerySeeds( self, gallery_seeds_to_delete ):
with self._lock:
gallery_seeds_to_delete = set( gallery_seeds )
gallery_seeds_to_delete = { gallery_seed for gallery_seed in gallery_seeds_to_delete if gallery_seed in self._gallery_seeds_to_indices }
if len( gallery_seeds_to_delete ) == 0:
return
earliest_affected_index = min( ( self._gallery_seeds_to_indices[ gallery_seed ] for gallery_seed in gallery_seeds_to_delete ) )
self._gallery_seeds = HydrusSerialisable.SerialisableList( [ gallery_seed for gallery_seed in self._gallery_seeds if gallery_seed not in gallery_seeds_to_delete ] )
@ -1082,8 +1130,12 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
self._SetStatusDirty()
index_shuffled_gallery_seeds = self._gallery_seeds[ earliest_affected_index : ]
self.NotifyGallerySeedsUpdated( gallery_seeds_to_delete )
updated_gallery_seeds = gallery_seeds_to_delete.union( index_shuffled_gallery_seeds )
self.NotifyGallerySeedsUpdated( updated_gallery_seeds )
def RemoveGallerySeedsByStatus( self, statuses_to_remove ):

View File

@ -669,8 +669,8 @@ class ImportFolder( HydrusSerialisable.SerialisableBaseNamed ):
def _CheckFolder( self, job_key ):
all_paths = ClientFiles.GetAllFilePaths( [ self._path ] )
( all_paths, num_sidecars ) = ClientFiles.GetAllFilePaths( [ self._path ] )
all_paths = HydrusPaths.FilterFreePaths( all_paths )

View File

@ -130,7 +130,7 @@ class SingleFileMetadataExporterMediaTags( HydrusSerialisable.SerialisableBase,
if len( tags ) > 0:
content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, add_content_action, ( tag, hashes ) ) for tag in rows ]
content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, add_content_action, ( tag, hashes ) ) for tag in tags ]
HG.client_controller.WriteSynchronous( 'content_updates', { self._service_key : content_updates } )

View File

@ -153,6 +153,9 @@ class SingleFileMetadataImporterMediaTags( HydrusSerialisable.SerialisableBase,
tags = media_result.GetTagsManager().GetCurrent( self._service_key, ClientTags.TAG_DISPLAY_STORAGE )
# turning ::) into :)
tags = { HydrusText.re_leading_double_colon.sub( ':', tag ) for tag in tags }
if self._string_processor.MakesChanges():
tags = self._string_processor.ProcessStrings( tags )

View File

@ -46,6 +46,7 @@ class HydrusServiceClientAPI( HydrusClientService ):
root.putChild( b'session_key', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAccountSessionKey( self._service, self._client_requests_domain ) )
root.putChild( b'verify_access_key', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAccountVerify( self._service, self._client_requests_domain ) )
root.putChild( b'get_services', ClientLocalServerResources.HydrusResourceClientAPIRestrictedGetServices( self._service, self._client_requests_domain ) )
root.putChild( b'get_service', ClientLocalServerResources.HydrusResourceClientAPIRestrictedGetService( self._service, self._client_requests_domain ) )
add_files = NoResource()
@ -63,7 +64,6 @@ class HydrusServiceClientAPI( HydrusClientService ):
add_tags.putChild( b'add_tags', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAddTagsAddTags( self._service, self._client_requests_domain ) )
add_tags.putChild( b'clean_tags', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAddTagsCleanTags( self._service, self._client_requests_domain ) )
add_tags.putChild( b'get_tag_services', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAddTagsGetTagServices( self._service, self._client_requests_domain ) )
add_tags.putChild( b'search_tags', ClientLocalServerResources.HydrusResourceClientAPIRestrictedAddTagsSearchTags( self._service, self._client_requests_domain ) )
add_urls = NoResource()

File diff suppressed because it is too large

View File

@ -84,8 +84,8 @@ options = {}
# Misc
NETWORK_VERSION = 20
SOFTWARE_VERSION = 513
CLIENT_API_VERSION = 40
SOFTWARE_VERSION = 514
CLIENT_API_VERSION = 41
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -606,6 +606,11 @@ def DebugPrint( debug_info ):
def DedupeList( xs: typing.Iterable ):
if isinstance( xs, set ):
return list( xs )
xs_seen = set()
xs_return = []
@ -1381,7 +1386,7 @@ def RestartProcess():
# exe is python's exe, me is the script
args = [ sys.executable ] + sys.argv
args = [ exe ] + sys.argv
else:
@ -1398,6 +1403,7 @@ def RestartProcess():
os.execv( exe, args )
def SampleSetByGettingFirst( s: set, n ):
# sampling from a big set can be slow, so if we don't care about super random, let's just rip off the front and let __hash__ be our random
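Editor's note: the function body is cut off by the hunk above. A minimal sketch of the idea the comment describes, assuming the function simply takes the first `n` items in the set's iteration order; the actual implementation is not shown here:

```python
import itertools

def SampleSetByGettingFirst( s: set, n ):
    
    # assumption: take the first n items in iteration order rather than paying for a
    # proper random sample; set ordering already depends on __hash__
    
    return set( itertools.islice( s, n ) )
```

Under that assumption, asking for more items than the set holds just returns the whole set, and no shuffling or copying of the full set is needed.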

View File

@ -18,6 +18,7 @@ re_one_or_more_whitespace = re.compile( r'\s+' ) # this does \t and friends too
# want to keep the 'leading space' part here, despite tag.strip() elsewhere, in case of some crazy '- test' tag
re_leading_garbage = re.compile( r'^(-|system:)+' )
re_leading_single_colon = re.compile( '^:(?!:)' )
re_leading_double_colon = re.compile( '^::(?!:)' )
re_leading_byte_order_mark = re.compile( '^\ufeff' ) # unicode .txt files are prepended with this, wew
HYDRUS_NOTE_NEWLINE = '\n'
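Editor's note: a quick illustration of the new `re_leading_double_colon` pattern as used at the earlier `# turning ::) into :)` call site; the example tags are made up:

```python
import re

re_leading_double_colon = re.compile( '^::(?!:)' )

print( re_leading_double_colon.sub( ':', '::)' ) )       # :)
print( re_leading_double_colon.sub( ':', '::happy:' ) )  # :happy:

# the negative lookahead leaves a triple leading colon alone
print( re_leading_double_colon.sub( ':', ':::wew' ) )    # :::wew
```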

View File

@ -2,6 +2,7 @@ import collections
import json
import os
import traceback
import typing
import urllib
CBOR_AVAILABLE = False
@ -300,7 +301,7 @@ def ParseNetworkBytesToParsedHydrusArgs( network_bytes ):
return args
def ParseTwistedRequestGETArgs( requests_args, int_params, byte_params, string_params, json_params, json_byte_list_params ):
def ParseTwistedRequestGETArgs( requests_args: dict, int_params, byte_params, string_params, json_params, json_byte_list_params ):
args = ParsedRequestArguments()
@ -311,9 +312,7 @@ def ParseTwistedRequestGETArgs( requests_args, int_params, byte_params, string_p
raise HydrusExceptions.NotAcceptable( 'Sorry, this service does not support CBOR!' )
for name_bytes in requests_args:
values_bytes = requests_args[ name_bytes ]
for ( name_bytes, values_bytes ) in requests_args.items():
try:
@ -413,6 +412,84 @@ def ParseTwistedRequestGETArgs( requests_args, int_params, byte_params, string_p
return args
variable_type_to_text_lookup = collections.defaultdict( lambda: 'unknown!' )
variable_type_to_text_lookup[ int ] = 'integer'
variable_type_to_text_lookup[ str ] = 'string'
variable_type_to_text_lookup[ bytes ] = 'hex-encoded bytestring'
variable_type_to_text_lookup[ bool ] = 'boolean'
variable_type_to_text_lookup[ list ] = 'list'
variable_type_to_text_lookup[ dict ] = 'object/dict'
def GetValueFromDict( dictionary: dict, key, expected_type, expected_list_type = None, expected_dict_types = None, default_value = None, none_on_missing = False ):
# not None because in JSON sometimes people put 'null' to mean 'did not enter this optional parameter'
if key in dictionary and dictionary[ key ] is not None:
value = dictionary[ key ]
TestVariableType( key, value, expected_type, expected_list_type = expected_list_type, expected_dict_types = expected_dict_types )
return value
else:
if default_value is None and not none_on_missing:
raise HydrusExceptions.BadRequestException( 'The required parameter "{}" was missing!'.format( key ) )
else:
return default_value
def TestVariableType( name: str, value: typing.Any, expected_type: type, expected_list_type = None, expected_dict_types = None, allowed_values = None ):
if not isinstance( value, expected_type ):
type_error_text = variable_type_to_text_lookup[ expected_type ]
raise HydrusExceptions.BadRequestException( 'The parameter "{}", with value "{}", was not the expected type: {}!'.format( name, value, type_error_text ) )
if allowed_values is not None and value not in allowed_values:
raise HydrusExceptions.BadRequestException( 'The parameter "{}", with value "{}", was not in the allowed values: {}!'.format( name, value, allowed_values ) )
if expected_type is list and expected_list_type is not None:
for item in value:
if not isinstance( item, expected_list_type ):
raise HydrusExceptions.BadRequestException( 'The list parameter "{}" held an item, "{}" that was {} and not the expected type: {}!'.format( name, item, type( item ), variable_type_to_text_lookup[ expected_list_type ] ) )
if expected_type is dict and expected_dict_types is not None:
( expected_key_type, expected_value_type ) = expected_dict_types
for ( dict_key, dict_value ) in value.items():
if not isinstance( dict_key, expected_key_type ):
raise HydrusExceptions.BadRequestException( 'The Object parameter "{}" held a key, "{}" that was {} and not the expected type: {}!'.format( name, dict_key, type( dict_key ), variable_type_to_text_lookup[ expected_key_type ] ) )
if not isinstance( dict_value, expected_value_type ):
raise HydrusExceptions.BadRequestException( 'The Object parameter "{}" held a value, "{}" that was {} and not the expected type: {}!'.format( name, dict_value, type( dict_value ), variable_type_to_text_lookup[ expected_value_type ] ) )
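Editor's note: a short usage sketch for these new shared validators, assuming the definitions above are importable; the parameter names and values are illustrative, not taken from the commit:

```python
# illustrative only: 'file_ids' and 'simple' are made-up parameters for this sketch
parsed_args = { 'file_ids' : [ 1, 2, 3 ] }

# present and correctly typed, so the value comes straight back
file_ids = GetValueFromDict( parsed_args, 'file_ids', list, expected_list_type = int )

# missing but optional, so the default is returned instead of raising
simple = GetValueFromDict( parsed_args, 'simple', bool, default_value = True )

# present but the wrong type raises HydrusExceptions.BadRequestException via TestVariableType
# GetValueFromDict( { 'simple' : 'yes' }, 'simple', bool )
```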
class ParsedRequestArguments( dict ):
def __missing__( self, key ):
@ -422,75 +499,6 @@ class ParsedRequestArguments( dict ):
def GetValue( self, key, expected_type, expected_list_type = None, expected_dict_types = None, default_value = None, none_on_missing = False ):
# not None because in JSON sometimes people put 'null' to mean 'did not enter this optional parameter'
if key in self and self[ key ] is not None:
value = self[ key ]
error_text_lookup = collections.defaultdict( lambda: 'unknown!' )
error_text_lookup[ int ] = 'integer'
error_text_lookup[ str ] = 'string'
error_text_lookup[ bytes ] = 'hex-encoded bytestring'
error_text_lookup[ bool ] = 'boolean'
error_text_lookup[ list ] = 'list'
error_text_lookup[ dict ] = 'object/dict'
if not isinstance( value, expected_type ):
if expected_type in error_text_lookup:
type_error_text = error_text_lookup[ expected_type ]
else:
type_error_text = 'unknown!'
raise HydrusExceptions.BadRequestException( 'The parameter "{}" was not the expected type: {}!'.format( key, type_error_text ) )
if expected_type is list and expected_list_type is not None:
for item in value:
if not isinstance( item, expected_list_type ):
raise HydrusExceptions.BadRequestException( 'The list parameter "{}" held an item, "{}" that was {} and not the expected type: {}!'.format( key, item, type( item ), error_text_lookup[ expected_list_type ] ) )
if expected_type is dict and expected_dict_types is not None:
( expected_key_type, expected_value_type ) = expected_dict_types
for ( dict_key, dict_value ) in value.items():
if not isinstance( dict_key, expected_key_type ):
raise HydrusExceptions.BadRequestException( 'The Object parameter "{}" held a key, "{}" that was {} and not the expected type: {}!'.format( key, dict_key, type( dict_key ), error_text_lookup[ expected_key_type ] ) )
if not isinstance( dict_value, expected_value_type ):
raise HydrusExceptions.BadRequestException( 'The Object parameter "{}" held a value, "{}" that was {} and not the expected type: {}!'.format( key, dict_value, type( dict_value ), error_text_lookup[ expected_value_type ] ) )
return value
else:
if default_value is None and not none_on_missing:
raise HydrusExceptions.BadRequestException( 'The required parameter "{}" was missing!'.format( key ) )
else:
return default_value
return GetValueFromDict( self, key, expected_type, expected_list_type = expected_list_type, expected_dict_types = expected_dict_types, default_value = default_value, none_on_missing = none_on_missing )

View File

@ -452,7 +452,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'Hydrus-Client-API-Access-Key' : 'abcd', 'hash' : hash_hex, 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'Hydrus-Client-API-Access-Key' : 'abcd', 'hash' : hash_hex, 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -464,7 +464,7 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( response.status, 403 )
body_dict = { 'Hydrus-Client-API-Session-Key' : 'abcd', 'hash' : hash_hex, 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'Hydrus-Client-API-Session-Key' : 'abcd', 'hash' : hash_hex, 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -505,7 +505,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'Hydrus-Client-API-Access-Key' : access_key_hex, 'hash' : hash_hex, 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'Hydrus-Client-API-Access-Key' : access_key_hex, 'hash' : hash_hex, 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -521,7 +521,7 @@ class TestClientAPI( unittest.TestCase ):
HG.test_controller.ClearWrites( 'content_updates' )
body_dict = { 'Hydrus-Client-API-Session-Key' : session_key_hex, 'hash' : hash_hex, 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'Hydrus-Client-API-Session-Key' : session_key_hex, 'hash' : hash_hex, 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -728,6 +728,15 @@ class TestClientAPI( unittest.TestCase ):
]
}
get_service_expected_result = {
'service' : {
'name' : 'repository updates',
'service_key' : '7265706f7369746f72792075706461746573',
'type': 20,
'type_pretty': 'local update file domain'
}
}
for api_permissions in should_work.union( should_break ):
access_key_hex = api_permissions.GetAccessKey().hex()
@ -759,6 +768,55 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( response.status, 403 )
#
path = '/get_service?service_name=repository%20updates'
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
if api_permissions in should_work:
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
d = json.loads( text )
self.assertEqual( d, get_service_expected_result )
else:
self.assertEqual( response.status, 403 )
path = '/get_service?service_key={}'.format( CC.LOCAL_UPDATE_SERVICE_KEY.hex() )
connection.request( 'GET', path, headers = headers )
response = connection.getresponse()
data = response.read()
if api_permissions in should_work:
text = str( data, 'utf-8' )
self.assertEqual( response.status, 200 )
d = json.loads( text )
self.assertEqual( d, get_service_expected_result )
else:
self.assertEqual( response.status, 403 )
def _test_add_files_add_file( self, connection, set_up_permissions ):
@ -966,7 +1024,9 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_files/delete_files'
body_dict = { 'hashes' : [ h.hex() for h in hashes ], 'file_service_name' : 'not existing service' }
not_existing_service_hex = os.urandom( 32 ).hex()
body_dict = { 'hashes' : [ h.hex() for h in hashes ], 'file_service_key' : not_existing_service_hex }
body = json.dumps( body_dict )
@ -980,7 +1040,7 @@ class TestClientAPI( unittest.TestCase ):
text = str( data, 'utf-8' )
self.assertIn( 'not existing service', text ) # error message should be complaining about it
self.assertIn( not_existing_service_hex, text ) # error message should be complaining about it
#
@ -1036,7 +1096,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_files/undelete_files'
body_dict = { 'hashes' : [ h.hex() for h in hashes ], 'file_service_name' : 'not existing service' }
body_dict = { 'hashes' : [ h.hex() for h in hashes ], 'file_service_key' : not_existing_service_hex }
body = json.dumps( body_dict )
@ -1050,7 +1110,7 @@ class TestClientAPI( unittest.TestCase ):
text = str( data, 'utf-8' )
self.assertIn( 'not existing service', text ) # error message should be complaining about it
self.assertIn( not_existing_service_hex, text ) # error message should be complaining about it
#
@ -1482,7 +1542,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'service_names_to_tags' : { 'my tags' : [ 'test' ] } }
body_dict = { 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test' ] } }
body = json.dumps( body_dict )
@ -1498,7 +1558,9 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'hash' : hash_hex, 'service_names_to_tags' : { 'bad tag service' : [ 'test' ] } }
not_existing_service_key_hex = os.urandom( 32 ).hex()
body_dict = { 'hash' : hash_hex, 'service_keys_to_tags' : { not_existing_service_key_hex : [ 'test' ] } }
body = json.dumps( body_dict )
@ -1510,13 +1572,17 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( response.status, 400 )
text = str( data, 'utf-8' )
self.assertIn( not_existing_service_key_hex, text ) # test it complains about the key in the error
# add tags to local
HG.test_controller.ClearWrites( 'content_updates' )
path = '/add_tags/add_tags'
body_dict = { 'hash' : hash_hex, 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'hash' : hash_hex, 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -1542,7 +1608,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'hash' : hash_hex, 'service_names_to_actions_to_tags' : { 'my tags' : { str( HC.CONTENT_UPDATE_ADD ) : [ 'test_add', 'test_add2' ], str( HC.CONTENT_UPDATE_DELETE ) : [ 'test_delete', 'test_delete2' ] } } }
body_dict = { 'hash' : hash_hex, 'service_keys_to_actions_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : { str( HC.CONTENT_UPDATE_ADD ) : [ 'test_add', 'test_add2' ], str( HC.CONTENT_UPDATE_DELETE ) : [ 'test_delete', 'test_delete2' ] } } }
body = json.dumps( body_dict )
@ -1573,7 +1639,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'hash' : hash_hex, 'service_names_to_tags' : { 'example tag repo' : [ 'test', 'test2' ] } }
body_dict = { 'hash' : hash_hex, 'service_keys_to_tags' : { HG.test_controller.example_tag_repo_service_key.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -1599,7 +1665,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'hash' : hash_hex, 'service_names_to_actions_to_tags' : { 'example tag repo' : { str( HC.CONTENT_UPDATE_PEND ) : [ 'test_add', 'test_add2' ], str( HC.CONTENT_UPDATE_PETITION ) : [ [ 'test_delete', 'muh reason' ], 'test_delete2' ] } } }
body_dict = { 'hash' : hash_hex, 'service_keys_to_actions_to_tags' : { HG.test_controller.example_tag_repo_service_key.hex() : { str( HC.CONTENT_UPDATE_PEND ) : [ 'test_add', 'test_add2' ], str( HC.CONTENT_UPDATE_PETITION ) : [ [ 'test_delete', 'muh reason' ], 'test_delete2' ] } } }
body = json.dumps( body_dict )
@ -1630,7 +1696,7 @@ class TestClientAPI( unittest.TestCase ):
path = '/add_tags/add_tags'
body_dict = { 'hashes' : [ hash_hex, hash2_hex ], 'service_names_to_tags' : { 'my tags' : [ 'test', 'test2' ] } }
body_dict = { 'hashes' : [ hash_hex, hash2_hex ], 'service_keys_to_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ 'test', 'test2' ] } }
body = json.dumps( body_dict )
@ -2080,7 +2146,7 @@ class TestClientAPI( unittest.TestCase ):
HG.test_controller.ClearWrites( 'import_url_test' )
request_dict = { 'url' : url, 'destination_page_name' : 'muh /tv/', 'show_destination_page' : True, 'filterable_tags' : [ 'filename:yo' ], 'service_names_to_additional_tags' : { 'my tags' : [ '/tv/ thread' ] } }
request_dict = { 'url' : url, 'destination_page_name' : 'muh /tv/', 'show_destination_page' : True, 'filterable_tags' : [ 'filename:yo' ], 'service_keys_to_additional_tags' : { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY.hex() : [ '/tv/ thread' ] } }
request_body = json.dumps( request_dict )
@ -2570,7 +2636,7 @@ class TestClientAPI( unittest.TestCase ):
( location_context, hashes ) = args
self.assertEqual( location_context, default_location_context )
self.assertEqual( hashes, { file_relationships_hash } )
self.assertEqual( set( hashes ), { file_relationships_hash } )
# search files failed tag permission
@ -2850,13 +2916,30 @@ class TestClientAPI( unittest.TestCase ):
path = '/manage_file_relationships/set_file_relationships'
test_pair_rows = [
[ 4, "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2", "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845", False, False, True ],
[ 4, "22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2", "65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423", False, False, True ],
[ 2, "0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec", "5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7", False, False, False ]
]
request_dict = { 'pair_rows' : test_pair_rows }
request_dict = {
"relationships" : [
{
"hash_a" : "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2",
"hash_b" : "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845",
"relationship" : 4,
"do_default_content_merge" : False,
"delete_b" : True
},
{
"hash_a" : "22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2",
"hash_b" : "65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423",
"relationship" : 4,
"do_default_content_merge" : False,
"delete_b" : True
},
{
"hash_a" : "0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec",
"hash_b" : "5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7",
"relationship" : 2,
"do_default_content_merge" : False
}
]
}
request_body = json.dumps( request_dict )
@ -2888,7 +2971,7 @@ class TestClientAPI( unittest.TestCase ):
expected_written_rows = [ ( duplicate_type, bytes.fromhex( hash_a_hex ), bytes.fromhex( hash_b_hex ), delete_thing( hash_b_hex, delete_second ) ) for ( duplicate_type, hash_a_hex, hash_b_hex, merge, delete_first, delete_second ) in test_pair_rows ]
expected_written_rows = [ ( r_dict[ 'relationship' ], bytes.fromhex( r_dict[ 'hash_a' ] ), bytes.fromhex( r_dict[ 'hash_b' ] ), delete_thing( r_dict[ 'hash_b' ], 'delete_b' in r_dict and r_dict[ 'delete_b' ] ) ) for r_dict in request_dict[ 'relationships' ] ]
self.assertEqual( written_rows, expected_written_rows )
@ -3294,11 +3377,11 @@ class TestClientAPI( unittest.TestCase ):
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_name={}'.format(
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}'.format(
urllib.parse.quote( json.dumps( tags ) ),
CC.SORT_FILES_BY_FRAMERATE,
'true',
'trash'
CC.TRASH_SERVICE_KEY.hex()
)
connection.request( 'GET', path, headers = headers )
@ -3340,12 +3423,12 @@ class TestClientAPI( unittest.TestCase ):
tags = [ 'kino', 'green' ]
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_name={}'.format(
path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_key={}'.format(
urllib.parse.quote( json.dumps( tags ) ),
CC.SORT_FILES_BY_FRAMERATE,
'true',
CC.TRASH_SERVICE_KEY.hex(),
'all%20known%20tags'
CC.COMBINED_TAG_SERVICE_KEY.hex()
)
connection.request( 'GET', path, headers = headers )
@ -3449,18 +3532,6 @@ class TestClientAPI( unittest.TestCase ):
self.assertEqual( predicates, [] )
#
pretend_request = PretendRequest()
pretend_request.parsed_request_args = { 'system_inbox' : True }
pretend_request.client_api_permissions = set_up_permissions[ 'search_green_files' ]
with self.assertRaises( HydrusExceptions.InsufficientCredentialsException ):
ClientLocalServerResources.ParseClientAPISearchPredicates( pretend_request )
#
pretend_request = PretendRequest()
@ -3517,38 +3588,6 @@ class TestClientAPI( unittest.TestCase ):
pretend_request = PretendRequest()
pretend_request.parsed_request_args = { 'tags' : [ 'green' ], 'system_inbox' : True }
pretend_request.client_api_permissions = set_up_permissions[ 'search_green_files' ]
predicates = ClientLocalServerResources.ParseClientAPISearchPredicates( pretend_request )
expected_predicates = []
expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_TAG, value = 'green' ) )
expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_INBOX ) )
self.assertEqual( set( predicates ), set( expected_predicates ) )
#
pretend_request = PretendRequest()
pretend_request.parsed_request_args = { 'tags' : [ 'green' ], 'system_archive' : True }
pretend_request.client_api_permissions = set_up_permissions[ 'search_green_files' ]
predicates = ClientLocalServerResources.ParseClientAPISearchPredicates( pretend_request )
expected_predicates = []
expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_TAG, value = 'green' ) )
expected_predicates.append( ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_ARCHIVE ) )
self.assertEqual( set( predicates ), set( expected_predicates ) )
#
pretend_request = PretendRequest()
pretend_request.parsed_request_args = { 'tags' : [ 'green', 'system:archive' ] }
pretend_request.client_api_permissions = set_up_permissions[ 'search_green_files' ]
@ -3676,12 +3715,17 @@ class TestClientAPI( unittest.TestCase ):
headers = { 'Hydrus-Client-API-Access-Key' : access_key_hex }
file_ids_to_hashes = { 1 : bytes.fromhex( 'a' * 64 ), 2 : bytes.fromhex( 'b' * 64 ), 3 : bytes.fromhex( 'c' * 64 ) }
file_ids_to_hashes = { i : os.urandom( 32 ) for i in range( 20 ) }
metadata = []
for ( file_id, hash ) in file_ids_to_hashes.items():
if file_id == 0 or file_id >= 4:
continue
metadata_row = { 'file_id' : file_id, 'hash' : hash.hex() }
metadata.append( metadata_row )
@ -3709,6 +3753,11 @@ class TestClientAPI( unittest.TestCase ):
for ( file_id, hash ) in file_ids_to_hashes.items():
if file_id == 0 or file_id >= 4:
continue
size = random.randint( 8192, 20 * 1048576 )
mime = random.choice( [ HC.IMAGE_JPEG, HC.VIDEO_WEBM, HC.APPLICATION_PDF ] )
width = random.randint( 200, 4096 )
@ -3888,56 +3937,6 @@ class TestClientAPI( unittest.TestCase ):
metadata_row[ 'tags' ] = tags_dict
# old stuff start
api_service_keys_to_statuses_to_tags = {}
service_keys_to_statuses_to_tags = tags_manager.GetServiceKeysToStatusesToTags( ClientTags.TAG_DISPLAY_STORAGE )
for ( service_key, statuses_to_tags ) in service_keys_to_statuses_to_tags.items():
if service_key not in service_keys_to_names:
service_keys_to_names[ service_key ] = services_manager.GetName( service_key )
s = { str( status ) : sorted( tags, key = HydrusTags.ConvertTagToSortable ) for ( status, tags ) in statuses_to_tags.items() if len( tags ) > 0 }
if len( s ) > 0:
service_name = service_keys_to_names[ service_key ]
api_service_keys_to_statuses_to_tags[ service_key.hex() ] = s
metadata_row[ 'service_keys_to_statuses_to_tags' ] = api_service_keys_to_statuses_to_tags
service_keys_to_statuses_to_display_tags = {}
service_keys_to_statuses_to_tags = tags_manager.GetServiceKeysToStatusesToTags( ClientTags.TAG_DISPLAY_ACTUAL )
for ( service_key, statuses_to_tags ) in service_keys_to_statuses_to_tags.items():
if service_key not in service_keys_to_names:
service_keys_to_names[ service_key ] = services_manager.GetName( service_key )
s = { str( status ) : sorted( tags, key = HydrusTags.ConvertTagToSortable ) for ( status, tags ) in statuses_to_tags.items() if len( tags ) > 0 }
if len( s ) > 0:
service_name = service_keys_to_names[ service_key ]
service_keys_to_statuses_to_display_tags[ service_key.hex() ] = s
metadata_row[ 'service_keys_to_statuses_to_display_tags' ] = service_keys_to_statuses_to_display_tags
# old stuff end
metadata.append( metadata_row )
detailed_known_urls_metadata_row = dict( metadata_row )
@ -3971,6 +3970,8 @@ class TestClientAPI( unittest.TestCase ):
# fail on non-permitted files
HG.test_controller.SetRead( 'hash_ids_to_hashes', { k : v for ( k, v ) in file_ids_to_hashes.items() if k in [ 1, 2, 3, 7 ] } )
path = '/get_files/file_metadata?file_ids={}&only_return_identifiers=true'.format( urllib.parse.quote( json.dumps( [ 1, 2, 3, 7 ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -3995,6 +3996,8 @@ class TestClientAPI( unittest.TestCase ):
# identifiers from file_ids
HG.test_controller.SetRead( 'hash_ids_to_hashes', { k : v for ( k, v ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] } )
path = '/get_files/file_metadata?file_ids={}&only_return_identifiers=true'.format( urllib.parse.quote( json.dumps( [ 1, 2, 3 ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4013,6 +4016,8 @@ class TestClientAPI( unittest.TestCase ):
# basic metadata from file_ids
HG.test_controller.SetRead( 'hash_ids_to_hashes', { k : v for ( k, v ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] } )
path = '/get_files/file_metadata?file_ids={}&only_return_basic_information=true'.format( urllib.parse.quote( json.dumps( [ 1, 2, 3 ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4031,6 +4036,8 @@ class TestClientAPI( unittest.TestCase ):
# metadata from file_ids
HG.test_controller.SetRead( 'hash_ids_to_hashes', { k : v for ( k, v ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] } )
path = '/get_files/file_metadata?file_ids={}'.format( urllib.parse.quote( json.dumps( [ 1, 2, 3 ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4084,7 +4091,7 @@ class TestClientAPI( unittest.TestCase ):
# identifiers from hashes
path = '/get_files/file_metadata?hashes={}&only_return_identifiers=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
path = '/get_files/file_metadata?hashes={}&only_return_identifiers=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for ( k, hash ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4102,7 +4109,7 @@ class TestClientAPI( unittest.TestCase ):
# basic metadata from hashes
path = '/get_files/file_metadata?hashes={}&only_return_basic_information=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
path = '/get_files/file_metadata?hashes={}&only_return_basic_information=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for ( k, hash ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4120,7 +4127,7 @@ class TestClientAPI( unittest.TestCase ):
# metadata from hashes
path = '/get_files/file_metadata?hashes={}'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
path = '/get_files/file_metadata?hashes={}'.format( urllib.parse.quote( json.dumps( [ hash.hex() for ( k, hash ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4150,7 +4157,7 @@ class TestClientAPI( unittest.TestCase ):
# metadata from hashes with detailed url info
path = '/get_files/file_metadata?hashes={}&detailed_url_information=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
path = '/get_files/file_metadata?hashes={}&detailed_url_information=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for ( k, hash ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4168,7 +4175,7 @@ class TestClientAPI( unittest.TestCase ):
# metadata from hashes with notes info
path = '/get_files/file_metadata?hashes={}&include_notes=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for hash in file_ids_to_hashes.values() ] ) ) )
path = '/get_files/file_metadata?hashes={}&include_notes=true'.format( urllib.parse.quote( json.dumps( [ hash.hex() for ( k, hash ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] ] ) ) )
connection.request( 'GET', path, headers = headers )
@ -4186,7 +4193,7 @@ class TestClientAPI( unittest.TestCase ):
# failure on missing file_ids
HG.test_controller.SetRead( 'media_results_from_ids', HydrusExceptions.DataMissing( 'test missing' ) )
HG.test_controller.SetRead( 'hash_ids_to_hashes', HydrusExceptions.DataMissing( 'test missing' ) )
api_permissions = set_up_permissions[ 'everything' ]

View File

@ -749,7 +749,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
@ -765,7 +767,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_alt_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
@ -810,7 +814,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
@ -824,7 +830,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 3 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_fp_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 2 )
@ -869,7 +877,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
@ -883,7 +893,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_alt_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
@ -906,7 +918,9 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
expected = self._get_group_potential_count( file_duplicate_types_to_counts )
self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( expected, expected - 1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )

View File

@ -461,9 +461,9 @@ class TestTagObjects( unittest.TestCase ):
self.assertEqual( pat.IsAcceptableForFileSearches(), values[0] )
self.assertEqual( pat.IsAcceptableForTagSearches(), values[1] )
self.assertEqual( pat.IsEmpty(), values[2] )
self.assertEqual( pat.IsExplicitWildcard(), values[3] )
self.assertEqual( pat.IsExplicitWildcard( True ), values[3] )
self.assertEqual( pat.IsNamespaceSearch(), values[4] )
self.assertEqual( pat.IsTagSearch(), values[5] )
self.assertEqual( pat.IsTagSearch( True ), values[5] )
self.assertEqual( pat.inclusive, values[6] )
@ -475,8 +475,8 @@ class TestTagObjects( unittest.TestCase ):
def read_predicate_tests( pat: ClientSearch.ParsedAutocompleteText, values ):
self.assertEqual( pat.GetImmediateFileSearchPredicate(), values[0] )
self.assertEqual( pat.GetNonTagFileSearchPredicates(), values[1] )
self.assertEqual( pat.GetImmediateFileSearchPredicate( True ), values[0] )
self.assertEqual( pat.GetNonTagFileSearchPredicates( True ), values[1] )
def write_predicate_tests( pat: ClientSearch.ParsedAutocompleteText, values ):

View File

@ -27,6 +27,7 @@ nav:
- Advanced:
- advanced_siblings.md
- advanced_parents.md
- advanced_sidecars.md
- advanced_multiple_local_file_services.md
- advanced.md
- reducing_lag.md

(Binary image files changed; previews not shown.)