parent
171177e5b6
commit
8d761c0251
@@ -7,6 +7,34 @@ title: Changelog
!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 513](https://github.com/hydrusnetwork/hydrus/releases/tag/v513)

### client api

* the Client API now supports the duplicates system! this is early stages, and what I've exposed is ugly and technical, but if you want to try out some external dupe processing, give it a go and let me know what you think! (issue #347)
* a new 'manage file relationships' permission gives your api keys access
* the new GET commands are:
    * `/manage_file_relationships/get_file_relationships`, which fetches potential dupes, dupes, alternates, false positives, and dupe kings
    * `/manage_file_relationships/get_potentials_count`, which can take two file searches, a potential dupes search type, a pixel match type, and a max hamming distance, and will give the number of potential pairs in that domain
    * `/manage_file_relationships/get_potential_pairs`, which takes the same params as the count plus a `max_num_pairs` and gives you a batch of pairs to process, just like the dupe filter
    * `/manage_file_relationships/get_random_potentials`, which takes the same params as the count and gives you some hashes, just like the 'show some random potential pairs' button
* the new POST commands are:
    * `/manage_file_relationships/set_file_relationships`, which sets potential/dupe/alternate/false positive relationships between file pairs, with some optional content merge and file deletes
    * `/manage_file_relationships/set_kings`, which sets duplicate group kings
* more commands will be written in the future for various remove/dissolve actions
* wrote unit tests for all the commands!
* wrote help for all the commands!
* fixed an issue in the `/manage_pages/get_pages` call where the response data structure was saying 'focused' instead of 'selected' for 'page of pages'
* client api version is now 40
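For a quick taste of the new endpoints, here is a minimal sketch (not from this commit) of how an external dupe-processing script might build a `get_file_relationships` request. The default port 45869 and the access-key parameter follow standard Client API conventions, but treat the exact parameter names as assumptions and check the Client API help.

```python
# Hypothetical helper, not hydrus code: builds a GET URL for the new
# endpoint. The port and the access-key parameter name follow Client API
# convention; 'hash' (a sha256 hex string) is assumed from the changelog.
from urllib.parse import urlencode

API_BASE = 'http://127.0.0.1:45869'  # default Client API address

def build_get_file_relationships_url( access_key: str, sha256_hex: str ) -> str:
    
    params = urlencode( {
        'Hydrus-Client-API-Access-Key' : access_key,
        'hash' : sha256_hex
    } )
    
    return f'{API_BASE}/manage_file_relationships/get_file_relationships?{params}'
```

An external script would then GET that URL and read the relationships out of the JSON response.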

### boring misc cleanup and refactoring

* cleaned and wrote some more parsing methods for the api to support duplicate search tech and to reduce copypasted parsing code
* renamed the client api permission labels a little, just making it all clearer and lining up better. also, the 'edit client permissions' dialog now sorts the permissions
* reordered and renamed the dev help headers in the same way
* simple but significant rename-refactoring in the file duplicates database module, tearing the old 'Duplicates' prefix off every method ha ha
* updated the advanced Windows 'running from source' help to talk more about VC build tools. some old scripts don't seem to work any more in Win 11, but you also don't really need them any more (I moved to a new dev machine this week, so I had to set everything up again)

## [Version 512](https://github.com/hydrusnetwork/hydrus/releases/tag/v512)

### two searches in duplicates

@@ -451,36 +479,3 @@ title: Changelog
* cleaned up a bunch of related metadata importer/exporter code
* cleaned import folder code
* cleaned hdd importer code

## [Version 503](https://github.com/hydrusnetwork/hydrus/releases/tag/v503)

### misc

* fixed show/hiding the main gui splitters after a regression in v502. also, keyboard focus after these events should now be less jank
* thanks to a user, the Deviant Art parser we rolled back to recently now gets video support. I also added artist tag parsing like the api parser used to do
* if you use the internal client database backup system, it now says in the menu when it was last run. this menu doesn't update often, so I put a bit of buffer in where it says 'did one recently'. let me know if the numbers here are ever confusing
* fixed a bug where the database menu was not immediately updating the first time you set a backup location
* if an apng has sub-millisecond frame durations (seems to be jitter-apngs that were created oddly), these are now each rounded up to 1ms. any apngs that previously appeared to have 0 duration now have a borked-tiny but valid duration and will now import ok
* the client now catches 529 error responses from servers (service is overloaded) and treats them like a 429/509 bandwidth problem, waiting for a bit before retrying. more work may be needed here
* the new popup toaster should restore from minimised better
* fixed a subtle bug where trashing and untrashing a file while searching the special 'all my files' domain would temporarily sort that file at the front/end when sorting by 'import time'
* added 'dateutil present' to _help->about_ and reordered all the entries for readability
* brushed up the network job response-bytes-size counting logic a little more
* cleaned up the EVT_ICONIZE event processing wx/Qt patch
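The 529 handling in the list above boils down to 'treat overload like a bandwidth wait'. A small illustrative sketch of that retry shape (the status codes come from the changelog; the function shape and delay schedule are made up for the example, not hydrus's actual numbers):

```python
import time

# status codes the changelog says get the 'wait and retry' treatment
RETRY_AFTER_WAIT_STATUSES = { 429, 509, 529 }

def fetch_with_wait( do_request, max_attempts = 4, base_delay = 1.0, sleep = time.sleep ):
    
    # do_request is any callable returning ( status_code, body )
    for attempt in range( max_attempts ):
        
        ( status, body ) = do_request()
        
        if status not in RETRY_AFTER_WAIT_STATUSES:
            
            return ( status, body )
            
        
        # back off a little longer before each retry
        sleep( base_delay * ( 2 ** attempt ) )
        
    
    return ( status, body )
```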

### running from source is now easy on Windows

* as I expect to drop Qt5 support in the builds next week, we need an easy way for Windows 7 and other older-OS users to run from source. I am by no means an expert at this, but I have written some easy-setup scripts that can get you running the client in Windows from nothing in a few minutes with no python experience
* the help is updated to reflect this, with more pointers to 'running from source', and that page now has a new guide that takes you through it all in simple steps
* there's a client-user.bat you can edit to add your own launch parameters, and a setup_help.bat to build the help too
* all the requirements.txts across the program have had a full pass. all are now similarly formatted for easy future editing. it is now simple to select whether you want Qt5 or Qt6, and seeing the various differences between the documents is now obvious
* the .gitignore has been updated to not stomp over your venv, mpv/ffmpeg/sqlite, or client-user.bat
* feedback on how this works and how to make it better would be appreciated, and once we are happy with the workflow, I will invite Linux and macOS users to generate equivalent .sh and .command scripts so we are multiplatform-easy

### build stuff

* _this is all wizard nonsense, so you can ignore it. I am mostly just noting it here for my records. tl;dr: I fixed more boot problems, now and in the future_
* just when I was getting on top of the latest boot problems, we had another one last week, caused by yet another external library that updated unusually, this time just a day after the normal release. it struck some users who run from source (such as AUR), and the macOS hotfix I put out on saturday. it turns out PySide6 6.4.0 is not yet supported by qtpy. since these big libraries' bleeding-edge versions are common problems, I have updated all the requirements.txts across the program to set specific versions for qtpy, PySide2/PySide6, opencv-python-headless, requests, python-mpv, and setuptools (issue #1254)
* updated all the requirements.txts with 'python-dateutil', which has spotty default support and whose absence broke some/all of the macOS and Docker deployments last week
* added failsafe code in case python-dateutil is not available
* pylzma is no longer in the main requirements.txt. it doesn't have a wheel (and hence needs compiler tech to pip install), and it is only useful for some weird flash files. UPDATE: with the blessed assistance of stackexchange, I rewrote the 'decompress lzma-compressed flash file' routine to re-munge the flash header into a proper lzma header and use the python default 'lzma' library, so 'pylzma' is no longer needed and has been removed from all requirements.txts
* updated most of the actions in the build script to use updated node16 versions. node12 just started getting deprecation warnings. there is more work to do
* replaced the node12 pip installer action with a manual command on the reworked requirements.txts
* replaced most of the build script's uses of 'set-output', which just started getting deprecation warnings. there is more work to do

File diff suppressed because it is too large
@@ -34,6 +34,33 @@
<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_513"><a href="#version_513">version 513</a></h2>
<ul>
<li><h3>client api</h3></li>
<li>the Client API now supports the duplicates system! this is early stages, and what I've exposed is ugly and technical, but if you want to try out some external dupe processing, give it a go and let me know what you think! (issue #347)</li>
<li>a new 'manage file relationships' permission gives your api keys access</li>
<li>the new GET commands are:</li>
<li>- `/manage_file_relationships/get_file_relationships`, which fetches potential dupes, dupes, alternates, false positives, and dupe kings</li>
<li>- `/manage_file_relationships/get_potentials_count`, which can take two file searches, a potential dupes search type, a pixel match type, and max hamming distance, and will give the number of potential pairs in that domain</li>
<li>- `/manage_file_relationships/get_potential_pairs`, which takes the same params as count and a `max_num_pairs` and gives you a batch of pairs to process, just like the dupe filter</li>
<li>- `/manage_file_relationships/get_random_potentials`, which takes the same params as count and gives you some hashes just like the 'show some random potential pairs' button</li>
<li>the new POST commands are:</li>
<li>- `/manage_file_relationships/set_file_relationships`, which sets potential/dupe/alternate/false positive relationships between file pairs with some optional content merge and file deletes</li>
<li>- `/manage_file_relationships/set_kings`, which sets duplicate group kings</li>
<li>more commands will be written in the future for various remove/dissolve actions</li>
<li>wrote unit tests for all the commands!</li>
<li>wrote help for all the commands!</li>
<li>fixed an issue in the '/manage_pages/get_pages' call where the response data structure was saying 'focused' instead of 'selected' for 'page of pages'</li>
<li>client api version is now 40</li>
<li><h3>boring misc cleanup and refactoring</h3></li>
<li>cleaned and wrote some more parsing methods for the api to support duplicate search tech and reduce copypasted parsing code</li>
<li>renamed the client api permission labels a little, just making it all clearer and line up better. also, the 'edit client permissions' dialog now sorts the permissions</li>
<li>reordered and renamed the dev help headers in the same way</li>
<li>simple but significant rename-refactoring in file duplicates database module, tearing off the old 'Duplicates' prefixes to every method ha ha</li>
<li>updated the advanced Windows 'running from source' help to talk more about VC build tools. some old scripts don't seem to work any more in Win 11, but you also don't really need it any more (I moved to a new dev machine this week so had to set everything up again)</li>
</ul>
</li>
<li>
<h2 id="version_512"><a href="#version_512">version 512</a></h2>
<ul>

@@ -316,27 +316,28 @@ When running from source you may want to [build the hydrus help docs](about_docs

## building packages on windows { id="windows_build" }

Almost everything is provided as pre-compiled 'wheels' these days, but if you get an error about Visual Studio C++ when you try to pip something, it may be you need that compiler tech.
Almost everything you get through pip is provided as pre-compiled 'wheels' these days, but if you get an error about Visual Studio C++ when you try to pip something, you have two choices:

You also need this if you want to build a frozen release locally.

- Get the Visual Studio 14/whatever build tools
- Pick a different library version

Although these tools are free, it can be a pain to get them through the official (and often huge) downloader installer from Microsoft. Instead, install [Chocolatey](https://chocolatey.org/) and use this one simple line:
Option B is always the simpler choice. If the opencv-python-headless version the requirements.txt specifies won't compile in your Python 3.10, then try a newer version--there will probably be one of these new highly compatible wheels, and it'll just work in seconds. Check my build scripts and the various requirements.txts for ideas on what versions to try for your python etc...

If you are confident you need the Visual Studio tools, then prepare for headaches. Although the tools are free from Microsoft, it can be a pain to get them through the official (and often huge) downloader installer. Expect a 5GB+ install with an eye-watering number of checkboxes that probably needs some stackexchange searches to figure out.

On Windows 10, [Chocolatey](https://chocolatey.org/) has been the easy answer. Get it installed and use this one simple line:

```
choco install -y vcbuildtools visualstudio2017buildtools
choco install -y vcbuildtools visualstudio2017buildtools windows-sdk-10.0
```

Trust me, just do this, it will save a ton of headaches!

This can also be helpful for Windows 10 python work generally:

```
choco install -y windows-sdk-10.0
```

_Update:_ On Windows 11, in 2023-01, I had trouble with the above. There are a couple of '11' SDKs that installed ok, but the vcbuildtools stuff had unusual errors. I hadn't done this in years, so maybe they are broken for Windows 10 too! The good news is that a basic stock Win 11 install with Python 3.10 can get everything in our requirements and even make a build without any extra compiler tech.

## additional windows info { id="additional_windows" }

This does not matter much any more, but in the old days, Windows pip could have problems building modules like lz4 and lxml, and Visual Studio was tricky to get working. [This page](http://www.lfd.uci.edu/~gohlke/pythonlibs/) has a lot of prebuilt binaries--I have found it very helpful many times.
This does not matter much any more, but in the old days, building modules like lz4 and lxml was a complete nightmare, and hooking up Visual Studio was even more difficult. [This page](http://www.lfd.uci.edu/~gohlke/pythonlibs/) has a lot of prebuilt binaries--I have found it very helpful many times.

I have a fair bit of experience with Windows python, so send me a mail if you need help.
@@ -344,4 +345,4 @@ I have a fair bit of experience with Windows python, so send me a mail if you ne

My coding style is unusual and unprofessional. Everything is pretty much hacked together. If you are interested in how things work, please do look through the source and ask me if you don't understand something.

I'm constantly throwing new code together and then cleaning and overhauling it down the line. I work strictly alone, however, so while I am very interested in detailed bug reports or suggestions for good libraries to use, I am not looking for pull requests or suggestions on style. I know a lot of things are a mess. Everything I do is [WTFPL](https://github.com/sirkris/WTFPL/blob/master/WTFPL.md), so feel free to fork and play around with things on your end as much as you like.
I'm constantly throwing new code together and then cleaning and overhauling it down the line. I work strictly alone. While I am very interested in detailed bug reports or suggestions for good libraries to use, I am not looking for pull requests or suggestions on style. I know a lot of things are a mess. Everything I do is [WTFPL](https://github.com/sirkris/WTFPL/blob/master/WTFPL.md), so feel free to fork and play around with things on your end as much as you like.

@@ -17,19 +17,21 @@ CLIENT_API_PERMISSION_MANAGE_PAGES = 4
CLIENT_API_PERMISSION_MANAGE_COOKIES = 5
CLIENT_API_PERMISSION_MANAGE_DATABASE = 6
CLIENT_API_PERMISSION_ADD_NOTES = 7
CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS = 8

ALLOWED_PERMISSIONS = ( CLIENT_API_PERMISSION_ADD_FILES, CLIENT_API_PERMISSION_ADD_TAGS, CLIENT_API_PERMISSION_ADD_URLS, CLIENT_API_PERMISSION_SEARCH_FILES, CLIENT_API_PERMISSION_MANAGE_PAGES, CLIENT_API_PERMISSION_MANAGE_COOKIES, CLIENT_API_PERMISSION_MANAGE_DATABASE, CLIENT_API_PERMISSION_ADD_NOTES )
ALLOWED_PERMISSIONS = ( CLIENT_API_PERMISSION_ADD_FILES, CLIENT_API_PERMISSION_ADD_TAGS, CLIENT_API_PERMISSION_ADD_URLS, CLIENT_API_PERMISSION_SEARCH_FILES, CLIENT_API_PERMISSION_MANAGE_PAGES, CLIENT_API_PERMISSION_MANAGE_COOKIES, CLIENT_API_PERMISSION_MANAGE_DATABASE, CLIENT_API_PERMISSION_ADD_NOTES, CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS )

basic_permission_to_str_lookup = {}

basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_URLS ] = 'add urls for processing'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_FILES ] = 'import files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_TAGS ] = 'add tags to files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_SEARCH_FILES ] = 'search for files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_URLS ] = 'import and edit urls'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_FILES ] = 'import and delete files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_TAGS ] = 'edit file tags'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_SEARCH_FILES ] = 'search and fetch files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_MANAGE_PAGES ] = 'manage pages'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_MANAGE_COOKIES ] = 'manage cookies'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_MANAGE_DATABASE ] = 'manage database'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_NOTES ] = 'add notes to files'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_ADD_NOTES ] = 'edit file notes'
basic_permission_to_str_lookup[ CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS ] = 'manage file relationships'
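The changelog mentions the 'edit client permissions' dialog now sorting its permissions, which this lookup makes easy. A small illustrative self-check built from the constants in this diff (the values 0-3 for the first four permissions are assumptions, since only 4-8 appear in the hunk; the labels are the commit's new strings):

```python
# Illustrative sketch, not hydrus code. Values 0-3 are assumed;
# only permissions 4-8 are visible in the diff above.
CLIENT_API_PERMISSION_ADD_URLS = 0
CLIENT_API_PERMISSION_ADD_FILES = 1
CLIENT_API_PERMISSION_ADD_TAGS = 2
CLIENT_API_PERMISSION_SEARCH_FILES = 3
CLIENT_API_PERMISSION_MANAGE_PAGES = 4
CLIENT_API_PERMISSION_MANAGE_COOKIES = 5
CLIENT_API_PERMISSION_MANAGE_DATABASE = 6
CLIENT_API_PERMISSION_ADD_NOTES = 7
CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS = 8

basic_permission_to_str_lookup = {
    CLIENT_API_PERMISSION_ADD_URLS : 'import and edit urls',
    CLIENT_API_PERMISSION_ADD_FILES : 'import and delete files',
    CLIENT_API_PERMISSION_ADD_TAGS : 'edit file tags',
    CLIENT_API_PERMISSION_SEARCH_FILES : 'search and fetch files',
    CLIENT_API_PERMISSION_MANAGE_PAGES : 'manage pages',
    CLIENT_API_PERMISSION_MANAGE_COOKIES : 'manage cookies',
    CLIENT_API_PERMISSION_MANAGE_DATABASE : 'manage database',
    CLIENT_API_PERMISSION_ADD_NOTES : 'edit file notes',
    CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS : 'manage file relationships'
}

ALLOWED_PERMISSIONS = tuple( basic_permission_to_str_lookup.keys() )

# every allowed permission has a human-readable label, and a dialog can
# present them ordered by that label
assert all( p in basic_permission_to_str_lookup for p in ALLOWED_PERMISSIONS )
permissions_sorted_by_label = sorted( ALLOWED_PERMISSIONS, key = lambda p: basic_permission_to_str_lookup[ p ] )
```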

SEARCH_RESULTS_CACHE_TIMEOUT = 4 * 3600

@@ -1862,13 +1862,13 @@ class DB( HydrusDB.HydrusDB ):
chosen_allowed_hash_ids = query_hash_ids_1
comparison_allowed_hash_ids = query_hash_ids_2

table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )

else:

if file_search_context_1.IsJustSystemEverything() or file_search_context_1.HasNoPredicates():

table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )

else:

@@ -1879,14 +1879,14 @@ class DB( HydrusDB.HydrusDB ):
chosen_allowed_hash_ids = query_hash_ids
comparison_allowed_hash_ids = query_hash_ids

table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )

else:

# the master will always be one that matches the search, the comparison can be whatever
chosen_allowed_hash_ids = query_hash_ids

table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )

@@ -1915,7 +1915,7 @@ class DB( HydrusDB.HydrusDB ):
for potential_media_id in potential_media_ids:

best_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( potential_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
best_king_hash_id = self.modules_files_duplicates.GetBestKingId( potential_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )

if best_king_hash_id is not None:

@@ -1931,7 +1931,7 @@ class DB( HydrusDB.HydrusDB ):
return []

# I used to do self.modules_files_duplicates.DuplicatesGetFileHashesByDuplicateType here, but that gets _all_ potentials in the db context, even with allowed_hash_ids doing work it won't capture pixel hashes or duplicate distance that we searched above
# I used to do self.modules_files_duplicates.GetFileHashesByDuplicateType here, but that gets _all_ potentials in the db context, even with allowed_hash_ids doing work it won't capture pixel hashes or duplicate distance that we searched above
# so, let's search and make the list manually!

comparison_hash_ids = []

@@ -1950,7 +1950,7 @@ class DB( HydrusDB.HydrusDB ):
potential_media_id = smaller_media_id

best_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( potential_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )
best_king_hash_id = self.modules_files_duplicates.GetBestKingId( potential_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )

if best_king_hash_id is not None:

@@ -1965,8 +1965,15 @@ class DB( HydrusDB.HydrusDB ):
return self.modules_hashes_local_cache.GetHashes( results_hash_ids )

def _DuplicatesGetPotentialDuplicatePairsForFiltering( self, file_search_context_1: ClientSearch.FileSearchContext, file_search_context_2: ClientSearch.FileSearchContext, dupe_search_type: int, pixel_dupes_preference, max_hamming_distance ):
def _DuplicatesGetPotentialDuplicatePairsForFiltering( self, file_search_context_1: ClientSearch.FileSearchContext, file_search_context_2: ClientSearch.FileSearchContext, dupe_search_type: int, pixel_dupes_preference, max_hamming_distance, max_num_pairs: typing.Optional[ int ] = None ):

if max_num_pairs is None:

max_num_pairs = HG.client_controller.new_options.GetInteger( 'duplicate_filter_max_batch_size' )

# we need to batch non-intersecting decisions here to keep it simple at the gui-level
# we also want to maximise per-decision value
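The two comments above describe the batching constraint: each batch of pairs must be non-intersecting so decisions at the gui level stay independent. An illustrative sketch of that idea, not hydrus's actual implementation, using the same variable names as the diff:

```python
# Greedily take potential pairs, skipping any pair that shares a media id
# with one already in the batch, and stop once the batch is full.
def batch_non_intersecting_pairs( candidate_pairs, max_num_pairs ):
    
    batch_of_pairs_of_media_ids = []
    seen_media_ids = set()
    
    for ( smaller_media_id, larger_media_id ) in candidate_pairs:
        
        if smaller_media_id in seen_media_ids or larger_media_id in seen_media_ids:
            
            continue
            
        
        batch_of_pairs_of_media_ids.append( ( smaller_media_id, larger_media_id ) )
        seen_media_ids.update( ( smaller_media_id, larger_media_id ) )
        
        if len( batch_of_pairs_of_media_ids ) >= max_num_pairs:
            
            break
            
        
    
    return batch_of_pairs_of_media_ids
```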
|
||||
|
@ -1993,13 +2000,13 @@ class DB( HydrusDB.HydrusDB ):
|
|||
chosen_allowed_hash_ids = query_hash_ids_1
|
||||
comparison_allowed_hash_ids = query_hash_ids_2
|
||||
|
||||
table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )
|
||||
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )
|
||||
|
||||
else:
|
||||
|
||||
if file_search_context_1.IsJustSystemEverything() or file_search_context_1.HasNoPredicates():
|
||||
|
||||
table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )
|
||||
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )
|
||||
|
||||
else:
|
||||
|
||||
|
@ -2011,14 +2018,14 @@ class DB( HydrusDB.HydrusDB ):
|
|||
chosen_allowed_hash_ids = query_hash_ids
|
||||
comparison_allowed_hash_ids = query_hash_ids
|
||||
|
||||
table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
|
||||
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
|
||||
|
||||
else:
|
||||
|
||||
# the chosen must be in the search, but we don't care about the comparison as long as it is viewable
|
||||
chosen_preferred_hash_ids = query_hash_ids
|
||||
|
||||
table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
|
||||
table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
|
||||
|
||||
|
||||
|
||||
|
@ -2028,8 +2035,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
|
||||
|
||||
MAX_BATCH_SIZE = HG.client_controller.new_options.GetInteger( 'duplicate_filter_max_batch_size' )
|
||||
|
||||
batch_of_pairs_of_media_ids = []
|
||||
seen_media_ids = set()
|
||||
|
||||
|
@ -2089,7 +2094,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
batch_of_pairs_of_media_ids.append( pair )
|
||||
|
||||
if len( batch_of_pairs_of_media_ids ) >= MAX_BATCH_SIZE:
|
||||
if len( batch_of_pairs_of_media_ids ) >= max_num_pairs:
|
||||
|
||||
break
|
||||
|
||||
|
@ -2097,13 +2102,13 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
seen_media_ids.update( seen_media_ids_for_this_master_media_id )
|
||||
|
||||
if len( batch_of_pairs_of_media_ids ) >= MAX_BATCH_SIZE:
|
||||
if len( batch_of_pairs_of_media_ids ) >= max_num_pairs:
|
||||
|
||||
break
|
||||
|
||||
|
||||
|
||||
if len( batch_of_pairs_of_media_ids ) >= MAX_BATCH_SIZE:
|
||||
if len( batch_of_pairs_of_media_ids ) >= max_num_pairs:
|
||||
|
||||
break
|
||||
|
||||
|
@ -2119,8 +2124,8 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
for ( smaller_media_id, larger_media_id ) in batch_of_pairs_of_media_ids:
|
||||
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_larger_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.GetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_larger_king_hash_id = self.modules_files_duplicates.GetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
|
||||
if best_smaller_king_hash_id is not None and best_larger_king_hash_id is not None:
|
||||
|
||||
|
@ -2137,15 +2142,15 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
for ( smaller_media_id, larger_media_id ) in batch_of_pairs_of_media_ids:
|
||||
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_larger_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.GetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_larger_king_hash_id = self.modules_files_duplicates.GetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )
|
||||
|
||||
if best_smaller_king_hash_id is None or best_larger_king_hash_id is None:
|
||||
|
||||
# ok smaller was probably the comparison, let's see if that produces a better king hash
|
||||
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )
|
||||
best_larger_king_hash_id = self.modules_files_duplicates.DuplicatesGetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )
|
||||
best_smaller_king_hash_id = self.modules_files_duplicates.GetBestKingId( smaller_media_id, db_location_context, allowed_hash_ids = comparison_allowed_hash_ids, preferred_hash_ids = comparison_preferred_hash_ids )
best_larger_king_hash_id = self.modules_files_duplicates.GetBestKingId( larger_media_id, db_location_context, allowed_hash_ids = chosen_allowed_hash_ids, preferred_hash_ids = chosen_preferred_hash_ids )

if best_smaller_king_hash_id is not None and best_larger_king_hash_id is not None:

@@ -2179,13 +2184,13 @@ class DB( HydrusDB.HydrusDB ):

self._PopulateSearchIntoTempTable( file_search_context_1, temp_table_name_1 )
self._PopulateSearchIntoTempTable( file_search_context_2, temp_table_name_2 )

- table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )
+ table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSeparateSearchResults( temp_table_name_1, temp_table_name_2, pixel_dupes_preference, max_hamming_distance )

else:

if file_search_context_1.IsJustSystemEverything() or file_search_context_1.HasNoPredicates():

- table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )
+ table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( db_location_context, pixel_dupes_preference, max_hamming_distance )

else:

@@ -2193,11 +2198,11 @@ class DB( HydrusDB.HydrusDB ):

if dupe_search_type == CC.DUPE_SEARCH_BOTH_FILES_MATCH_ONE_SEARCH:

- table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
+ table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResultsBothFiles( temp_table_name_1, pixel_dupes_preference, max_hamming_distance )

else:

- table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )
+ table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, temp_table_name_1, pixel_dupes_preference, max_hamming_distance )

@@ -2227,8 +2232,8 @@ class DB( HydrusDB.HydrusDB ):

hash_id_a = self.modules_hashes_local_cache.GetHashId( hash_a )
hash_id_b = self.modules_hashes_local_cache.GetHashId( hash_b )

- media_id_a = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_a )
- media_id_b = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_b )
+ media_id_a = self.modules_files_duplicates.GetMediaId( hash_id_a )
+ media_id_b = self.modules_files_duplicates.GetMediaId( hash_id_b )

smaller_media_id = min( media_id_a, media_id_b )
larger_media_id = max( media_id_a, media_id_b )
@@ -2246,29 +2251,29 @@ class DB( HydrusDB.HydrusDB ):

if duplicate_type == HC.DUPLICATE_FALSE_POSITIVE:

- alternates_group_id_a = self.modules_files_duplicates.DuplicatesGetAlternatesGroupId( media_id_a )
- alternates_group_id_b = self.modules_files_duplicates.DuplicatesGetAlternatesGroupId( media_id_b )
+ alternates_group_id_a = self.modules_files_duplicates.GetAlternatesGroupId( media_id_a )
+ alternates_group_id_b = self.modules_files_duplicates.GetAlternatesGroupId( media_id_b )

- self.modules_files_duplicates.DuplicatesSetFalsePositive( alternates_group_id_a, alternates_group_id_b )
+ self.modules_files_duplicates.SetFalsePositive( alternates_group_id_a, alternates_group_id_b )

elif duplicate_type == HC.DUPLICATE_ALTERNATE:

if media_id_a == media_id_b:

- king_hash_id = self.modules_files_duplicates.DuplicatesGetKingHashId( media_id_a )
+ king_hash_id = self.modules_files_duplicates.GetKingHashId( media_id_a )

hash_id_to_remove = hash_id_b if king_hash_id == hash_id_a else hash_id_a

- self.modules_files_duplicates.DuplicatesRemoveMediaIdMember( hash_id_to_remove )
+ self.modules_files_duplicates.RemoveMediaIdMember( hash_id_to_remove )

- media_id_a = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_a )
- media_id_b = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_b )
+ media_id_a = self.modules_files_duplicates.GetMediaId( hash_id_a )
+ media_id_b = self.modules_files_duplicates.GetMediaId( hash_id_b )

smaller_media_id = min( media_id_a, media_id_b )
larger_media_id = max( media_id_a, media_id_b )

- self.modules_files_duplicates.DuplicatesSetAlternates( media_id_a, media_id_b )
+ self.modules_files_duplicates.SetAlternates( media_id_a, media_id_b )

elif duplicate_type in ( HC.DUPLICATE_BETTER, HC.DUPLICATE_WORSE, HC.DUPLICATE_SAME_QUALITY ):

@@ -2281,8 +2286,8 @@ class DB( HydrusDB.HydrusDB ):

duplicate_type = HC.DUPLICATE_BETTER

- king_hash_id_a = self.modules_files_duplicates.DuplicatesGetKingHashId( media_id_a )
- king_hash_id_b = self.modules_files_duplicates.DuplicatesGetKingHashId( media_id_b )
+ king_hash_id_a = self.modules_files_duplicates.GetKingHashId( media_id_a )
+ king_hash_id_b = self.modules_files_duplicates.GetKingHashId( media_id_b )

if duplicate_type == HC.DUPLICATE_BETTER:

@@ -2292,7 +2297,7 @@ class DB( HydrusDB.HydrusDB ):

# user manually set that a > King A, hence we are setting a new king within a group

- self.modules_files_duplicates.DuplicatesSetKing( hash_id_a, media_id_a )
+ self.modules_files_duplicates.SetKing( hash_id_a, media_id_a )

else:

@@ -2301,16 +2306,16 @@ class DB( HydrusDB.HydrusDB ):

# user manually set that a member of A is better than a non-King of B. remove b from B and merge it into A

- self.modules_files_duplicates.DuplicatesRemoveMediaIdMember( hash_id_b )
+ self.modules_files_duplicates.RemoveMediaIdMember( hash_id_b )

- media_id_b = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_b )
+ media_id_b = self.modules_files_duplicates.GetMediaId( hash_id_b )

# b is now the King of its new group

# a member of A is better than King B, hence B can merge into A

- self.modules_files_duplicates.DuplicatesMergeMedias( media_id_a, media_id_b )
+ self.modules_files_duplicates.MergeMedias( media_id_a, media_id_b )

elif duplicate_type == HC.DUPLICATE_SAME_QUALITY:

@@ -2324,9 +2329,9 @@ class DB( HydrusDB.HydrusDB ):

# if neither file is the king, remove B from B and merge it into A

- self.modules_files_duplicates.DuplicatesRemoveMediaIdMember( hash_id_b )
+ self.modules_files_duplicates.RemoveMediaIdMember( hash_id_b )

- media_id_b = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id_b )
+ media_id_b = self.modules_files_duplicates.GetMediaId( hash_id_b )

superior_media_id = media_id_a
mergee_media_id = media_id_b

@@ -2351,7 +2356,7 @@ class DB( HydrusDB.HydrusDB ):

mergee_media_id = media_id_b

- self.modules_files_duplicates.DuplicatesMergeMedias( superior_media_id, mergee_media_id )
+ self.modules_files_duplicates.MergeMedias( superior_media_id, mergee_media_id )

@@ -2359,7 +2364,7 @@ class DB( HydrusDB.HydrusDB ):

potential_duplicate_media_ids_and_distances = [ ( media_id_b, 0 ) ]

- self.modules_files_duplicates.DuplicatesAddPotentialDuplicates( media_id_a, potential_duplicate_media_ids_and_distances )
+ self.modules_files_duplicates.AddPotentialDuplicates( media_id_a, potential_duplicate_media_ids_and_distances )
@@ -2569,7 +2574,7 @@ class DB( HydrusDB.HydrusDB ):

db_location_context = self.modules_files_storage.GetDBLocationContext( location_context )

- table_join = self.modules_files_duplicates.DuplicatesGetPotentialDuplicatePairsTableJoinOnFileService( db_location_context )
+ table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnFileService( db_location_context )

( total_potential_pairs, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} );'.format( table_join ) ).fetchone()

@@ -3667,7 +3672,7 @@ class DB( HydrusDB.HydrusDB ):

else:

- dupe_hash_ids = self.modules_files_duplicates.DuplicatesGetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )
+ dupe_hash_ids = self.modules_files_duplicates.GetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )

query_hash_ids = intersection_update_qhi( query_hash_ids, dupe_hash_ids )

@@ -3972,7 +3977,7 @@ class DB( HydrusDB.HydrusDB ):

if king_filter is not None and king_filter:

- king_hash_ids = self.modules_files_duplicates.DuplicatesFilterKingHashIds( query_hash_ids )
+ king_hash_ids = self.modules_files_duplicates.FilterKingHashIds( query_hash_ids )

query_hash_ids = intersection_update_qhi( query_hash_ids, king_hash_ids )

@@ -4118,7 +4123,7 @@ class DB( HydrusDB.HydrusDB ):

if king_filter is not None and not king_filter:

- king_hash_ids = self.modules_files_duplicates.DuplicatesFilterKingHashIds( query_hash_ids )
+ king_hash_ids = self.modules_files_duplicates.FilterKingHashIds( query_hash_ids )

query_hash_ids.difference_update( king_hash_ids )

@@ -4130,17 +4135,17 @@ class DB( HydrusDB.HydrusDB ):

if only_do_zero:

- nonzero_hash_ids = self.modules_files_duplicates.DuplicatesGetHashIdsFromDuplicateCountPredicate( db_location_context, '>', 0, dupe_type )
+ nonzero_hash_ids = self.modules_files_duplicates.GetHashIdsFromDuplicateCountPredicate( db_location_context, '>', 0, dupe_type )

query_hash_ids.difference_update( nonzero_hash_ids )

elif include_zero:

- nonzero_hash_ids = self.modules_files_duplicates.DuplicatesGetHashIdsFromDuplicateCountPredicate( db_location_context, '>', 0, dupe_type )
+ nonzero_hash_ids = self.modules_files_duplicates.GetHashIdsFromDuplicateCountPredicate( db_location_context, '>', 0, dupe_type )

zero_hash_ids = query_hash_ids.difference( nonzero_hash_ids )

- accurate_except_zero_hash_ids = self.modules_files_duplicates.DuplicatesGetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )
+ accurate_except_zero_hash_ids = self.modules_files_duplicates.GetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )

hash_ids = zero_hash_ids.union( accurate_except_zero_hash_ids )
@@ -6196,11 +6201,11 @@ class DB( HydrusDB.HydrusDB ):

return ( still_work_to_do, num_done )

- media_id = self.modules_files_duplicates.DuplicatesGetMediaId( hash_id )
+ media_id = self.modules_files_duplicates.GetMediaId( hash_id )

- potential_duplicate_media_ids_and_distances = [ ( self.modules_files_duplicates.DuplicatesGetMediaId( duplicate_hash_id ), distance ) for ( duplicate_hash_id, distance ) in self.modules_similar_files.Search( hash_id, search_distance ) if duplicate_hash_id != hash_id ]
+ potential_duplicate_media_ids_and_distances = [ ( self.modules_files_duplicates.GetMediaId( duplicate_hash_id ), distance ) for ( duplicate_hash_id, distance ) in self.modules_similar_files.Search( hash_id, search_distance ) if duplicate_hash_id != hash_id ]

- self.modules_files_duplicates.DuplicatesAddPotentialDuplicates( media_id, potential_duplicate_media_ids_and_distances )
+ self.modules_files_duplicates.AddPotentialDuplicates( media_id, potential_duplicate_media_ids_and_distances )

self._Execute( 'UPDATE shape_search_cache SET searched_distance = ? WHERE hash_id = ?;', ( search_distance, hash_id ) )

@@ -7320,8 +7325,8 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'client_files_locations': result = self.modules_files_physical_storage.GetClientFilesLocations( *args, **kwargs )
elif action == 'deferred_physical_delete': result = self.modules_files_storage.GetDeferredPhysicalDelete( *args, **kwargs )
elif action == 'duplicate_pairs_for_filtering': result = self._DuplicatesGetPotentialDuplicatePairsForFiltering( *args, **kwargs )
- elif action == 'file_duplicate_hashes': result = self.modules_files_duplicates.DuplicatesGetFileHashesByDuplicateType( *args, **kwargs )
- elif action == 'file_duplicate_info': result = self.modules_files_duplicates.DuplicatesGetFileDuplicateInfo( *args, **kwargs )
+ elif action == 'file_duplicate_hashes': result = self.modules_files_duplicates.GetFileHashesByDuplicateType( *args, **kwargs )
+ elif action == 'file_duplicate_info': result = self.modules_files_duplicates.GetFileDuplicateInfo( *args, **kwargs )
elif action == 'file_hashes': result = self.modules_hashes.GetFileHashes( *args, **kwargs )
elif action == 'file_history': result = self.modules_files_metadata_rich.GetFileHistory( *args, **kwargs )
elif action == 'file_info_managers': result = self._GetFileInfoManagersFromHashes( *args, **kwargs )

@@ -7329,6 +7334,7 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'file_maintenance_get_job': result = self.modules_files_maintenance_queue.GetJob( *args, **kwargs )
elif action == 'file_maintenance_get_job_counts': result = self.modules_files_maintenance_queue.GetJobCounts( *args, **kwargs )
elif action == 'file_query_ids': result = self._GetHashIdsFromQuery( *args, **kwargs )
+ elif action == 'file_relationships_for_api': result = self.modules_files_duplicates.GetFileRelationshipsForAPI( *args, **kwargs )
elif action == 'file_system_predicates': result = self._GetFileSystemPredicates( *args, **kwargs )
elif action == 'filter_existing_tags': result = self.modules_mappings_counts_update.FilterExistingTags( *args, **kwargs )
elif action == 'filter_hashes': result = self.modules_files_metadata_rich.FilterHashesByService( *args, **kwargs )
@@ -9776,7 +9782,7 @@ class DB( HydrusDB.HydrusDB ):

num_relationships = 0
dupe_type = HC.DUPLICATE_POTENTIAL

- dupe_hash_ids = self.modules_files_duplicates.DuplicatesGetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )
+ dupe_hash_ids = self.modules_files_duplicates.GetHashIdsFromDuplicateCountPredicate( db_location_context, operator, num_relationships, dupe_type )

with self._MakeTemporaryIntegerTable( dupe_hash_ids, 'hash_id' ) as temp_hash_ids_table_name:

@@ -11234,8 +11240,8 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'associate_repository_update_hashes': self.modules_repositories.AssociateRepositoryUpdateHashes( *args, **kwargs )
elif action == 'backup': self._Backup( *args, **kwargs )
elif action == 'clear_deferred_physical_delete': self.modules_files_storage.ClearDeferredPhysicalDelete( *args, **kwargs )
- elif action == 'clear_false_positive_relations': self.modules_files_duplicates.DuplicatesClearAllFalsePositiveRelationsFromHashes( *args, **kwargs )
- elif action == 'clear_false_positive_relations_between_groups': self.modules_files_duplicates.DuplicatesClearFalsePositiveRelationsBetweenGroupsFromHashes( *args, **kwargs )
+ elif action == 'clear_false_positive_relations': self.modules_files_duplicates.ClearAllFalsePositiveRelationsFromHashes( *args, **kwargs )
+ elif action == 'clear_false_positive_relations_between_groups': self.modules_files_duplicates.ClearFalsePositiveRelationsBetweenGroupsFromHashes( *args, **kwargs )
elif action == 'clear_orphan_file_records': self._ClearOrphanFileRecords( *args, **kwargs )
elif action == 'clear_orphan_tables': self._ClearOrphanTables( *args, **kwargs )
elif action == 'content_updates': self._ProcessContentUpdates( *args, **kwargs )

@@ -11246,12 +11252,12 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'delete_pending': self._DeletePending( *args, **kwargs )
elif action == 'delete_serialisable_named': self.modules_serialisable.DeleteJSONDumpNamed( *args, **kwargs )
elif action == 'delete_service_info': self._DeleteServiceInfo( *args, **kwargs )
- elif action == 'delete_potential_duplicate_pairs': self.modules_files_duplicates.DuplicatesDeleteAllPotentialDuplicatePairs( *args, **kwargs )
+ elif action == 'delete_potential_duplicate_pairs': self.modules_files_duplicates.DeleteAllPotentialDuplicatePairs( *args, **kwargs )
elif action == 'dirty_services': self._SaveDirtyServices( *args, **kwargs )
- elif action == 'dissolve_alternates_group': self.modules_files_duplicates.DuplicatesDissolveAlternatesGroupIdFromHashes( *args, **kwargs )
- elif action == 'dissolve_duplicates_group': self.modules_files_duplicates.DuplicatesDissolveMediaIdFromHashes( *args, **kwargs )
+ elif action == 'dissolve_alternates_group': self.modules_files_duplicates.DissolveAlternatesGroupIdFromHashes( *args, **kwargs )
+ elif action == 'dissolve_duplicates_group': self.modules_files_duplicates.DissolveMediaIdFromHashes( *args, **kwargs )
elif action == 'duplicate_pair_status': self._DuplicatesSetDuplicatePairStatus( *args, **kwargs )
- elif action == 'duplicate_set_king': self.modules_files_duplicates.DuplicatesSetKingFromHash( *args, **kwargs )
+ elif action == 'duplicate_set_king': self.modules_files_duplicates.SetKingFromHash( *args, **kwargs )
elif action == 'file_maintenance_add_jobs': self.modules_files_maintenance_queue.AddJobs( *args, **kwargs )
elif action == 'file_maintenance_add_jobs_hashes': self.modules_files_maintenance_queue.AddJobsHashes( *args, **kwargs )
elif action == 'file_maintenance_cancel_jobs': self.modules_files_maintenance_queue.CancelJobs( *args, **kwargs )

@@ -11286,9 +11292,9 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'repopulate_tag_cache_missing_subtags': self._RepopulateTagCacheMissingSubtags( *args, **kwargs )
elif action == 'repopulate_tag_display_mappings_cache': self._RepopulateTagDisplayMappingsCache( *args, **kwargs )
elif action == 'relocate_client_files': self.modules_files_physical_storage.RelocateClientFiles( *args, **kwargs )
- elif action == 'remove_alternates_member': self.modules_files_duplicates.DuplicatesRemoveAlternateMemberFromHashes( *args, **kwargs )
- elif action == 'remove_duplicates_member': self.modules_files_duplicates.DuplicatesRemoveMediaIdMemberFromHashes( *args, **kwargs )
- elif action == 'remove_potential_pairs': self.modules_files_duplicates.DuplicatesRemovePotentialPairsFromHashes( *args, **kwargs )
+ elif action == 'remove_alternates_member': self.modules_files_duplicates.RemoveAlternateMemberFromHashes( *args, **kwargs )
+ elif action == 'remove_duplicates_member': self.modules_files_duplicates.RemoveMediaIdMemberFromHashes( *args, **kwargs )
+ elif action == 'remove_potential_pairs': self.modules_files_duplicates.RemovePotentialPairsFromHashes( *args, **kwargs )
elif action == 'repair_client_files': self.modules_files_physical_storage.RepairClientFiles( *args, **kwargs )
elif action == 'repair_invalid_tags': self._RepairInvalidTags( *args, **kwargs )
elif action == 'reprocess_repository': self.modules_repositories.ReprocessRepository( *args, **kwargs )
File diff suppressed because it is too large
@@ -86,6 +86,8 @@ class EditAPIPermissionsPanel( ClientGUIScrolledPanels.EditPanel ):

self._basic_permissions.Append( ClientAPI.basic_permission_to_str_lookup[ permission ], permission )

+ self._basic_permissions.sortItems()

search_tag_filter = api_permissions.GetSearchTagFilter()

message = 'The API will only permit searching for tags that pass through this filter.'

@@ -114,7 +116,7 @@ class EditAPIPermissionsPanel( ClientGUIScrolledPanels.EditPanel ):

rows.append( ( 'access key: ', self._access_key ) )
rows.append( ( 'name: ', self._name ) )
- rows.append( ( 'permissions: ', self._basic_permissions) )
+ rows.append( ( 'permissions: ', self._basic_permissions ) )
rows.append( ( 'tag search permissions: ', self._search_tag_filter ) )

gridbox = ClientGUICommon.WrapInGrid( self, rows )
@@ -820,7 +820,7 @@ class Page( QW.QWidget ):

root[ 'name' ] = self.GetName()
root[ 'page_key' ] = self._page_key.hex()
root[ 'page_type' ] = self._management_controller.GetType()
- root[ 'focused' ] = is_selected
+ root[ 'selected' ] = is_selected

return root
@@ -99,6 +99,25 @@ class HydrusServiceClientAPI( HydrusClientService ):

manage_cookies.putChild( b'get_cookies', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageCookiesGetCookies( self._service, self._client_requests_domain ) )
manage_cookies.putChild( b'set_cookies', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageCookiesSetCookies( self._service, self._client_requests_domain ) )

+ manage_database = NoResource()
+
+ root.putChild( b'manage_database', manage_database )
+
+ manage_database.putChild( b'mr_bones', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseMrBones( self._service, self._client_requests_domain ) )
+ manage_database.putChild( b'lock_on', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseLockOn( self._service, self._client_requests_domain ) )
+ manage_database.putChild( b'lock_off', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseLockOff( self._service, self._client_requests_domain ) )
+
+ manage_file_relationships = NoResource()
+
+ root.putChild( b'manage_file_relationships', manage_file_relationships )
+
+ manage_file_relationships.putChild( b'get_file_relationships', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsGetRelationships( self._service, self._client_requests_domain ) )
+ manage_file_relationships.putChild( b'get_potentials_count', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsGetPotentialsCount( self._service, self._client_requests_domain ) )
+ manage_file_relationships.putChild( b'get_potential_pairs', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsGetPotentialPairs( self._service, self._client_requests_domain ) )
+ manage_file_relationships.putChild( b'get_random_potentials', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsGetRandomPotentials( self._service, self._client_requests_domain ) )
+ manage_file_relationships.putChild( b'set_file_relationships', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsSetRelationships( self._service, self._client_requests_domain ) )
+ manage_file_relationships.putChild( b'set_kings', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageFileRelationshipsSetKings( self._service, self._client_requests_domain ) )

manage_headers = NoResource()

root.putChild( b'manage_headers', manage_headers )

@@ -115,14 +134,6 @@ class HydrusServiceClientAPI( HydrusClientService ):

manage_pages.putChild( b'get_page_info', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManagePagesGetPageInfo( self._service, self._client_requests_domain ) )
manage_pages.putChild( b'refresh_page', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManagePagesRefreshPage( self._service, self._client_requests_domain ) )

- manage_database = NoResource()
-
- root.putChild( b'manage_database', manage_database )
-
- manage_database.putChild( b'mr_bones', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseMrBones( self._service, self._client_requests_domain ) )
- manage_database.putChild( b'lock_on', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseLockOn( self._service, self._client_requests_domain ) )
- manage_database.putChild( b'lock_off', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManageDatabaseLockOff( self._service, self._client_requests_domain ) )

return root
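As a rough illustration of how an external tool might hit the new `manage_file_relationships` endpoints registered above, here is a minimal sketch that assembles the query string for a `GET /manage_file_relationships/get_potentials_count` call. The parameter names come from the surrounding diff; the localhost port and the placeholder access key are assumptions for illustration only (a real key comes from the client's API settings).

```python
import json
import urllib.parse

# hypothetical access key, for illustration only
access_key = '0' * 64

params = {
    'Hydrus-Client-API-Access-Key': access_key,
    # tag lists are sent JSON-encoded, matching the 'tags_1'/'tags_2' params in this diff
    'tags_1': json.dumps( [ 'system:everything' ] ),
    'potentials_search_type': 0,
    'pixel_duplicates': 1,
    'max_hamming_distance': 4
}

# assumed default Client API address
url = 'http://127.0.0.1:45869/manage_file_relationships/get_potentials_count?' + urllib.parse.urlencode( params )
```

The resulting `url` could then be fetched with any HTTP client; the response is JSON, per the rest of the Client API.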
@@ -55,10 +55,12 @@ LOCAL_BOORU_STRING_PARAMS = set()

LOCAL_BOORU_JSON_PARAMS = set()
LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set()

- CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type' }
- CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'file_service_key' }
+ # if a variable name isn't defined here, a GET with it won't work
+
+ CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type', 'potentials_search_type', 'pixel_duplicates', 'max_hamming_distance', 'max_num_pairs' }
+ CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'tag_service_key_1', 'tag_service_key_2', 'file_service_key' }
CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain', 'search', 'file_service_name', 'tag_service_name', 'reason', 'tag_display_type', 'source_hash_type', 'desired_hash_type' }
- CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'only_return_basic_information', 'create_new_file_ids', 'detailed_url_information', 'hide_service_names_tags', 'hide_service_keys_tags', 'simple', 'file_sort_asc', 'return_hashes', 'return_file_ids', 'include_notes', 'notes', 'note_names', 'doublecheck_file_system' }
+ CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'tags_1', 'tags_2', 'file_ids', 'only_return_identifiers', 'only_return_basic_information', 'create_new_file_ids', 'detailed_url_information', 'hide_service_names_tags', 'hide_service_keys_tags', 'simple', 'file_sort_asc', 'return_hashes', 'return_file_ids', 'include_notes', 'notes', 'note_names', 'doublecheck_file_system' }
CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' }
CLIENT_API_JSON_BYTE_DICT_PARAMS = { 'service_keys_to_tags', 'service_keys_to_actions_to_tags', 'service_keys_to_additional_tags' }
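The comment in the hunk above ("if a variable name isn't defined here, a GET with it won't work") describes a whitelist-driven parameter parser. A minimal sketch of that pattern, with a hypothetical `parse_int_params` helper that is not part of hydrus itself, only an illustration of the technique:

```python
# the int-param whitelist, copied from the new version in the diff above
CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type', 'potentials_search_type', 'pixel_duplicates', 'max_hamming_distance', 'max_num_pairs' }

def parse_int_params( raw_query_args: dict ) -> dict:
    # only whitelisted names are converted to int; anything else is dropped,
    # which is why a GET with an unregistered variable name has no effect
    return {
        name: int( value )
        for ( name, value ) in raw_query_args.items()
        if name in CLIENT_API_INT_PARAMS
    }
```

For example, `parse_int_params( { 'max_num_pairs': '50', 'unknown_param': '1' } )` keeps only the registered `max_num_pairs`.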
@@ -109,6 +111,24 @@ def CheckHashLength( hashes, hash_type = 'sha256' ):

+ def CheckTagService( tag_service_key: bytes ):
+
+     try:
+
+         service = HG.client_controller.services_manager.GetService( tag_service_key )
+
+     except:
+
+         raise HydrusExceptions.BadRequestException( 'Could not find that tag service!' )
+
+     if service.GetServiceType() not in HC.ALL_TAG_SERVICES:
+
+         raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a tag service!' )

def ConvertServiceNamesDictToKeys( allowed_service_types, service_name_dict ):

service_key_dict = {}
@@ -413,6 +433,60 @@ def ParseClientAPISearchPredicates( request ) -> typing.List[ ClientSearch.Predicate ]:

return predicates

+ def ParseDuplicateSearch( request: HydrusServerRequest.HydrusRequest ):
+
+     # TODO: When we have ParseLocationContext for clever file searching, swap it in here too
+     # LocationContext has to be the same for both searches
+     location_context = ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY )
+
+     tag_service_key_1 = request.parsed_request_args.GetValue( 'tag_service_key_1', bytes, default_value = CC.COMBINED_TAG_SERVICE_KEY )
+     tag_service_key_2 = request.parsed_request_args.GetValue( 'tag_service_key_2', bytes, default_value = CC.COMBINED_TAG_SERVICE_KEY )
+
+     CheckTagService( tag_service_key_1 )
+     CheckTagService( tag_service_key_2 )
+
+     tag_context_1 = ClientSearch.TagContext( service_key = tag_service_key_1 )
+     tag_context_2 = ClientSearch.TagContext( service_key = tag_service_key_2 )
+
+     tags_1 = request.parsed_request_args.GetValue( 'tags_1', list, default_value = [] )
+     tags_2 = request.parsed_request_args.GetValue( 'tags_2', list, default_value = [] )
+
+     if len( tags_1 ) == 0:
+
+         predicates_1 = [ ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_EVERYTHING ) ]
+
+     else:
+
+         predicates_1 = ConvertTagListToPredicates( request, tags_1, do_permission_check = False )
+
+     if len( tags_2 ) == 0:
+
+         predicates_2 = [ ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_EVERYTHING ) ]
+
+     else:
+
+         predicates_2 = ConvertTagListToPredicates( request, tags_2, do_permission_check = False )
+
+     file_search_context_1 = ClientSearch.FileSearchContext( location_context = location_context, tag_context = tag_context_1, predicates = predicates_1 )
+     file_search_context_2 = ClientSearch.FileSearchContext( location_context = location_context, tag_context = tag_context_2, predicates = predicates_2 )
+
+     dupe_search_type = request.parsed_request_args.GetValue( 'potentials_search_type', int, default_value = CC.DUPE_SEARCH_ONE_FILE_MATCHES_ONE_SEARCH )
+     pixel_dupes_preference = request.parsed_request_args.GetValue( 'pixel_duplicates', int, default_value = CC.SIMILAR_FILES_PIXEL_DUPES_ALLOWED )
+     max_hamming_distance = request.parsed_request_args.GetValue( 'max_hamming_distance', int, default_value = 4 )
+
+     return (
+         file_search_context_1,
+         file_search_context_2,
+         dupe_search_type,
+         pixel_dupes_preference,
+         max_hamming_distance
+     )

def ParseLocationContext( request: HydrusServerRequest.HydrusRequest, default: ClientLocation.LocationContext ):

if 'file_service_key' in request.parsed_request_args or 'file_service_name' in request.parsed_request_args:
@@ -456,6 +530,7 @@ def ParseLocationContext( request: HydrusServerRequest.HydrusRequest, default: C

return default

def ParseHashes( request: HydrusServerRequest.HydrusRequest ):

hashes = set()

@@ -496,6 +571,7 @@ def ParseHashes( request: HydrusServerRequest.HydrusRequest ):

return hashes

def ParseRequestedResponseMime( request: HydrusServerRequest.HydrusRequest ):

# let them ask for something specifically, else default to what they asked in, finally default to json
@@ -541,6 +617,38 @@ def ParseRequestedResponseMime( request: HydrusServerRequest.HydrusRequest ):

return HC.APPLICATION_JSON

+ def ParseTagServiceKey( request: HydrusServerRequest.HydrusRequest ):
+
+     if 'tag_service_key' in request.parsed_request_args or 'tag_service_name' in request.parsed_request_args:
+
+         if 'tag_service_key' in request.parsed_request_args:
+
+             tag_service_key = request.parsed_request_args[ 'tag_service_key' ]
+
+         else:
+
+             tag_service_name = request.parsed_request_args[ 'tag_service_name' ]
+
+             try:
+
+                 tag_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_TAG_SERVICES, tag_service_name )
+
+             except:
+
+                 raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( tag_service_name ) )
+
+         CheckTagService( tag_service_key )
+
+     else:
+
+         tag_service_key = CC.COMBINED_TAG_SERVICE_KEY
+
+     return tag_service_key

def ConvertTagListToPredicates( request, tag_list, do_permission_check = True, error_on_invalid_tag = True ) -> typing.List[ ClientSearch.Predicate ]:

or_tag_lists = [ tag for tag in tag_list if isinstance( tag, list ) ]
@@ -1246,6 +1354,7 @@ class HydrusResourceClientAPIRestrictedGetServices( HydrusResourceClientAPIRestr

ClientAPI.CLIENT_API_PERMISSION_ADD_TAGS,
ClientAPI.CLIENT_API_PERMISSION_ADD_NOTES,
ClientAPI.CLIENT_API_PERMISSION_MANAGE_PAGES,
+ ClientAPI.CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS,
ClientAPI.CLIENT_API_PERMISSION_SEARCH_FILES
)
)
@@ -1778,43 +1887,6 @@ class HydrusResourceClientAPIRestrictedAddTagsSearchTags( HydrusResourceClientAP

return parsed_autocomplete_text

- def _GetTagServiceKey( self, request: HydrusServerRequest.HydrusRequest ):
-
-     tag_service_key = CC.COMBINED_TAG_SERVICE_KEY
-
-     if 'tag_service_key' in request.parsed_request_args:
-
-         tag_service_key = request.parsed_request_args[ 'tag_service_key' ]
-
-     elif 'tag_service_name' in request.parsed_request_args:
-
-         tag_service_name = request.parsed_request_args[ 'tag_service_name' ]
-
-         try:
-
-             tag_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_TAG_SERVICES, tag_service_name )
-
-         except:
-
-             raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( tag_service_name ) )
-
-     try:
-
-         service = HG.client_controller.services_manager.GetService( tag_service_key )
-
-     except:
-
-         raise HydrusExceptions.BadRequestException( 'Could not find that tag service!' )
-
-     if service.GetServiceType() not in HC.ALL_TAG_SERVICES:
-
-         raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a tag service!' )
-
-     return tag_service_key

def _GetTagMatches( self, request: HydrusServerRequest.HydrusRequest, tag_display_type: int, tag_service_key: bytes, parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText ) -> typing.List[ ClientSearch.Predicate ]:

matches = []
@ -1855,7 +1927,7 @@ class HydrusResourceClientAPIRestrictedAddTagsSearchTags( HydrusResourceClientAP
|
|||
|
||||
tag_display_type = ClientTags.TAG_DISPLAY_STORAGE if tag_display_type_str == 'storage' else ClientTags.TAG_DISPLAY_ACTUAL
|
||||
|
||||
tag_service_key = self._GetTagServiceKey( request )
|
||||
tag_service_key = ParseTagServiceKey( request )
|
||||
|
||||
parsed_autocomplete_text = self._GetParsedAutocompleteText( search, tag_service_key )
|
||||
|
||||
|
@@ -2213,44 +2285,7 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient
        
        location_context = ParseLocationContext( request, ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY ) )
        
        if 'tag_service_key' in request.parsed_request_args or 'tag_service_name' in request.parsed_request_args:
            
            if 'tag_service_key' in request.parsed_request_args:
                
                tag_service_key = request.parsed_request_args[ 'tag_service_key' ]
                
            else:
                
                tag_service_name = request.parsed_request_args[ 'tag_service_name' ]
                
                try:
                    
                    tag_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_TAG_SERVICES, tag_service_name )
                    
                except:
                    
                    raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( tag_service_name ) )
                    
                
            
            try:
                
                service = HG.client_controller.services_manager.GetService( tag_service_key )
                
            except:
                
                raise HydrusExceptions.BadRequestException( 'Could not find that tag service!' )
                
            
            if service.GetServiceType() not in HC.ALL_TAG_SERVICES:
                
                raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a tag service!' )
                
            
        else:
            
            tag_service_key = CC.COMBINED_TAG_SERVICE_KEY
            
        tag_service_key = ParseTagServiceKey( request )
        
        if tag_service_key == CC.COMBINED_TAG_SERVICE_KEY and location_context.IsAllKnownFiles():
        
@@ -3099,6 +3134,285 @@ class HydrusResourceClientAPIRestrictedManageDatabaseMrBones( HydrusResourceClie
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationships( HydrusResourceClientAPIRestricted ):
    
    def _CheckAPIPermissions( self, request: HydrusServerRequest.HydrusRequest ):
        
        request.client_api_permissions.CheckPermission( ClientAPI.CLIENT_API_PERMISSION_MANAGE_FILE_RELATIONSHIPS )
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationshipsGetRelationships( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        # TODO: When we have ParseLocationContext for clever file searching, swap it in here too
        location_context = ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY )
        
        hashes = ParseHashes( request )
        
        # maybe in future we'll just get the media results and dump the dict from there, but whatever
        hashes_to_file_duplicates = HG.client_controller.Read( 'file_relationships_for_api', location_context, hashes )
        
        body_dict = { 'file_relationships' : hashes_to_file_duplicates }
        
        body = Dumps( body_dict, request.preferred_mime )
        
        response_context = HydrusServerResources.ResponseContext( 200, mime = request.preferred_mime, body = body )
        
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationshipsGetPotentialsCount( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        (
            file_search_context_1,
            file_search_context_2,
            dupe_search_type,
            pixel_dupes_preference,
            max_hamming_distance
        ) = ParseDuplicateSearch( request )
        
        count = HG.client_controller.Read( 'potential_duplicates_count', file_search_context_1, file_search_context_2, dupe_search_type, pixel_dupes_preference, max_hamming_distance )
        
        body_dict = { 'potential_duplicates_count' : count }
        
        body = Dumps( body_dict, request.preferred_mime )
        
        response_context = HydrusServerResources.ResponseContext( 200, mime = request.preferred_mime, body = body )
        
        return response_context
        
    
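The duplicate-search parameters this handler parses mirror the client's duplicate filter settings. As a client-side sketch of building the query string for this endpoint (a hypothetical helper; the parameter names `tag_service_key_1`, `tags_1`, `potentials_search_type`, `pixel_duplicates`, and `max_hamming_distance` are taken from the unit tests further down, and tag lists travel as JSON-encoded arrays):

```python
import json
import urllib.parse

def build_potentials_count_path( tag_service_key_hex, tags, potentials_search_type, pixel_duplicates, max_hamming_distance ):
    
    # every value is percent-encoded; the tag list is a JSON array in a single parameter
    params = {
        'tag_service_key_1' : tag_service_key_hex,
        'tags_1' : json.dumps( tags ),
        'potentials_search_type' : potentials_search_type,
        'pixel_duplicates' : pixel_duplicates,
        'max_hamming_distance' : max_hamming_distance
    }
    
    return '/manage_file_relationships/get_potentials_count?' + urllib.parse.urlencode( params )
    

path = build_potentials_count_path( 'abcd', [ 'skirt', 'system:width<400' ], 2, 1, 8 )
```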
class HydrusResourceClientAPIRestrictedManageFileRelationshipsGetPotentialPairs( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        (
            file_search_context_1,
            file_search_context_2,
            dupe_search_type,
            pixel_dupes_preference,
            max_hamming_distance
        ) = ParseDuplicateSearch( request )
        
        max_num_pairs = request.parsed_request_args.GetValue( 'max_num_pairs', int, default_value = HG.client_controller.new_options.GetInteger( 'duplicate_filter_max_batch_size' ) )
        
        filtering_pairs_media_results = HG.client_controller.Read( 'duplicate_pairs_for_filtering', file_search_context_1, file_search_context_2, dupe_search_type, pixel_dupes_preference, max_hamming_distance, max_num_pairs = max_num_pairs )
        
        filtering_pairs_hashes = [ ( m1.GetHash().hex(), m2.GetHash().hex() ) for ( m1, m2 ) in filtering_pairs_media_results ]
        
        body_dict = { 'potential_duplicate_pairs' : filtering_pairs_hashes }
        
        body = Dumps( body_dict, request.preferred_mime )
        
        response_context = HydrusServerResources.ResponseContext( 200, mime = request.preferred_mime, body = body )
        
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationshipsGetRandomPotentials( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        (
            file_search_context_1,
            file_search_context_2,
            dupe_search_type,
            pixel_dupes_preference,
            max_hamming_distance
        ) = ParseDuplicateSearch( request )
        
        hashes = HG.client_controller.Read( 'random_potential_duplicate_hashes', file_search_context_1, file_search_context_2, dupe_search_type, pixel_dupes_preference, max_hamming_distance )
        
        body_dict = { 'random_potential_duplicate_hashes' : [ hash.hex() for hash in hashes ] }
        
        body = Dumps( body_dict, request.preferred_mime )
        
        response_context = HydrusServerResources.ResponseContext( 200, mime = request.preferred_mime, body = body )
        
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationshipsSetKings( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        hashes = ParseHashes( request )
        
        for hash in hashes:
            
            HG.client_controller.WriteSynchronous( 'duplicate_set_king', hash )
            
        
        response_context = HydrusServerResources.ResponseContext( 200 )
        
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManageFileRelationshipsSetRelationships( HydrusResourceClientAPIRestrictedManageFileRelationships ):
    
    def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):
        
        rows = []
        
        raw_rows = request.parsed_request_args.GetValue( 'pair_rows', list, expected_list_type = list )
        
        all_hashes = set()
        
        for row in raw_rows:
            
            if len( row ) != 6:
                
                raise HydrusExceptions.BadRequestException( 'One of the pair rows was the wrong length!' )
                
            
            ( duplicate_type, hash_a_hex, hash_b_hex, do_default_content_merge, delete_first, delete_second ) = row
            
            try:
                
                hash_a = bytes.fromhex( hash_a_hex )
                hash_b = bytes.fromhex( hash_b_hex )
                
            except:
                
                raise HydrusExceptions.BadRequestException( 'Sorry, did not understand one of the hashes {} or {}!'.format( hash_a_hex, hash_b_hex ) )
                
            
            CheckHashLength( ( hash_a, hash_b ) )
            
            all_hashes.update( ( hash_a, hash_b ) )
            
        
        media_results = HG.client_controller.Read( 'media_results', all_hashes )
        
        hashes_to_media_results = { media_result.GetHash() : media_result for media_result in media_results }
        
        for row in raw_rows:
            
            ( duplicate_type, hash_a_hex, hash_b_hex, do_default_content_merge, delete_first, delete_second ) = row
            
            if duplicate_type not in [
                HC.DUPLICATE_FALSE_POSITIVE,
                HC.DUPLICATE_ALTERNATE,
                HC.DUPLICATE_BETTER,
                HC.DUPLICATE_WORSE,
                HC.DUPLICATE_SAME_QUALITY,
                HC.DUPLICATE_POTENTIAL
            ]:
                
                raise HydrusExceptions.BadRequestException( 'One of the duplicate statuses ({}) was incorrect!'.format( duplicate_type ) )
                
            
            try:
                
                hash_a = bytes.fromhex( hash_a_hex )
                hash_b = bytes.fromhex( hash_b_hex )
                
            except:
                
                raise HydrusExceptions.BadRequestException( 'Sorry, did not understand one of the hashes {} or {}!'.format( hash_a_hex, hash_b_hex ) )
                
            
            if not isinstance( do_default_content_merge, bool ):
                
                raise HydrusExceptions.BadRequestException( 'Sorry, "do_default_content_merge" has to be a boolean! "{}" was not!'.format( do_default_content_merge ) )
                
            
            if not isinstance( delete_first, bool ):
                
                raise HydrusExceptions.BadRequestException( 'Sorry, "delete_first" has to be a boolean! "{}" was not!'.format( delete_first ) )
                
            
            if not isinstance( delete_second, bool ):
                
                raise HydrusExceptions.BadRequestException( 'Sorry, "delete_second" has to be a boolean! "{}" was not!'.format( delete_second ) )
                
            
            # ok the raw row looks good
            
            list_of_service_keys_to_content_updates = []
            
            first_media = ClientMedia.MediaSingleton( hashes_to_media_results[ hash_a ] )
            second_media = ClientMedia.MediaSingleton( hashes_to_media_results[ hash_b ] )
            
            file_deletion_reason = 'From Client API (duplicates processing).'
            
            if do_default_content_merge:
                
                duplicate_content_merge_options = HG.client_controller.new_options.GetDuplicateContentMergeOptions( duplicate_type )
                
                list_of_service_keys_to_content_updates.append( duplicate_content_merge_options.ProcessPairIntoContentUpdates( first_media, second_media, file_deletion_reason = file_deletion_reason, delete_first = delete_first, delete_second = delete_second ) )
                
            elif delete_first or delete_second:
                
                service_keys_to_content_updates = collections.defaultdict( list )
                
                deletee_media = set()
                
                if delete_first:
                    
                    deletee_media.add( first_media )
                    
                
                if delete_second:
                    
                    deletee_media.add( second_media )
                    
                
                for media in deletee_media:
                    
                    if media.HasDeleteLocked():
                        
                        ClientMedia.ReportDeleteLockFailures( [ media ] )
                        
                        continue
                        
                    
                    if media.GetLocationsManager().IsTrashed():
                        
                        deletee_service_keys = ( CC.COMBINED_LOCAL_FILE_SERVICE_KEY, )
                        
                    else:
                        
                        local_file_service_keys = HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_FILE_DOMAIN, ) )
                        
                        deletee_service_keys = media.GetLocationsManager().GetCurrent().intersection( local_file_service_keys )
                        
                    
                    for deletee_service_key in deletee_service_keys:
                        
                        content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, media.GetHashes(), reason = file_deletion_reason )
                        
                        service_keys_to_content_updates[ deletee_service_key ].append( content_update )
                        
                    
                
                list_of_service_keys_to_content_updates.append( service_keys_to_content_updates )
                
            
            rows.append( ( duplicate_type, hash_a, hash_b, list_of_service_keys_to_content_updates ) )
            
        
        if len( rows ) > 0:
            
            HG.client_controller.WriteSynchronous( 'duplicate_pair_status', rows )
            
        
        response_context = HydrusServerResources.ResponseContext( 200 )
        
        return response_context
        
    
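As the validation above enforces, every `pair_rows` row must be a 6-element row of duplicate type, two hex hashes, and three booleans. A sketch of a JSON body that would pass those checks (the hashes and the numeric duplicate type value here are placeholders, not real values):

```python
import json

# placeholder 64-character hex hashes for a pair of files
hash_a_hex = '00' * 32
hash_b_hex = 'ff' * 32

duplicate_type = 4  # placeholder; must be one of the accepted HC.DUPLICATE_... values

# ( duplicate_type, hash_a, hash_b, do_default_content_merge, delete_first, delete_second )
row = [ duplicate_type, hash_a_hex, hash_b_hex, True, False, False ]

body = json.dumps( { 'pair_rows' : [ row ] } )
```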
class HydrusResourceClientAPIRestrictedManagePages( HydrusResourceClientAPIRestricted ):
    
    def _CheckAPIPermissions( self, request: HydrusServerRequest.HydrusRequest ):
        
@@ -3106,6 +3420,7 @@ class HydrusResourceClientAPIRestrictedManagePages( HydrusResourceClientAPIRestr
        request.client_api_permissions.CheckPermission( ClientAPI.CLIENT_API_PERMISSION_MANAGE_PAGES )
        
    

class HydrusResourceClientAPIRestrictedManagePagesAddFiles( HydrusResourceClientAPIRestrictedManagePages ):
    
    def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):
        
@@ -3183,6 +3498,7 @@ class HydrusResourceClientAPIRestrictedManagePagesAddFiles( HydrusResourceClient
        return response_context
        
    

class HydrusResourceClientAPIRestrictedManagePagesFocusPage( HydrusResourceClientAPIRestrictedManagePages ):
    
    def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):
        
@@ -2,6 +2,7 @@ import os
import sqlite3
import sys
import typing

import yaml

# old method of getting frozen dir, doesn't work for symlinks looks like:

@@ -83,8 +84,8 @@ options = {}
# Misc

NETWORK_VERSION = 20
SOFTWARE_VERSION = 512
CLIENT_API_VERSION = 39
SOFTWARE_VERSION = 513
CLIENT_API_VERSION = 40

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -1,5 +1,14 @@
import random
import unittest

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG

from hydrus.client import ClientConstants as CC
from hydrus.client.media import ClientMediaManagers
from hydrus.client.media import ClientMediaResult

def compare_content_updates( ut: unittest.TestCase, service_keys_to_content_updates, expected_service_keys_to_content_updates ):
    
    ut.assertEqual( len( service_keys_to_content_updates ), len( expected_service_keys_to_content_updates ) )
    
@@ -14,3 +23,55 @@ def compare_content_updates( ut: unittest.TestCase, service_keys_to_content_upda
    ut.assertEqual( c_u_tuples, e_c_u_tuples )
    

def GetFakeMediaResult( hash: bytes ):
    
    hash_id = random.randint( 0, 200 * ( 1024 ** 2 ) )
    
    size = random.randint( 8192, 20 * 1048576 )
    mime = random.choice( [ HC.IMAGE_JPEG, HC.VIDEO_WEBM, HC.APPLICATION_PDF ] )
    width = random.randint( 200, 4096 )
    height = random.randint( 200, 4096 )
    duration = random.choice( [ 220, 16.66667, None ] )
    has_audio = random.choice( [ True, False ] )
    
    file_info_manager = ClientMediaManagers.FileInfoManager( hash_id, hash, size = size, mime = mime, width = width, height = height, duration = duration, has_audio = has_audio )
    
    file_info_manager.has_exif = True
    file_info_manager.has_icc_profile = True
    
    service_keys_to_statuses_to_tags = { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY : { HC.CONTENT_STATUS_CURRENT : { 'blue_eyes', 'blonde_hair' }, HC.CONTENT_STATUS_PENDING : { 'bodysuit' } } }
    service_keys_to_statuses_to_display_tags = { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY : { HC.CONTENT_STATUS_CURRENT : { 'blue eyes', 'blonde hair' }, HC.CONTENT_STATUS_PENDING : { 'bodysuit', 'clothing' } } }
    
    service_keys_to_filenames = {}
    
    import_timestamp = random.randint( HydrusData.GetNow() - 1000000, HydrusData.GetNow() - 15 )
    
    current_to_timestamps = { CC.COMBINED_LOCAL_FILE_SERVICE_KEY : import_timestamp, CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY : import_timestamp, CC.LOCAL_FILE_SERVICE_KEY : import_timestamp }
    
    tags_manager = ClientMediaManagers.TagsManager( service_keys_to_statuses_to_tags, service_keys_to_statuses_to_display_tags )
    
    timestamp_manager = ClientMediaManagers.TimestampManager()
    
    file_modified_timestamp = random.randint( import_timestamp - 50000, import_timestamp - 1 )
    
    timestamp_manager.SetFileModifiedTimestamp( file_modified_timestamp )
    
    locations_manager = ClientMediaManagers.LocationsManager(
        current_to_timestamps,
        {},
        set(),
        set(),
        inbox = False,
        urls = set(),
        service_keys_to_filenames = service_keys_to_filenames,
        timestamp_manager = timestamp_manager
    )
    
    ratings_manager = ClientMediaManagers.RatingsManager( {} )
    notes_manager = ClientMediaManagers.NotesManager( { 'note' : 'hello', 'note2' : 'hello2' } )
    file_viewing_stats_manager = ClientMediaManagers.FileViewingStatsManager.STATICGenerateEmptyManager()
    
    media_result = ClientMediaResult.MediaResult( file_info_manager, tags_manager, locations_manager, ratings_manager, notes_manager, file_viewing_stats_manager )
    
    return media_result

@@ -8,6 +8,7 @@ import shutil
import time
import unittest
import urllib
import urllib.parse

from twisted.internet import reactor

@@ -21,7 +22,9 @@ from hydrus.core import HydrusText

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientAPI
from hydrus.client import ClientLocation
from hydrus.client import ClientSearch
from hydrus.client import ClientSearchParseSystemPredicates
from hydrus.client import ClientServices
from hydrus.client.importing import ClientImportFiles
from hydrus.client.media import ClientMediaManagers

@@ -31,6 +34,8 @@ from hydrus.client.networking import ClientLocalServer
from hydrus.client.networking import ClientLocalServerResources
from hydrus.client.networking import ClientNetworkingContexts

from hydrus.test import HelperFunctions

CBOR_AVAILABLE = False
try:
    import cbor2
@@ -2506,6 +2511,417 @@ class TestClientAPI( unittest.TestCase ):
        self.assertEqual( boned_stats, dict( expected_data ) )
        
    
    def _test_manage_duplicates( self, connection, set_up_permissions ):
        
        # this stuff is super dependent on the db requests, which aren't tested in this class, but we can do the arg parsing and wrapper
        
        api_permissions = set_up_permissions[ 'everything' ]
        
        access_key_hex = api_permissions.GetAccessKey().hex()
        
        headers = { 'Hydrus-Client-API-Access-Key' : access_key_hex }
        
        default_location_context = ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY )
        
        # file relationships
        
        file_relationships_hash = bytes.fromhex( 'ac940bb9026c430ea9530b4f4f6980a12d9432c2af8d9d39dfc67b05d91df11d' )
        
        # yes the database returns hex hashes in this case
        example_response = {
            "file_relationships" : {
                "ac940bb9026c430ea9530b4f4f6980a12d9432c2af8d9d39dfc67b05d91df11d" : {
                    "is_king" : False,
                    "king" : "8784afbfd8b59de3dcf2c13dc1be9d7cb0b3d376803c8a7a8b710c7c191bb657",
                    "0" : [
                    ],
                    "1" : [],
                    "3" : [
                        "8bf267c4c021ae4fd7c4b90b0a381044539519f80d148359b0ce61ce1684fefe"
                    ],
                    "8" : [
                        "8784afbfd8b59de3dcf2c13dc1be9d7cb0b3d376803c8a7a8b710c7c191bb657",
                        "3fa8ef54811ec8c2d1892f4f08da01e7fc17eed863acae897eb30461b051d5c3"
                    ]
                }
            }
        }
        
        HG.test_controller.SetRead( 'file_relationships_for_api', example_response )
        
        path = '/manage_file_relationships/get_file_relationships?hash={}'.format( file_relationships_hash.hex() )
        
        connection.request( 'GET', path, headers = headers )
        
        response = connection.getresponse()
        
        data = response.read()
        
        text = str( data, 'utf-8' )
        
        self.assertEqual( response.status, 200 )
        
        d = json.loads( text )
        
        self.assertEqual( d[ 'file_relationships' ], example_response )
        
        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_relationships_for_api' )
        
        ( location_context, hashes ) = args
        
        self.assertEqual( location_context, default_location_context )
        self.assertEqual( hashes, { file_relationships_hash } )
        
        # search files failed tag permission
        
        tag_context = ClientSearch.TagContext( CC.COMBINED_TAG_SERVICE_KEY )
        predicates = { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_EVERYTHING ) }
        
        default_file_search_context = ClientSearch.FileSearchContext( location_context = default_location_context, tag_context = tag_context, predicates = predicates )
        
        default_potentials_search_type = CC.DUPE_SEARCH_ONE_FILE_MATCHES_ONE_SEARCH
        default_pixel_duplicates = CC.SIMILAR_FILES_PIXEL_DUPES_ALLOWED
        default_max_hamming_distance = 4
        
        test_tag_service_key_1 = CC.DEFAULT_LOCAL_TAG_SERVICE_KEY
        test_tags_1 = [ 'skirt', 'system:width<400' ]
        
        test_tag_context_1 = ClientSearch.TagContext( test_tag_service_key_1 )
        test_predicates_1 = ClientLocalServerResources.ConvertTagListToPredicates( None, test_tags_1, do_permission_check = False )
        
        test_file_search_context_1 = ClientSearch.FileSearchContext( location_context = default_location_context, tag_context = test_tag_context_1, predicates = test_predicates_1 )
        
        test_tag_service_key_2 = HG.test_controller.example_tag_repo_service_key
        test_tags_2 = [ 'system:untagged' ]
        
        test_tag_context_2 = ClientSearch.TagContext( test_tag_service_key_2 )
        test_predicates_2 = ClientLocalServerResources.ConvertTagListToPredicates( None, test_tags_2, do_permission_check = False )
        
        test_file_search_context_2 = ClientSearch.FileSearchContext( location_context = default_location_context, tag_context = test_tag_context_2, predicates = test_predicates_2 )
        
        test_potentials_search_type = CC.DUPE_SEARCH_BOTH_FILES_MATCH_DIFFERENT_SEARCHES
        test_pixel_duplicates = CC.SIMILAR_FILES_PIXEL_DUPES_EXCLUDED
        test_max_hamming_distance = 8
        
        # get count
        
        HG.test_controller.SetRead( 'potential_duplicates_count', 5 )
        
        path = '/manage_file_relationships/get_potentials_count'
        
        connection.request( 'GET', path, headers = headers )
        
        response = connection.getresponse()
        
        data = response.read()
        
        text = str( data, 'utf-8' )
        
        self.assertEqual( response.status, 200 )
        
        d = json.loads( text )
        
        self.assertEqual( d[ 'potential_duplicates_count' ], 5 )
        
        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'potential_duplicates_count' )
        
        ( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args
        
        self.assertEqual( file_search_context_1.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
        self.assertEqual( file_search_context_2.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
        self.assertEqual( potentials_search_type, default_potentials_search_type )
        self.assertEqual( pixel_duplicates, default_pixel_duplicates )
        self.assertEqual( max_hamming_distance, default_max_hamming_distance )
        
        # get count with params
        
        HG.test_controller.SetRead( 'potential_duplicates_count', 5 )
        
        path = '/manage_file_relationships/get_potentials_count?tag_service_key_1={}&tags_1={}&tag_service_key_2={}&tags_2={}&potentials_search_type={}&pixel_duplicates={}&max_hamming_distance={}'.format(
            test_tag_service_key_1.hex(),
            urllib.parse.quote( json.dumps( test_tags_1 ) ),
            test_tag_service_key_2.hex(),
            urllib.parse.quote( json.dumps( test_tags_2 ) ),
            test_potentials_search_type,
            test_pixel_duplicates,
            test_max_hamming_distance
        )
        
        connection.request( 'GET', path, headers = headers )
        
        response = connection.getresponse()
        
        data = response.read()
        
        text = str( data, 'utf-8' )
        
        self.assertEqual( response.status, 200 )
        
        d = json.loads( text )
        
        self.assertEqual( d[ 'potential_duplicates_count' ], 5 )
        
        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'potential_duplicates_count' )
        
        ( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args
        
        self.assertEqual( file_search_context_1.GetSerialisableTuple(), test_file_search_context_1.GetSerialisableTuple() )
        self.assertEqual( file_search_context_2.GetSerialisableTuple(), test_file_search_context_2.GetSerialisableTuple() )
        self.assertEqual( potentials_search_type, test_potentials_search_type )
        self.assertEqual( pixel_duplicates, test_pixel_duplicates )
        self.assertEqual( max_hamming_distance, test_max_hamming_distance )
        
        # get pairs
        
        default_max_num_pairs = 250
        test_max_num_pairs = 20
        
        test_hash_pairs = [ ( os.urandom( 32 ), os.urandom( 32 ) ) for i in range( 10 ) ]
        test_media_result_pairs = [ ( HelperFunctions.GetFakeMediaResult( h1 ), HelperFunctions.GetFakeMediaResult( h2 ) ) for ( h1, h2 ) in test_hash_pairs ]
        test_hash_pairs_hex = [ [ h1.hex(), h2.hex() ] for ( h1, h2 ) in test_hash_pairs ]
        
        HG.test_controller.SetRead( 'duplicate_pairs_for_filtering', test_media_result_pairs )
        
        path = '/manage_file_relationships/get_potential_pairs'
        
        connection.request( 'GET', path, headers = headers )
        
        response = connection.getresponse()
        
        data = response.read()
        
        text = str( data, 'utf-8' )
        
        self.assertEqual( response.status, 200 )
        
        d = json.loads( text )
        
        self.assertEqual( d[ 'potential_duplicate_pairs' ], test_hash_pairs_hex )
        
        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'duplicate_pairs_for_filtering' )
        
        ( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args
        
        max_num_pairs = kwargs[ 'max_num_pairs' ]
        
        self.assertEqual( file_search_context_1.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
        self.assertEqual( file_search_context_2.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
        self.assertEqual( potentials_search_type, default_potentials_search_type )
        self.assertEqual( pixel_duplicates, default_pixel_duplicates )
        self.assertEqual( max_hamming_distance, default_max_hamming_distance )
        self.assertEqual( max_num_pairs, default_max_num_pairs )
        
        # get pairs with params
        
        HG.test_controller.SetRead( 'duplicate_pairs_for_filtering', test_media_result_pairs )
        
        path = '/manage_file_relationships/get_potential_pairs?tag_service_key_1={}&tags_1={}&tag_service_key_2={}&tags_2={}&potentials_search_type={}&pixel_duplicates={}&max_hamming_distance={}&max_num_pairs={}'.format(
            test_tag_service_key_1.hex(),
            urllib.parse.quote( json.dumps( test_tags_1 ) ),
            test_tag_service_key_2.hex(),
            urllib.parse.quote( json.dumps( test_tags_2 ) ),
            test_potentials_search_type,
            test_pixel_duplicates,
            test_max_hamming_distance,
            test_max_num_pairs
        )
        
        connection.request( 'GET', path, headers = headers )
        
        response = connection.getresponse()
        
        data = response.read()
        
        text = str( data, 'utf-8' )
        
        self.assertEqual( response.status, 200 )
        
        d = json.loads( text )
        
        self.assertEqual( d[ 'potential_duplicate_pairs' ], test_hash_pairs_hex )
        
        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'duplicate_pairs_for_filtering' )
        
        ( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args
        
        max_num_pairs = kwargs[ 'max_num_pairs' ]
        
        self.assertEqual( file_search_context_1.GetSerialisableTuple(), test_file_search_context_1.GetSerialisableTuple() )
        self.assertEqual( file_search_context_2.GetSerialisableTuple(), test_file_search_context_2.GetSerialisableTuple() )
        self.assertEqual( potentials_search_type, test_potentials_search_type )
        self.assertEqual( pixel_duplicates, test_pixel_duplicates )
        self.assertEqual( max_hamming_distance, test_max_hamming_distance )
        self.assertEqual( max_num_pairs, test_max_num_pairs )
        
# get random

test_hashes = [ os.urandom( 32 ) for i in range( 6 ) ]
test_hash_pairs_hex = [ h.hex() for h in test_hashes ]

HG.test_controller.SetRead( 'random_potential_duplicate_hashes', test_hashes )

path = '/manage_file_relationships/get_random_potentials'

connection.request( 'GET', path, headers = headers )

response = connection.getresponse()

data = response.read()

text = str( data, 'utf-8' )

self.assertEqual( response.status, 200 )

d = json.loads( text )

self.assertEqual( d[ 'random_potential_duplicate_hashes' ], test_hash_pairs_hex )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'random_potential_duplicate_hashes' )

( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args

self.assertEqual( file_search_context_1.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
self.assertEqual( file_search_context_2.GetSerialisableTuple(), default_file_search_context.GetSerialisableTuple() )
self.assertEqual( potentials_search_type, default_potentials_search_type )
self.assertEqual( pixel_duplicates, default_pixel_duplicates )
self.assertEqual( max_hamming_distance, default_max_hamming_distance )

# get random with params

HG.test_controller.SetRead( 'random_potential_duplicate_hashes', test_hashes )

path = '/manage_file_relationships/get_random_potentials?tag_service_key_1={}&tags_1={}&tag_service_key_2={}&tags_2={}&potentials_search_type={}&pixel_duplicates={}&max_hamming_distance={}'.format(
    test_tag_service_key_1.hex(),
    urllib.parse.quote( json.dumps( test_tags_1 ) ),
    test_tag_service_key_2.hex(),
    urllib.parse.quote( json.dumps( test_tags_2 ) ),
    test_potentials_search_type,
    test_pixel_duplicates,
    test_max_hamming_distance
)

connection.request( 'GET', path, headers = headers )

response = connection.getresponse()

data = response.read()

text = str( data, 'utf-8' )

self.assertEqual( response.status, 200 )

d = json.loads( text )

self.assertEqual( d[ 'random_potential_duplicate_hashes' ], test_hash_pairs_hex )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'random_potential_duplicate_hashes' )

( file_search_context_1, file_search_context_2, potentials_search_type, pixel_duplicates, max_hamming_distance ) = args

self.assertEqual( file_search_context_1.GetSerialisableTuple(), test_file_search_context_1.GetSerialisableTuple() )
self.assertEqual( file_search_context_2.GetSerialisableTuple(), test_file_search_context_2.GetSerialisableTuple() )
self.assertEqual( potentials_search_type, test_potentials_search_type )
self.assertEqual( pixel_duplicates, test_pixel_duplicates )
self.assertEqual( max_hamming_distance, test_max_hamming_distance )

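The query string the test builds above can be sketched from the client side. This is a rough external-client helper, not hydrus code: the function name and the placeholder key/tag values are made up, and `urllib.parse.urlencode` stands in for the `quote( json.dumps( ... ) )` plus `format` combination the test uses; the parameter names themselves match the test's request.

```python
import json
import urllib.parse

def build_get_random_potentials_path( tag_service_key_1_hex, tags_1, tag_service_key_2_hex, tags_2, potentials_search_type, pixel_duplicates, max_hamming_distance ):
    
    # tag lists travel as URL-encoded JSON, mirroring the quote( json.dumps( ... ) ) calls in the test
    params = {
        'tag_service_key_1' : tag_service_key_1_hex,
        'tags_1' : json.dumps( tags_1 ),
        'tag_service_key_2' : tag_service_key_2_hex,
        'tags_2' : json.dumps( tags_2 ),
        'potentials_search_type' : potentials_search_type,
        'pixel_duplicates' : pixel_duplicates,
        'max_hamming_distance' : max_hamming_distance
    }
    
    return '/manage_file_relationships/get_random_potentials?' + urllib.parse.urlencode( params )

# placeholder service keys and tags, just to show the shape of the path
path = build_get_random_potentials_path( 'deadbeef', [ 'skirt' ], 'deadbeef', [ 'skirt' ], 2, 1, 4 )
```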
# set relationship

# this is tricky to test fully

HG.test_controller.ClearWrites( 'duplicate_pair_status' )
HG.test_controller.ClearReads( 'media_result' )

hashes = {
    'b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2',
    'bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845',
    '22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2',
    '65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423',
    '0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec',
    '5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7'
}

# TODO: populate the fakes here with real tags and test actual content merge
# to test the content merge, we'd want to set some content merge options and populate these fakes with real tags
# don't need to be too clever, just test one thing and we know it'll all be hooked up right
HG.test_controller.SetRead( 'media_results', [ HelperFunctions.GetFakeMediaResult( bytes.fromhex( hash_hex ) ) for hash_hex in hashes ] )

headers = { 'Hydrus-Client-API-Access-Key' : access_key_hex, 'Content-Type' : HC.mime_mimetype_string_lookup[ HC.APPLICATION_JSON ] }

path = '/manage_file_relationships/set_file_relationships'

test_pair_rows = [
    [ 4, "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2", "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845", False, False, True ],
    [ 4, "22667427eaa221e2bd7ef405e1d2983846c863d40b2999ce8d1bf5f0c18f5fb2", "65d228adfa722f3cd0363853a191898abe8bf92d9a514c6c7f3c89cfed0bf423", False, False, True ],
    [ 2, "0480513ffec391b77ad8c4e57fe80e5b710adfa3cb6af19b02a0bd7920f2d3ec", "5fab162576617b5c3fc8caabea53ce3ab1a3c8e0a16c16ae7b4e4a21eab168a7", False, False, False ]
]

request_dict = { 'pair_rows' : test_pair_rows }
request_body = json.dumps( request_dict )

connection.request( 'POST', path, body = request_body, headers = headers )

response = connection.getresponse()
data = response.read()

self.assertEqual( response.status, 200 )

[ ( args, kwargs ) ] = HG.test_controller.GetWrite( 'duplicate_pair_status' )

( written_rows, ) = args

def delete_thing( h, do_it ):
    
    if do_it:
        
        c = collections.defaultdict( list )
        
        c[ b'local files' ] = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, { bytes.fromhex( h ) }, reason = 'From Client API (duplicates processing).' ) ]
        
        return [ c ]
        
    else:
        
        return []
    

expected_written_rows = [ ( duplicate_type, bytes.fromhex( hash_a_hex ), bytes.fromhex( hash_b_hex ), delete_thing( hash_b_hex, delete_second ) ) for ( duplicate_type, hash_a_hex, hash_b_hex, merge, delete_first, delete_second ) in test_pair_rows ]

self.assertEqual( written_rows, expected_written_rows )

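The shape of one `pair_rows` entry is easy to miss inside the test. This sketch labels each column using the names from the tuple the `expected_written_rows` comprehension above unpacks; the hashes are the same placeholder values the test uses, and the `duplicate_type` integer is one of hydrus's duplicate status constants.

```python
import json

# column order matches ( duplicate_type, hash_a_hex, hash_b_hex, merge, delete_first, delete_second )
pair_row = [
    4,                                                                    # duplicate_type constant
    'b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2',   # hash_a_hex
    'bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845',   # hash_b_hex
    False,                                                                # merge
    False,                                                                # delete_first
    True                                                                  # delete_second
]

request_body = json.dumps( { 'pair_rows' : [ pair_row ] } )
```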
# set kings

HG.test_controller.ClearWrites( 'duplicate_set_king' )

headers = { 'Hydrus-Client-API-Access-Key' : access_key_hex, 'Content-Type' : HC.mime_mimetype_string_lookup[ HC.APPLICATION_JSON ] }

path = '/manage_file_relationships/set_kings'

test_hashes = [
    "b54d09218e0d6efc964b78b070620a1fa19c7e069672b4c6313cee2c9b0623f2",
    "bbaa9876dab238dcf5799bfd8319ed0bab805e844f45cf0de33f40697b11a845"
]

request_dict = { 'hashes' : test_hashes }
request_body = json.dumps( request_dict )

connection.request( 'POST', path, body = request_body, headers = headers )

response = connection.getresponse()
data = response.read()

self.assertEqual( response.status, 200 )

[ ( args1, kwargs1 ), ( args2, kwargs2 ) ] = HG.test_controller.GetWrite( 'duplicate_set_king' )

self.assertEqual( { args1[0], args2[0] }, { bytes.fromhex( h ) for h in test_hashes } )

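The `set_kings` exchange closes out the hash plumbing the whole test leans on: hashes cross the API as 64-character hex strings and live internally as 32 raw bytes, round-tripped with `.hex()` and `bytes.fromhex()`. A minimal illustration:

```python
import os

raw = os.urandom( 32 )        # stand-in for a 32-byte file hash, as in the test's os.urandom( 32 )
over_the_wire = raw.hex()     # 64 hex characters in the JSON request/response

# the round-trip is lossless
assert len( over_the_wire ) == 64
assert bytes.fromhex( over_the_wire ) == raw
```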

def _test_manage_pages( self, connection, set_up_permissions ):
    
    api_permissions = set_up_permissions[ 'manage_pages' ]

@@ -4183,6 +4599,7 @@ class TestClientAPI( unittest.TestCase ):

self._test_add_tags( connection, set_up_permissions )
self._test_add_tags_search_tags( connection, set_up_permissions )
self._test_add_urls( connection, set_up_permissions )
self._test_manage_duplicates( connection, set_up_permissions )
self._test_manage_cookies( connection, set_up_permissions )
self._test_manage_pages( connection, set_up_permissions )
self._test_search_files( connection, set_up_permissions )

@@ -39,6 +39,7 @@ SET /P install_type=Do you want the (s)imple or (a)dvanced install?

IF "%install_type%" == "s" goto :create
IF "%install_type%" == "a" goto :question_qt
IF "%install_type%" == "d" goto :create
goto :parse_fail

:question_qt

@@ -98,7 +99,23 @@ IF "%install_type%" == "s" (

python -m pip install -r requirements.txt

) ELSE (
)

IF "%install_type%" == "d" (

python -m pip install -r static\requirements\advanced\requirements_core.txt

python -m pip install -r static\requirements\advanced\requirements_qt6_test.txt
python -m pip install pyside2
python -m pip install PyQtChart PyQt5
python -m pip install PyQt6-Charts PyQt6
python -m pip install -r static\requirements\advanced\requirements_new_mpv.txt
python -m pip install -r static\requirements\advanced\requirements_new_opencv.txt
python -m pip install -r static\requirements\hydev\requirements_windows_build.txt

)

IF "%install_type%" == "a" (

python -m pip install -r static\requirements\advanced\requirements_core.txt