parent c79355fcc5
commit b4afedb617
@ -7,6 +7,55 @@ title: Changelog

!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 545](https://github.com/hydrusnetwork/hydrus/releases/tag/v545)

### blurhash

* thanks to a user's work, hydrus now calculates the [blurhash](https://blurha.sh/) of files with a thumbnail! (issue #394)
* if a file has no thumbnail but does have a blurhash (e.g. missing files, or files you previously deleted and are looking at in a clever view), it now presents a thumbnail generated from that blurhash
* all existing thumbnail-having files are scheduled for a blurhash calculation (this is a new job in the file maintenance system). if you have hundreds of thousands of files, expect it to take a couple of weeks/months to clear. if you need to hurry this along, the queue is under _database->file maintenance_
* any time a file's thumbnail changes, the blurhash is scheduled for a regen
* for this first version, the blurhash is very small and simple, either 15 or 16 cells for ~34 bytes. if we end up using it a lot somewhere, I'd be open to making a size setting so you can set 8x8 or higher grids for actually decent blur-thumbs
* a new _help->debug_ report mode switches to blurhashes instead of normal thumbs
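As a rough sketch of what those ~34 bytes contain (this follows the published blurhash spec, not any hydrus-internal code): the first base-83 character encodes the component grid, and characters 2-5 encode the average colour as a 24-bit sRGB value. A minimal pure-Python decoder for just that header, run against the example hash from the Client API docs further down:

```python
# the blurhash base-83 alphabet, from the public spec
ALPHABET = (
    '0123456789'
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    'abcdefghijklmnopqrstuvwxyz'
    '#$%*+,-.:;=?@[]^_{|}~'
)

def decode_base83(s):
    value = 0
    for c in s:
        value = value * 83 + ALPHABET.index(c)
    return value

def blurhash_header(h):
    # char 0: component counts; chars 2-5: average colour as 24-bit sRGB
    size_flag = decode_base83(h[0])
    num_x = (size_flag % 9) + 1
    num_y = (size_flag // 9) + 1
    rgb_int = decode_base83(h[2:6])
    rgb = (rgb_int >> 16, (rgb_int >> 8) & 255, rgb_int & 255)
    return (num_x, num_y), rgb

grid, average = blurhash_header('U6PZfSi_.AyE_3t7t7R**0o#DgR4_3R*D%xt')
# a 4x4 grid (16 cells, matching the '15 or 16 cells' above) with a
# light grey average colour
```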
### file history search

* I did to the file history chart (_help->view file history_) what I did to mr bones a couple weeks ago. you can now search your history of imports, archives, and deletes for creator x, filetype y, or any other search you can think of
* I hacked this all together right at the end of my week, so please bear with me if there are bugs or dumb permitted domains/results. the default action when you first open it up should all work the same way as before, no worries™, but let me know how you get on and I'll fix it!
* there's more to do here. we'll want a hideable search panel, a widget to control the resolution of the chart (currently fixed at 7680 to look good blown up on a 4k), and it'd be nice to have a selectable date range
* in the longer term future, it'd be nice to have more lines of data and that chart tech you see on financial sites where it shows you the current value where your mouse is

### client api

* the `file_metadata` call now says the new blurhash. if you pipe it into a blurhash library and blow it up to an appropriate ratio canvas, it _should_ just work. the typical use is as a placeholder while you wait for thumbs/files to download
* a new `include_blurhash` parameter will include the blurhash when `only_return_basic_information` is true
* `file_metadata` also shows the file's `pixel_hash` now. the algorithm here is proprietary to hydrus, but you can throw it into 'system:similar files' to find pixel dupes. I expect to add perceptual hashes too
* the help is updated to talk about this
* I updated the unit tests to deal with this
* the error when the api fails to parse the client api header is now a properly handled 400 (previously it was falling to the 500 backstop)
* the client api version is now 53
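For illustration, a request for the trimmed metadata plus blurhash might be built like this. The endpoint and parameter names are as documented in the Client API help; the port and access key are placeholders, and in practice you would send the key as a header or use a session key:

```python
import json
from urllib.parse import urlencode

def build_file_metadata_url(base_url, file_ids, access_key):
    # only_return_basic_information trims the response down to the basics;
    # the new include_blurhash parameter adds the blurhash field back in
    params = {
        'file_ids': json.dumps(file_ids),
        'only_return_basic_information': 'true',
        'include_blurhash': 'true',
        'Hydrus-Client-API-Access-Key': access_key,
    }
    return f'{base_url}/get_files/file_metadata?{urlencode(params)}'

url = build_file_metadata_url('http://127.0.0.1:45869', [1, 2], 'your_key_here')
```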
### misc

* I'm sorry to say I'm removing the Deviant Art artist search and login script for all new users, since they are both broken. DA have been killing their nice old API in pieces, and they finally took down the old artist gallery fetch. :(. there may be a way to finagle-parse their new phone-friendly, live-loading, cloud-deployed engine, but when I look at it, it seems like a much bigger mess than hydrus's parsing system can happily handle atm. the 'correct' way to programmatically parse DA is through their new OAuth API, which we simply do not support. individual page URLs seem to still work, but I expect them to go soon too. Sorry folks, try gallery-dl for now--they have a robust OAuth solution
* thanks to a user, we now have 'epub' ebook support! no 'num_words' support yet, but it looks like epubs are really just zips with some weird metadata files and a bunch of html inside, so I think this'll be doable with a future hacky parser. all your existing zip files will be scheduled for a metadata rescan to see if they are actually epubs (this'll capture any secret kritas and procreates, too, I think)
* the main UI-level media object is now aware of a file's pixel hash. this is now used in the duplicate filter's 'these are pixel duplicates' statements to save CPU. the jank old on-the-fly calculation code is all removed now, and if these values are missing from the media object, a message will now be shown saying the pixel dupe status could not be determined. we have had multiple rounds of regen over the past year and thus almost all clients have full database data here, so fingers crossed we won't see this error state much if at all, but let me know if you do and I'll figure out a button to accelerate the fix
* the thumbnail _right-click->open->similar files_ menu now has an entry for 'open the selection in a new duplicate filter page', letting you quickly resolve the duplicates that involve the selected files
* pixel hash and blurhash are now listed, with the actual hash value, in the _share->copy->hash_ thumbnail right-click menu
* thanks to a user, 'MPO' jpegs (some weird multi-picture jpeg that we can't page through yet) now parse their EXIF correctly and should rotate on a metadata-reparse. since these are rare, I'm not going to schedule a rescan over everyone's jpegs, but if you see a jpeg that is rotated wrong, try hitting _manage->regenerate->file metadata_ on its thumbnail menu
* I may have fixed a rare hang when highlighting a downloader/watcher during very busy network time that involves that importer
* added a warning to the 'getting started with installing' and 'database migration' help about running the SQLite database off a compressed filesystem--don't do it!
* fixed thumbnail generation for greyscale PSDs (and perhaps some others)

### boring cleanup

* I cleaned some code and added some tests around the new blurhash tech and thumbs in general
* a variety of metadata changes such as 'has exif', 'has icc profile' now trigger a live update on thumbnails currently loaded into the UI
* cleaned up some old file metadata loading code
* re-sorted the job list dropdown in the file maintenance dialog
* some file maintenance database work should be a bit faster
* fixed some behind the scenes stuff when the file history chart has no file info to show

## [Version 544](https://github.com/hydrusnetwork/hydrus/releases/tag/v544)

### webp vulnerability
@ -367,46 +416,3 @@ title: Changelog

* wrote a unit test to catch the new delete lock test
* deleted the old-and-deprecated-in-one-week 'pair_rows' parameter-handling code in the set_file_relationships command
* the client api version is now 49

## [Version 535](https://github.com/hydrusnetwork/hydrus/releases/tag/v535)

### misc

* thanks to a user, we now have Krita (.kra, .krz) support! it even pulls thumbnails!
* thanks to another user, we now have SVG (.svg) support! it even generates thumbnails!
* I think I fixed a comparison statement calculator divide-by-zero error in the duplicate filter when you compare a file with a resolution with a file without one

### petitions overview

* _this is a workflow/usability update only for server janitors_
* tl;dr: the petitions page now fetches many petitions at once. update your servers and clients for it all to work right
* so, the petitions page now fetches lots of petitions with each 'fetch' button click. you can set how many it will fetch with a new number control
* the petitions are shown in a new multi-column list that shows action, account id, reason, and total weight. the actual data for the petitions will load in quickly, reflected in the list. as soon as the first is loaded, it is highlighted, but double-click any to highlight it in the old petition UI as normal
* when you process petitions, the client moves instantly to the next, all fitting into the existing workflow, without having to wait for the server to fetch a new one after you commit
* you can also mass approve/deny from here! if one account is doing great or terrible stuff, you can now blang it all in one go

### petitions details

* the 'fetch x petition' buttons now show `(*)` in their label if they are the active petition type being worked on
* petition pages now remember: the last petition type they were looking at; the number of petitions to fetch; and the number of files to show
* the petition page will pause any ongoing petition fetches if you close it, and resume if you unclose it
* a system where multi-mapping petitions would be broken up and delivered in tags with weight-similar chunks (e.g. it would say 'aaa for 11 files' and 'bbb in 15 files' in the same fetch, but not 'ccc in 542,154 files') is abandoned. this was not well explained and was causing confusion and code complexity. these petitions now appear clientside in full
* another system, where multi-mapping petitions would be delivered in same-namespace chunks, is also abandoned, for similar reasons. it was causing more confusion, especially when compared to the newer petition counting tech I've added. perhaps it will come back in as a clientside filter option
* the list of petitions you are given _should_ also be neatly grouped by account id, so rather than randomly sampling from all petitions, you'll get batches by user x, y, or z, and in most cases you'll be looking at everything by user x, and y, and then z up to the limit of num petitions you chose to fetch
* drawback: since petitions' content can overlap in complicated ways, and janitors can work on the same list at the same time, in edge cases the list you see can be slightly out of sync with what the server actually has. this isn't a big deal, and the worst case is wasted work as you approve the same thing twice. I tried to implement 'refresh list if count drops more than expected' tech, but the situation is complicated and it was spamming too much. I will let you refresh the list with a button click yourself for now, as you like, and please let me know where it works and fails
* drawback: I added some new objects, so you have to update both server and client for this to work. older/newer combinations will give you some harmless errors
* also, if your list starts running low, but there are plenty more petitions to work on, it will auto-refresh. again, it won't interrupt your current work, but it will fetch more. let me know how it works out
* drawback: while the new petition summary list is intentionally lightweight, I do spend some extra CPU figuring it out. with a high 'num petitions to fetch', it may take several seconds for a very busy server like the PTR just to fetch the initial list, so please play around with different fetch sizes and let me know what works well and what is way too slow
* there are still some things I want to do to this page, which I want to slip in the near future. I want to hide/show the sort and 'num files to show' widgets as appropriate, figure out a right-click menu for the new list to retry failures, and get some shortcut support going

### boring code cleanup

* wrote a new petition header object to hold content type, petition status, account id, and reason for petitions
* serverside petition fetching is now split into 'get petition headers' and 'get petition data'. the 'headers' section supports filtering by account id and in future reason
* the clientside petition management UI code pretty much got a full pass
* cleaned a bunch of ancient server db code
* cleaned a bunch of the clientside petition code. it was a real tangle
* improved the resilience of the hydrus server when it is given unacceptable tags in a content update
* all fetches of multiple rows of data from multi-column lists now happen sorted. this is just a little thing, but it'll probably dejank a few operations where you edit several things at once or get some errors and are trying to figure out which of five things caused it
* the hydrus official mimetype for psd files is now 'image/vnd.adobe.photoshop' (instead of 'application/x-photoshop')
* with krita file (which are actually just zip files) support, we now have the very barebones of archive tech started. I'll expand it a bit more and we should be able to improve support for other archive-like formats in the future
@ -19,7 +19,7 @@ A hydrus client consists of three components:

2. **the actual SQLite database**

    The client stores all its preferences and current state and knowledge _about_ files--like file size and resolution, tags, ratings, inbox status, and so on and so on--in a handful of SQLite database files, defaulting to _install_dir/db_. Depending on the size of your client, these might total 1MB in size or be as much as 10GB.
    The client stores all its preferences and current state and knowledge _about_ files--like file size and resolution, tags, ratings, inbox status, and so on and on--in a handful of SQLite database files, defaulting to _install_dir/db_. Depending on the size of your client, these might total 1MB in size or be as much as 10GB.

    In order to perform a search or to fetch or process tags, the client has to interact with these files in many small bursts, which means it is best if these files are on a drive with low latency. An SSD is ideal, but a regularly-defragged HDD with a reasonable amount of free space also works well.
@ -84,10 +84,17 @@ To tell it about the new database location, pass it a `-d` or `--db_dir` command

* `hydrus_client -d="D:\media\my_hydrus_database"`
* _--or--_
* `hydrus_client --db_dir="G:\misc documents\New Folder (3)\DO NOT ENTER"`
* _--or, from source--_
* `python hydrus_client.py -d="D:\media\my_hydrus_database"`
* _--or, for macOS--_
* `open -n -a "Hydrus Network.app" --args -d="/path/to/db"`

And it will instead use the given path. If no database is found, it will similarly create a new empty one at that location. You can use any path that is valid in your system, but I would not advise using network locations and so on, as the database works best with some clever device locking calls these interfaces may not provide.
And it will instead use the given path. If no database is found, it will similarly create a new empty one at that location. You can use any path that is valid in your system.

!!! danger "Bad Locations"
    **Do not run a SQLite database on a network location!** The database relies on clever hardware-level exclusive file locks, which network interfaces often fake. While the program may work, I cannot guarantee the database will stay non-corrupt.

    **Do not run a SQLite database on a location with filesystem-level compression enabled!** In the best case (BTRFS), the database can suddenly get extremely slow when it hits a certain size; in the worst (NTFS), a >50GB database will encounter I/O errors and receive sporadic corruption!

Rather than typing the path out in a terminal every time you want to launch your external database, create a new shortcut with the argument in. Something like this:
@ -1536,6 +1536,7 @@ Response:

"ipfs_multihashes" : {},
"has_audio" : false,
"blurhash" : "U6PZfSi_.AyE_3t7t7R**0o#DgR4_3R*D%xt",
"pixel_hash" : "2519e40f8105599fcb26187d39656b1b46f651786d0e32fff2dc5a9bc277b5bb",
"num_frames" : null,
"num_words" : null,
"is_inbox" : false,

@ -1608,6 +1609,7 @@ Response:

},
"has_audio" : true,
"blurhash" : "UHF5?xYk^6#M@-5b,1J5@[or[k6.};FxngOZ",
"pixel_hash" : "1dd9625ce589eee05c22798a9a201602288a1667c59e5cd1fb2251a6261fbd68",
"num_frames" : 102,
"num_words" : null,
"is_inbox" : false,

@ -1728,7 +1730,7 @@ Size is in bytes. Duration is in milliseconds, and may be an int or a float.

The `thumbnail_width` and `thumbnail_height` are a generally reliable prediction but aren't a promise. The actual thumbnail you get from [/get\_files/thumbnail](#get_files_thumbnail) will be different if the user hasn't looked at it since changing their thumbnail options. You only get these rows for files that hydrus actually generates a thumbnail for. Things like pdf won't have it. You can use your own thumb, or ask the api and it'll give you a fixed fallback; those are mostly 200x200, but you can and should size them to whatever you want.

`blurhash` gives a base 83 encoded string of a [blurhash](https://blurha.sh/) generated from the file's thumbnail if the file has a thumbnail.
If the file has a thumbnail, `blurhash` gives a base 83 encoded string of its [blurhash](https://blurha.sh/). `pixel_hash` is an SHA256 of the image's pixel data and should exactly match for pixel-identical files (it is used in the duplicate system for 'must be pixel duplicates').
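To illustrate the `pixel_hash` idea (the exact byte layout hydrus feeds into SHA256 is internal to hydrus, so this sketch is only an analogy): hashing decoded pixel values means two files with different compression but identical pixels produce the same digest, even though their normal file hashes differ.

```python
import hashlib

def toy_pixel_hash(pixels):
    # pixels: a list of (r, g, b) tuples, i.e. decoded image data,
    # not the on-disk file bytes. hydrus's real byte layout is internal;
    # this just shows the 'hash the pixels, not the file' principle.
    digest = hashlib.sha256()
    for r, g, b in pixels:
        digest.update(bytes((r, g, b)))
    return digest.hexdigest()

# the same pixels always give the same digest, regardless of how the
# file that produced them was encoded
a = toy_pixel_hash([(255, 0, 0), (0, 255, 0)])
b = toy_pixel_hash([(255, 0, 0), (0, 255, 0)])
```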
#### tags

@ -72,8 +72,10 @@ I try to release a new version every Wednesday by 8pm EST and write an accompany

By default, hydrus stores all its data—options, files, subscriptions, _everything_—entirely inside its own directory. You can extract it to a usb stick, move it from one place to another, have multiple installs for multiple purposes, wrap it all up inside a truecrypt volume, whatever you like. The .exe installer writes some unavoidable uninstall registry stuff to Windows, but the 'installed' client itself will run fine if you manually move it.

!!! warning "Network Install"
    Unless you are an expert, do not install your client to a network location (i.e. on a different computer's hard drive)! The database is sensitive to interruption and requires good file locking, which network storage often fakes. There are [ways of splitting your client up](database_migration.md) so the database is on a local SSD but the files are on a network--this is fine--but you really should not put the database on a remote machine unless you know what you are doing and have a backup in case things go wrong.
!!! danger "Bad Locations"
    **Do not install to a network location!** (i.e. on a different computer's hard drive) The SQLite database is sensitive to interruption and requires good file locking, which network interfaces often fake. There are [ways of splitting your client up](database_migration.md) so the database is on a local SSD but the files are on a network--this is fine--but you really should not put the database on a remote machine unless you know what you are doing and have a backup in case things go wrong.

    **Do not install to a location with filesystem-level compression enabled!** It may work ok to start, but when the SQLite database grows to a large size, this can cause extreme access latency and I/O errors and corruption.

!!! info "For macOS users"
    The Hydrus App is **non-portable** and puts your database in `~/Library/Hydrus` (i.e. `/Users/[You]/Library/Hydrus`). You can update simply by replacing the old App with the new, but if you wish to backup, you should be looking at `~/Library/Hydrus`, not the App itself.
@ -34,6 +34,48 @@

<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_545"><a href="#version_545">version 545</a></h2>
<ul>
<li><h3>blurhash</h3></li>
<li>thanks to a user's work, hydrus now calculates the [blurhash](https://blurha.sh/) of files with a thumbnail! (issue #394)</li>
<li>if a file has no thumbnail but does have a blurhash (e.g. missing files, or files you previously deleted and are looking at in a clever view), it now presents a thumbnail generated from that blurhash</li>
<li>all existing thumbnail-having files are scheduled for a blurhash calculation (this is a new job in the file maintenance system). if you have hundreds of thousands of files, expect it to take a couple of weeks/months to clear. if you need to hurry this along, the queue is under _database->file maintenance_</li>
<li>any time a file's thumbnail changes, the blurhash is scheduled for a regen</li>
<li>for this first version, the blurhash is very small and simple, either 15 or 16 cells for ~34 bytes. if we end up using it a lot somewhere, I'd be open to making a size setting so you can set 8x8 or higher grids for actually decent blur-thumbs</li>
<li>a new _help->debug_ report mode switches to blurhashes instead of normal thumbs</li>
<li><h3>file history search</h3></li>
<li>I did to the file history chart (_help->view file history_) what I did to mr bones a couple weeks ago. you can now search your history of imports, archives, and deletes for creator x, filetype y, or any other search you can think of</li>
<li>I hacked this all together right at the end of my week, so please bear with me if there are bugs or dumb permitted domains/results. the default action when you first open it up should all work the same way as before, no worries™, but let me know how you get on and I'll fix it!</li>
<li>there's more to do here. we'll want a hideable search panel, a widget to control the resolution of the chart (currently fixed at 7680 to look good blown up on a 4k), and it'd be nice to have a selectable date range</li>
<li>in the longer term future, it'd be nice to have more lines of data and that chart tech you see on financial sites where it shows you the current value where your mouse is</li>
<li><h3>client api</h3></li>
<li>the `file_metadata` call now says the new blurhash. if you pipe it into a blurhash library and blow it up to an appropriate ratio canvas, it _should_ just work. the typical use is as a placeholder while you wait for thumbs/files to download</li>
<li>a new `include_blurhash` parameter will include the blurhash when `only_return_basic_information` is true</li>
<li>`file_metadata` also shows the file's `pixel_hash` now. the algorithm here is proprietary to hydrus, but you can throw it into 'system:similar files' to find pixel dupes. I expect to add perceptual hashes too</li>
<li>the help is updated to talk about this</li>
<li>I updated the unit tests to deal with this</li>
<li>the error when the api fails to parse the client api header is now a properly handled 400 (previously it was falling to the 500 backstop)</li>
<li>the client api version is now 53</li>
<li><h3>misc</h3></li>
<li>I'm sorry to say I'm removing the Deviant Art artist search and login script for all new users, since they are both broken. DA have been killing their nice old API in pieces, and they finally took down the old artist gallery fetch. :(. there may be a way to finagle-parse their new phone-friendly, live-loading, cloud-deployed engine, but when I look at it, it seems like a much bigger mess than hydrus's parsing system can happily handle atm. the 'correct' way to programmatically parse DA is through their new OAuth API, which we simply do not support. individual page URLs seem to still work, but I expect them to go soon too. Sorry folks, try gallery-dl for now--they have a robust OAuth solution</li>
<li>thanks to a user, we now have 'epub' ebook support! no 'num_words' support yet, but it looks like epubs are really just zips with some weird metadata files and a bunch of html inside, so I think this'll be doable with a future hacky parser. all your existing zip files will be scheduled for a metadata rescan to see if they are actually epubs (this'll capture any secret kritas and procreates, too, I think)</li>
<li>the main UI-level media object is now aware of a file's pixel hash. this is now used in the duplicate filter's 'these are pixel duplicates' statements to save CPU. the jank old on-the-fly calculation code is all removed now, and if these values are missing from the media object, a message will now be shown saying the pixel dupe status could not be determined. we have had multiple rounds of regen over the past year and thus almost all clients have full database data here, so fingers crossed we won't see this error state much if at all, but let me know if you do and I'll figure out a button to accelerate the fix</li>
<li>the thumbnail _right-click->open->similar files_ menu now has an entry for 'open the selection in a new duplicate filter page', letting you quickly resolve the duplicates that involve the selected files</li>
<li>pixel hash and blurhash are now listed, with the actual hash value, in the _share->copy->hash_ thumbnail right-click menu</li>
<li>thanks to a user, 'MPO' jpegs (some weird multi-picture jpeg that we can't page through yet) now parse their EXIF correctly and should rotate on a metadata-reparse. since these are rare, I'm not going to schedule a rescan over everyone's jpegs, but if you see a jpeg that is rotated wrong, try hitting _manage->regenerate->file metadata_ on its thumbnail menu</li>
<li>I may have fixed a rare hang when highlighting a downloader/watcher during very busy network time that involves that importer</li>
<li>added a warning to the 'getting started with installing' and 'database migration' help about running the SQLite database off a compressed filesystem--don't do it!</li>
<li>fixed thumbnail generation for greyscale PSDs (and perhaps some others)</li>
<li><h3>boring cleanup</h3></li>
<li>I cleaned some code and added some tests around the new blurhash tech and thumbs in general</li>
<li>a variety of metadata changes such as 'has exif', 'has icc profile' now trigger a live update on thumbnails currently loaded into the UI</li>
<li>cleaned up some old file metadata loading code</li>
<li>re-sorted the job list dropdown in the file maintenance dialog</li>
<li>some file maintenance database work should be a bit faster</li>
<li>fixed some behind the scenes stuff when the file history chart has no file info to show</li>
</ul>
</li>
<li>
<h2 id="version_544"><a href="#version_544">version 544</a></h2>
<ul>
@ -46,12 +88,12 @@

<li>the `migrate database` dialog now allows you to set a 'max size' for all but one of your media locations. if you have a 500GB drive you want to store some stuff on, you no longer have to balance the weights in your head--just set a max size of 450GB and hydrus will figure it out for you. it is not super precise (and it isn't healthy to fill drives up to 98% anyway), so make sure you leave some padding</li>
<li>also, please note that this will not automatically rebalance _yet_. right now, the only way files move between locations is through the 'move files now' button on the dialog, so if you have a location that is full up according to its max size rule and then spend a month importing many files, it will go over its limit until and unless you revisit 'migrate database' and move files again. I hope to have automatic background rebalancing in the near future</li>
<li>updated the 'database migration' help to talk about this and added a new migration example</li>
<li>the 'edit num bytes' widget now supports terabytes (TB)</li>
<li>I fleshed out the logic and fixed several bugs in the migration code, mostly to do with the new max size stuff and distributing weights appropriately in various situations</li>
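hydrus's actual balancing code is internal, but the 'spread by weight, capped at max size' idea the dialog describes can be sketched like this. Everything here is hypothetical and illustrative, not the real algorithm:

```python
def allocate(total_bytes, locations):
    # locations: name -> (weight, max_bytes or None)
    # returns name -> bytes, giving each location its weighted share
    # but never exceeding its max; overflow is redistributed
    alloc = {name: 0 for name in locations}
    active = dict(locations)
    remaining = total_bytes
    while remaining > 0 and active:
        total_weight = sum(w for w, _ in active.values())
        share = {n: remaining * w / total_weight for n, (w, _) in active.items()}
        capped = {n for n, (w, m) in active.items()
                  if m is not None and alloc[n] + share[n] > m}
        if not capped:
            for n in share:
                alloc[n] += share[n]
            remaining = 0
        else:
            # fill capped locations to their max, then loop to
            # redistribute what is left over the rest
            for n in capped:
                w, m = active.pop(n)
                remaining -= m - alloc[n]
                alloc[n] = m
    return alloc

# a 1000-unit collection over a capped drive 'a' and an uncapped 'b':
result = allocate(1000, {'a': (1, 300), 'b': (1, None)})
```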
<li><h3>misc</h3></li>
<li>when an image file fails to render in the media viewer, it now draws a bordered box with a brief 'failed to render' note. previously, it janked with a second of lag, made some popups, and left the display on an eternal blank hang. now it finishes its job cleanly and returns a 'nah m8' 'image' result</li>
<li>I reworked the Mr Bones layout a bit. the search is now on the left, and the rows of the main count table are separated for readability</li>
<li>it turns out that bitmap (.bmp) files can support ICC Profiles, so I've told hydrus to look for them in new bitmaps and retroactively scan all your existing ones</li>
<li>the 'edit num bytes' widget now supports terabytes (TB)</li>
<li>fixed an issue with the recent PSD code updates that was breaking boot for clients running from source without the psd-tools library (this affected the Docker build)</li>
<li>updated all the 'setup_venv' scripts. all the formatting and text has had a pass, and there is now a question on (n)ew or (old) Pillow</li>
<li>to stop FFMPEG's false positives where it can think a txt file is an mpeg, the main hydrus filetype scanning routine will no longer send files with common text extensions to ffmpeg. if you do have an mp3 called music.txt, rename it before import!</li>
@ -21,7 +21,6 @@ from hydrus.client.media import ClientMediaFileFilter
|
|||
from hydrus.client.metadata import ClientTags
|
||||
|
||||
hashes_to_jpeg_quality = {}
|
||||
hashes_to_pixel_hashes = {}
|
||||
|
||||
def GetDuplicateComparisonScore( shown_media, comparison_media ):
|
||||
|
||||
|
@@ -72,39 +71,30 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
    if s_mime in HC.FILES_THAT_CAN_HAVE_PIXEL_HASH and c_mime in HC.FILES_THAT_CAN_HAVE_PIXEL_HASH and shown_media.GetResolution() == comparison_media.GetResolution():
        
-        global hashes_to_pixel_hashes
+        s_pixel_hash = shown_media.GetFileInfoManager().pixel_hash
+        c_pixel_hash = comparison_media.GetFileInfoManager().pixel_hash
        
-        if s_hash not in hashes_to_pixel_hashes:
+        if s_pixel_hash is None or c_pixel_hash is None:
            
-            path = HG.client_controller.client_files_manager.GetFilePath( s_hash, s_mime )
+            statement = 'could not determine if files were pixel-for-pixel duplicates!'
+            score = 0
            
-            hashes_to_pixel_hashes[ s_hash ] = HydrusImageHandling.GetImagePixelHash( path, s_mime )
+            statements_and_scores[ 'pixel_duplicates' ] = ( statement, score )
            
-        if c_hash not in hashes_to_pixel_hashes:
+        elif s_pixel_hash == c_pixel_hash:
            
-            path = HG.client_controller.client_files_manager.GetFilePath( c_hash, c_mime )
-            
-            hashes_to_pixel_hashes[ c_hash ] = HydrusImageHandling.GetImagePixelHash( path, c_mime )
-            
-        s_pixel_hash = hashes_to_pixel_hashes[ s_hash ]
-        c_pixel_hash = hashes_to_pixel_hashes[ c_hash ]
-        
-        # this is not appropriate for, say, PSD files
-        other_file_is_pixel_png_appropriate_filetypes = {
-            HC.IMAGE_JPEG,
-            HC.IMAGE_GIF,
-            HC.IMAGE_WEBP
-        }
-        
-        if s_pixel_hash == c_pixel_hash:
-            
+            # this is not appropriate for, say, PSD files
+            other_file_is_pixel_png_appropriate_filetypes = {
+                HC.IMAGE_JPEG,
+                HC.IMAGE_GIF,
+                HC.IMAGE_WEBP
+            }
+            
            is_a_pixel_dupe = True
            
            if s_mime == HC.IMAGE_PNG and c_mime in other_file_is_pixel_png_appropriate_filetypes:
                
-                statement = 'this is a pixel-for-pixel duplicate png!'
+                statement = 'this is a pixel-for-pixel duplicate png! it is a waste of space!'
                
                score = -100
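The rewritten branch above reads precomputed pixel hashes off each file's `FileInfoManager` rather than loading and hashing file contents on demand. As a rough illustration of what a pixel hash buys — a toy sketch, not hydrus's `HydrusImageHandling.GetImagePixelHash` — hashing the *decoded* pixel rows means two files with different encodings but identical pixels compare equal:

```python
import hashlib

def pixel_hash_from_rows(pixel_rows):
    """Hash raw decoded pixel rows, so two files with identical pixels
    (e.g. a png and a lossless webp of the same image) hash the same,
    regardless of how the bytes on disk were encoded."""
    h = hashlib.sha256()
    for row in pixel_rows:
        h.update(bytes(row))
    return h.digest()

# identical pixels -> identical hash; one channel off -> different hash
a = pixel_hash_from_rows([[255, 0, 0], [0, 255, 0]])
b = pixel_hash_from_rows([[255, 0, 0], [0, 255, 0]])
c = pixel_hash_from_rows([[255, 0, 0], [0, 254, 0]])
assert a == b and a != c
```

Precomputing these once per file (the new `REGENERATE_FILE_DATA_JOB_PIXEL_HASH` maintenance job) is what lets the comparison above become a cheap equality check.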
@@ -132,7 +132,7 @@ All missing/Incorrect files will also have their hashes, tags, and URLs exported
    REGENERATE_FILE_DATA_JOB_FILE_HAS_HUMAN_READABLE_EMBEDDED_METADATA : 'This loads the file to see if it has non-EXIF human-readable metadata, which can be shown in the media viewer and searched with "system:image has human-readable embedded metadata".',
    REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE : 'This loads the file to see if it has an ICC profile, which is used in "system:has icc profile" search.',
    REGENERATE_FILE_DATA_JOB_PIXEL_HASH : 'This generates a fast unique identifier for the pixels in a still image, which is used in duplicate pixel searches.',
-    REGENERATE_FILE_DATA_JOB_BLURHASH: 'This generates a very small version of the file\'s thumbnail that can be used as a placeholder while the thumbnail loads'
+    REGENERATE_FILE_DATA_JOB_BLURHASH: 'This generates a very small version of the file\'s thumbnail that can be used as a placeholder while the thumbnail loads.'
}

NORMALISED_BIG_JOB_WEIGHT = 100
@@ -160,7 +160,7 @@ regen_file_enum_to_job_weight_lookup = {
    REGENERATE_FILE_DATA_JOB_FILE_HAS_HUMAN_READABLE_EMBEDDED_METADATA : 25,
    REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE : 25,
    REGENERATE_FILE_DATA_JOB_PIXEL_HASH : 100,
-    REGENERATE_FILE_DATA_JOB_BLURHASH: 25
+    REGENERATE_FILE_DATA_JOB_BLURHASH: 15
}

regen_file_enum_to_overruled_jobs = {
@@ -189,7 +189,7 @@ regen_file_enum_to_overruled_jobs = {
    REGENERATE_FILE_DATA_JOB_BLURHASH: []
}

-ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [
+ALL_REGEN_JOBS_IN_RUN_ORDER = [
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD,
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL,
    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD,
@@ -215,6 +215,32 @@ ALL_REGEN_JOBS_IN_PREFERRED_ORDER = [
    REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES
]

+ALL_REGEN_JOBS_IN_HUMAN_ORDER = [
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_DELETE_RECORD,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_REMOVE_RECORD,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_SILENT_DELETE,
+    REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_LOG_ONLY,
+    REGENERATE_FILE_DATA_JOB_FILE_METADATA,
+    REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL,
+    REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL,
+    REGENERATE_FILE_DATA_JOB_BLURHASH,
+    REGENERATE_FILE_DATA_JOB_PIXEL_HASH,
+    REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA,
+    REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP,
+    REGENERATE_FILE_DATA_JOB_OTHER_HASHES,
+    REGENERATE_FILE_DATA_JOB_CHECK_SIMILAR_FILES_MEMBERSHIP,
+    REGENERATE_FILE_DATA_JOB_FILE_HAS_EXIF,
+    REGENERATE_FILE_DATA_JOB_FILE_HAS_HUMAN_READABLE_EMBEDDED_METADATA,
+    REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE,
+    REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS,
+    REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES
+]
+
def GetAllFilePaths( raw_paths, do_human_sort = True, clear_out_sidecars = True ):
    
    file_paths = []
@@ -1733,30 +1759,6 @@ class ClientFilesManager( object ):
        
        return do_it
        
-    def RegenerateImageBlurHash( self, media ):
-        
-        hash = media.GetHash()
-        mime = media.GetMime()
-        
-        if mime not in HC.MIMES_WITH_THUMBNAILS:
-            
-            return None
-            
-        try:
-            
-            thumbnail_path = self._GenerateExpectedThumbnailPath( hash )
-            
-            thumbnail_mime = HydrusFileHandling.GetThumbnailMime( thumbnail_path )
-            
-            numpy_image = ClientImageHandling.GenerateNumPyImage( thumbnail_path, thumbnail_mime )
-            
-            return HydrusImageHandling.GetImageBlurHashNumPy( numpy_image )
-            
-        except:
-            
-            return None
-            
    def Reinit( self ):
        
        # this is still useful to hit on ideals changing, since subfolders bring the weight and stuff of those settings. we'd rather it was generally synced
@@ -2427,7 +2429,7 @@ class FilesMaintenanceManager( object ):
        
        except HydrusExceptions.FileMissingException:
            
-            pass
+            return False
@@ -2470,9 +2472,36 @@ class FilesMaintenanceManager( object ):
        
        return None
        
-    def _RegenBlurHash( self, media ):
-        
-        return self._controller.client_files_manager.RegenerateImageBlurHash( media )
-        
+    def _RegenBlurhash( self, media ):
+        
+        if media.GetMime() not in HC.MIMES_WITH_THUMBNAILS:
+            
+            return None
+            
+        try:
+            
+            thumbnail_path = self._controller.client_files_manager.GetThumbnailPath( media )
+            
+        except HydrusExceptions.FileMissingException as e:
+            
+            return None
+            
+        try:
+            
+            thumbnail_mime = HydrusFileHandling.GetThumbnailMime( thumbnail_path )
+            
+            numpy_image = ClientImageHandling.GenerateNumPyImage( thumbnail_path, thumbnail_mime )
+            
+            return HydrusImageHandling.GetBlurhashFromNumPy( numpy_image )
+            
+        except:
+            
+            return None
+            
    def _RegenSimilarFilesMetadata( self, media_result ):
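The regenerated job above feeds a thumbnail's numpy array into `HydrusImageHandling.GetBlurhashFromNumPy`. Real blurhash stores a handful of DCT components base83-encoded in roughly 30 bytes; as a toy illustration of the same size/fidelity trade-off (a grid-average placeholder, *not* the actual blurhash encoder):

```python
def tiny_blur_placeholder(pixels, w, h, gx=4, gy=3):
    """Toy stand-in for a blurhash encoder: average a w x h RGB image
    into a gx x gy grid of cells. Real blurhash stores cosine-transform
    components in base83 instead, but the idea is the same - a ~30 byte
    summary that can be blown back up into a soft placeholder."""
    cells = []
    for cy in range(gy):
        for cx in range(gx):
            rs = gs = bs = n = 0
            for y in range(cy * h // gy, (cy + 1) * h // gy):
                for x in range(cx * w // gx, (cx + 1) * w // gx):
                    r, g, b = pixels[y * w + x]
                    rs += r; gs += g; bs += b; n += 1
            cells.append((rs // n, gs // n, bs // n))
    return cells

# a 4x3 grid over an 8x6 solid grey image averages to 12 grey cells
pix = [(128, 128, 128)] * (8 * 6)
cells = tiny_blur_placeholder(pix, 8, 6)
assert len(cells) == 12 and cells[0] == (128, 128, 128)
```

A 4x3 grid matches the "15 or 16 cells for ~34 bytes" figure quoted in the v545 changelog for the first version of this feature.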
@@ -2599,7 +2628,7 @@ class FilesMaintenanceManager( object ):
        
        elif job_type == REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL:
            
            was_regenerated = self._RegenFileThumbnailRefit( media_result )
            
            additional_data = was_regenerated
            
            if was_regenerated:
@@ -2631,10 +2660,10 @@ class FilesMaintenanceManager( object ):
        
        elif job_type == REGENERATE_FILE_DATA_JOB_FIX_PERMISSIONS:
            
            self._FixFilePermissions( media_result )
            
        elif job_type == REGENERATE_FILE_DATA_JOB_BLURHASH:
            
-            additional_data = self._RegenBlurHash( media_result )
+            additional_data = self._RegenBlurhash( media_result )
            
        elif job_type in (
            REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_REMOVE_RECORD,
@@ -456,16 +456,51 @@ class ThumbnailCache( object ):
        
        self._controller.sub( self, 'NotifyNewOptions', 'notify_new_options' )
        
+    def _GetBestRecoveryThumbnailHydrusBitmap( self, display_media ):
+        
+        blurhash = display_media.GetFileInfoManager().blurhash
+        
+        if blurhash is not None:
+            
+            try:
+                
+                ( media_width, media_height ) = display_media.GetResolution()
+                
+                bounding_dimensions = self._controller.options[ 'thumbnail_dimensions' ]
+                thumbnail_scale_type = self._controller.new_options.GetInteger( 'thumbnail_scale_type' )
+                thumbnail_dpr_percent = HG.client_controller.new_options.GetInteger( 'thumbnail_dpr_percent' )
+                
+                ( clip_rect, ( expected_width, expected_height ) ) = HydrusImageHandling.GetThumbnailResolutionAndClipRegion( ( media_width, media_height ), bounding_dimensions, thumbnail_scale_type, thumbnail_dpr_percent )
+                
+                numpy_image = HydrusImageHandling.GetNumpyFromBlurhash( blurhash, expected_width, expected_height )
+                
+                hydrus_bitmap = ClientRendering.GenerateHydrusBitmapFromNumPyImage( numpy_image )
+                
+                return hydrus_bitmap
+                
+            except:
+                
+                pass
+                
+        return self._special_thumbs[ 'hydrus' ]
+        
    def _GetThumbnailHydrusBitmap( self, display_media ):
        
+        if HG.blurhash_mode:
+            
+            return self._GetBestRecoveryThumbnailHydrusBitmap( display_media )
+            
        hash = display_media.GetHash()
        mime = display_media.GetMime()
        
        locations_manager = display_media.GetLocationsManager()
        
        try:
            
-            path = self._controller.client_files_manager.GetThumbnailPath( display_media )
+            thumbnail_path = self._controller.client_files_manager.GetThumbnailPath( display_media )
            
        except HydrusExceptions.FileMissingException as e:
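`GetNumpyFromBlurhash` above decodes the stored hash at the expected thumbnail resolution, so a missing-file placeholder matches the real thumbnail's size. A toy sketch of that decode direction (nearest-neighbour cell expansion; real blurhash decoding evaluates cosine basis functions per pixel, which is what produces the smooth blur):

```python
def render_placeholder(cells, gx, gy, out_w, out_h):
    """Expand a gx x gy cell grid back to a full-size RGB placeholder.
    Nearest-neighbour here for brevity; a real blurhash decoder sums
    weighted cosine terms per output pixel, giving smooth gradients."""
    out = []
    for y in range(out_h):
        cy = y * gy // out_h
        for x in range(out_w):
            cx = x * gx // out_w
            out.append(cells[cy * gx + cx])
    return out

# 12 cells expanded to an 8x6 image: bottom-right cell fills bottom-right
img = render_placeholder([(0, 0, 0)] * 11 + [(255, 255, 255)], 4, 3, 8, 6)
assert len(img) == 48
assert img[0] == (0, 0, 0) and img[-1] == (255, 255, 255)
```

This is why the fallback works for deleted files too: the decode needs only the stored string and a target resolution, never the file itself.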
@@ -476,16 +511,16 @@ class ThumbnailCache( object ):
        
            self._HandleThumbnailException( hash, e, summary )
            
-            return self._special_thumbs[ 'hydrus' ]
+            return self._GetBestRecoveryThumbnailHydrusBitmap( display_media )
            
        thumbnail_mime = HC.IMAGE_JPEG
        
        try:
            
-            thumbnail_mime = HydrusFileHandling.GetThumbnailMime( path )
+            thumbnail_mime = HydrusFileHandling.GetThumbnailMime( thumbnail_path )
            
-            numpy_image = ClientImageHandling.GenerateNumPyImage( path, thumbnail_mime )
+            numpy_image = ClientImageHandling.GenerateNumPyImage( thumbnail_path, thumbnail_mime )
            
        except Exception as e:
@@ -500,12 +535,12 @@ class ThumbnailCache( object ):
        
            self._HandleThumbnailException( hash, e, summary )
            
-            return self._special_thumbs[ 'hydrus' ]
+            return self._GetBestRecoveryThumbnailHydrusBitmap( display_media )
            
        try:
            
-            numpy_image = ClientImageHandling.GenerateNumPyImage( path, thumbnail_mime )
+            numpy_image = ClientImageHandling.GenerateNumPyImage( thumbnail_path, thumbnail_mime )
            
        except Exception as e:
@@ -513,7 +548,7 @@ class ThumbnailCache( object ):
        
            self._HandleThumbnailException( hash, e, summary )
            
-            return self._special_thumbs[ 'hydrus' ]
+            return self._GetBestRecoveryThumbnailHydrusBitmap( display_media )
@@ -710,7 +745,11 @@ class ThumbnailCache( object ):
        
        locations_manager = media.GetLocationsManager()
        
-        return locations_manager.IsLocal() or not locations_manager.GetCurrent().isdisjoint( HG.client_controller.services_manager.GetServiceKeys( ( HC.FILE_REPOSITORY, ) ) )
+        we_have_file = locations_manager.IsLocal()
+        we_should_have_thumb = not locations_manager.GetCurrent().isdisjoint( HG.client_controller.services_manager.GetServiceKeys( ( HC.FILE_REPOSITORY, ) ) )
+        we_have_blurhash = media.GetFileInfoManager().blurhash is not None
+        
+        return we_have_file or we_should_have_thumb or we_have_blurhash
        
    def CancelWaterfall( self, page_key: bytes, medias: list ):
@@ -2511,7 +2511,7 @@ class DB( HydrusDB.HydrusDB ):
        
        if do_not_need_to_search:
            
-            files_table_name = db_location_context.GetSingleFilesTableName()
+            current_files_table_name = db_location_context.GetSingleFilesTableName()
            
        else:
@@ -2528,7 +2528,7 @@ class DB( HydrusDB.HydrusDB ):
        
        self._ExecuteMany( f'INSERT OR IGNORE INTO {temp_table_name} ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in hash_ids ) )
        
-            files_table_name = temp_table_name
+            current_files_table_name = temp_table_name
            
        hacks_going_to_work = location_context.IsOneDomain()
@@ -2580,8 +2580,7 @@ class DB( HydrusDB.HydrusDB ):
        
        return self._GetBonedStatsFromTable(
            db_location_context,
-            files_table_name,
+            current_files_table_name,
            current_timestamps_table_name,
            deleted_files_table_name,
            deleted_timestamps_table_name,
@@ -2593,8 +2592,7 @@ class DB( HydrusDB.HydrusDB ):
        
    def _GetBonedStatsFromTable(
        self,
        db_location_context: ClientDBFilesStorage.DBLocationContext,
-        files_table_name: str,
+        current_files_table_name: str,
        current_timestamps_table_name: typing.Optional[ str ],
        deleted_files_table_name: typing.Optional[ str ],
        deleted_timestamps_table_name: typing.Optional[ str ],
@@ -2608,8 +2606,8 @@ class DB( HydrusDB.HydrusDB ):
        
        boned_stats = {}
        
-        ( num_total, size_total ) = self._Execute( f'SELECT COUNT( hash_id ), SUM( size ) FROM {files_table_name} CROSS JOIN files_info USING ( hash_id );' ).fetchone()
-        ( num_inbox, size_inbox ) = self._Execute( f'SELECT COUNT( hash_id ), SUM( size ) FROM {files_table_name} CROSS JOIN file_inbox USING ( hash_id ) CROSS JOIN files_info USING ( hash_id );' ).fetchone()
+        ( num_total, size_total ) = self._Execute( f'SELECT COUNT( hash_id ), SUM( size ) FROM {current_files_table_name} CROSS JOIN files_info USING ( hash_id );' ).fetchone()
+        ( num_inbox, size_inbox ) = self._Execute( f'SELECT COUNT( hash_id ), SUM( size ) FROM {current_files_table_name} CROSS JOIN file_inbox USING ( hash_id ) CROSS JOIN files_info USING ( hash_id );' ).fetchone()
        
        if size_total is None:
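The stats queries above use `CROSS JOIN ... USING ( hash_id )`, which in SQLite acts as an inner join while pinning the (usually small) files table to the outer loop of the query plan. A minimal standalone sketch of the same pattern, using a toy schema rather than hydrus's real tables:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE current_files ( hash_id INTEGER PRIMARY KEY );')
con.execute('CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER );')
con.executemany('INSERT INTO current_files VALUES ( ? );', [(1,), (2,)])
con.executemany('INSERT INTO files_info VALUES ( ?, ? );', [(1, 100), (2, 250), (3, 999)])

# CROSS JOIN ... USING ( hash_id ) joins on hash_id but tells SQLite's
# planner not to reorder the tables: scan current_files, probe files_info
(num_total, size_total) = con.execute(
    'SELECT COUNT( hash_id ), SUM( size ) '
    'FROM current_files CROSS JOIN files_info USING ( hash_id );'
).fetchone()

# only hash_ids 1 and 2 are "current", so file 3's size is excluded
assert (num_total, size_total) == (2, 350)
```

In hydrus the current-files side may be a search-result temp table instead of a service table, which is exactly why the rename to `current_files_table_name` in this hunk matters for readability.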
@@ -2653,13 +2651,13 @@ class DB( HydrusDB.HydrusDB ):
        
        if current_timestamps_table_name is not None:
            
-            if files_table_name != current_timestamps_table_name:
+            if current_files_table_name != current_timestamps_table_name:
                
-                table_join = f'{files_table_name} CROSS JOIN {current_timestamps_table_name} USING ( hash_id )'
+                table_join = f'{current_files_table_name} CROSS JOIN {current_timestamps_table_name} USING ( hash_id )'
                
            else:
                
-                table_join = files_table_name
+                table_join = current_files_table_name
                
            result = self._Execute( f'SELECT MIN( timestamp ) FROM {table_join};' ).fetchone()
@@ -2713,7 +2711,7 @@ class DB( HydrusDB.HydrusDB ):
        
        #
        
-        canvas_types_to_total_viewtimes = { canvas_type : ( views, viewtime ) for ( canvas_type, views, viewtime ) in self._Execute( f'SELECT canvas_type, SUM( views ), SUM( viewtime ) FROM {files_table_name} CROSS JOIN file_viewing_stats USING ( hash_id ) GROUP BY canvas_type;' ) }
+        canvas_types_to_total_viewtimes = { canvas_type : ( views, viewtime ) for ( canvas_type, views, viewtime ) in self._Execute( f'SELECT canvas_type, SUM( views ), SUM( viewtime ) FROM {current_files_table_name} CROSS JOIN file_viewing_stats USING ( hash_id ) GROUP BY canvas_type;' ) }
        
        if deleted_files_table_name is not None:
@@ -2740,7 +2738,7 @@ class DB( HydrusDB.HydrusDB ):
        
        #
        
-        total_alternate_files = sum( ( count for ( alternates_group_id, count ) in self._Execute( f'SELECT alternates_group_id, COUNT( * ) FROM {files_table_name} CROSS JOIN duplicate_file_members USING ( hash_id ) CROSS JOIN alternate_file_group_members USING ( media_id ) GROUP BY alternates_group_id;' ) if count > 1 ) )
+        total_alternate_files = sum( ( count for ( alternates_group_id, count ) in self._Execute( f'SELECT alternates_group_id, COUNT( * ) FROM {current_files_table_name} CROSS JOIN duplicate_file_members USING ( hash_id ) CROSS JOIN alternate_file_group_members USING ( media_id ) GROUP BY alternates_group_id;' ) if count > 1 ) )
        
        boned_stats[ 'total_alternate_files' ] = total_alternate_files
@@ -2749,7 +2747,7 @@ class DB( HydrusDB.HydrusDB ):
        
            return boned_stats
            
-        total_duplicate_files = sum( ( count for ( media_id, count ) in self._Execute( f'SELECT media_id, COUNT( * ) FROM {files_table_name} CROSS JOIN duplicate_file_members USING ( hash_id ) GROUP BY media_id;' ) if count > 1 ) )
+        total_duplicate_files = sum( ( count for ( media_id, count ) in self._Execute( f'SELECT media_id, COUNT( * ) FROM {current_files_table_name} CROSS JOIN duplicate_file_members USING ( hash_id ) GROUP BY media_id;' ) if count > 1 ) )
        
        boned_stats[ 'total_duplicate_files' ] = total_duplicate_files
@@ -2761,7 +2759,7 @@ class DB( HydrusDB.HydrusDB ):
        
            return boned_stats
            
        # TODO: fix this, it takes ages sometimes IRL
-        table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, files_table_name, CC.SIMILAR_FILES_PIXEL_DUPES_ALLOWED, max_hamming_distance = 8 )
+        table_join = self.modules_files_duplicates.GetPotentialDuplicatePairsTableJoinOnSearchResults( db_location_context, current_files_table_name, CC.SIMILAR_FILES_PIXEL_DUPES_ALLOWED, max_hamming_distance = 8 )
        
        ( total_potential_pairs, ) = self._Execute( f'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {table_join} );' ).fetchone()
@@ -2775,7 +2773,365 @@ class DB( HydrusDB.HydrusDB ):
        
        return boned_stats
        
-    def _GetFileInfoManagers( self, hash_ids: typing.Collection[ int ], sorted = False, blurhash = False ) -> typing.List[ ClientMediaManagers.FileInfoManager ]:
+    def _GetFileHistory( self, num_steps: int, file_search_context: ClientSearch.FileSearchContext = None, job_key = None ):
+        
+        # TODO: clean this up. it is a mess cribbed from the boned work, and I'm piping similar nonsense down to the db tables
+        # don't supply deleted timestamps for 'all files deleted' and all that gubbins, it is a mess
+        
+        if job_key is None:
+            
+            job_key = ClientThreading.JobKey()
+            
+        if file_search_context is None:
+            
+            file_search_context = ClientSearch.FileSearchContext(
+                location_context = ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY )
+            )
+            
+        current_timestamps_table_name = None
+        deleted_files_table_name = None
+        deleted_timestamps_table_name = None
+        
+        location_context = file_search_context.GetLocationContext()
+        
+        db_location_context = self.modules_files_storage.GetDBLocationContext( location_context )
+        
+        do_not_need_to_search = file_search_context.IsJustSystemEverything() or file_search_context.HasNoPredicates()
+        
+        with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_table_name:
+            
+            with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as deleted_temp_table_name:
+                
+                if do_not_need_to_search:
+                    
+                    current_files_table_name = db_location_context.GetSingleFilesTableName()
+                    
+                else:
+                    
+                    hash_ids = self.modules_files_query.GetHashIdsFromQuery(
+                        file_search_context = file_search_context,
+                        apply_implicit_limit = False,
+                        job_key = job_key
+                    )
+                    
+                    if job_key.IsCancelled():
+                        
+                        return {}
+                        
+                    self._ExecuteMany( f'INSERT OR IGNORE INTO {temp_table_name} ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in hash_ids ) )
+                    
+                    current_files_table_name = temp_table_name
+                    
+                hacks_going_to_work = location_context.IsOneDomain()
+                deleted_logic_makes_sense = location_context.IncludesCurrent() and not location_context.IncludesDeleted()
+                current_domains_have_inverse = CC.TRASH_SERVICE_KEY not in location_context.current_service_keys and CC.COMBINED_FILE_SERVICE_KEY not in location_context.current_service_keys
+                
+                if hacks_going_to_work and deleted_logic_makes_sense and current_domains_have_inverse:
+                    
+                    # note I can't currently support two _complicated_ db location contexts in one query since their mickey mouse temp table has a fixed name
+                    # therefore leave this as IsOneDomain for now
+                    # TODO: plug the DBLocationContext into a manager for temp table names and then come back here
+                    
+                    db_location_context = self.modules_files_storage.GetDBLocationContext( location_context )
+                    
+                    # special IsOneDomain hack
+                    current_timestamps_table_name = db_location_context.GetSingleFilesTableName()
+                    
+                    deleted_location_context = location_context.GetDeletedInverse()
+                    
+                    deleted_db_location_context = self.modules_files_storage.GetDBLocationContext( deleted_location_context )
+                    
+                    # special IsOneDomain hack
+                    deleted_timestamps_table_name = deleted_db_location_context.GetSingleFilesTableName()
+                    
+                    if do_not_need_to_search:
+                        
+                        deleted_files_table_name = deleted_db_location_context.GetSingleFilesTableName()
+                        
+                    else:
+                        
+                        deleted_file_search_context = file_search_context.Duplicate()
+                        deleted_file_search_context.SetLocationContext( deleted_location_context )
+                        
+                        hash_ids = self.modules_files_query.GetHashIdsFromQuery(
+                            file_search_context = deleted_file_search_context,
+                            apply_implicit_limit = False,
+                            job_key = job_key
+                        )
+                        
+                        if job_key.IsCancelled():
+                            
+                            return {}
+                            
+                        self._ExecuteMany( f'INSERT OR IGNORE INTO {deleted_temp_table_name} ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in hash_ids ) )
+                        
+                        deleted_files_table_name = deleted_temp_table_name
+                        
+                if current_timestamps_table_name is None or deleted_files_table_name is None or deleted_timestamps_table_name is None:
+                    
+                    raise HydrusExceptions.TooComplicatedM8()
+                    
+                return self._GetFileHistoryFromTable(
+                    num_steps,
+                    current_files_table_name,
+                    current_timestamps_table_name,
+                    deleted_files_table_name,
+                    deleted_timestamps_table_name,
+                    job_key
+                )
+    def _GetFileHistoryFromTable(
+        self,
+        num_steps: int,
+        current_files_table_name: str,
+        current_timestamps_table_name: str,
+        deleted_files_table_name: str,
+        deleted_timestamps_table_name: str,
+        job_key = None
+    ):
+        
+        # get all sorts of stats and present them in ( timestamp, cumulative_num ) tuple pairs
+        
+        file_history = {}
+        
+        # first let's do current files. we increment when added, decrement when we know removed
+        
+        if current_files_table_name == current_timestamps_table_name:
+            
+            current_timestamps = self._STL( self._Execute( f'SELECT timestamp FROM {current_timestamps_table_name} WHERE timestamp IS NOT NULL;' ) )
+            
+        else:
+            
+            current_timestamps = self._STL( self._Execute( f'SELECT timestamp FROM {current_files_table_name} CROSS JOIN {current_timestamps_table_name} USING ( hash_id ) WHERE timestamp IS NOT NULL;' ) )
+            
+        if job_key.IsCancelled():
+            
+            return file_history
+            
+        if deleted_files_table_name == deleted_timestamps_table_name:
+            
+            since_deleted = self._STL( self._Execute( f'SELECT original_timestamp FROM {deleted_timestamps_table_name} WHERE original_timestamp IS NOT NULL;' ) )
+            
+        else:
+            
+            since_deleted = self._STL( self._Execute( f'SELECT original_timestamp FROM {deleted_files_table_name} CROSS JOIN {deleted_timestamps_table_name} USING ( hash_id ) WHERE original_timestamp IS NOT NULL;' ) )
+            
+        if job_key.IsCancelled():
+            
+            return file_history
+            
+        all_known_import_timestamps = list( current_timestamps )
+        
+        all_known_import_timestamps.extend( since_deleted )
+        
+        all_known_import_timestamps.sort()
+        if deleted_files_table_name == deleted_timestamps_table_name:
+            
+            deleted_timestamps = self._STL( self._Execute( f'SELECT timestamp FROM {deleted_timestamps_table_name} WHERE timestamp IS NOT NULL ORDER BY timestamp ASC;' ) )
+            
+            ( total_deleted_files, ) = self._Execute( f'SELECT COUNT( * ) FROM {deleted_timestamps_table_name} WHERE timestamp IS NULL;' ).fetchone()
+            
+        else:
+            
+            deleted_timestamps = self._STL( self._Execute( f'SELECT timestamp FROM {deleted_files_table_name} CROSS JOIN {deleted_timestamps_table_name} USING ( hash_id ) WHERE timestamp IS NOT NULL ORDER BY timestamp ASC;' ) )
+            
+            ( total_deleted_files, ) = self._Execute( f'SELECT COUNT( * ) FROM {deleted_files_table_name} CROSS JOIN {deleted_timestamps_table_name} USING ( hash_id ) WHERE timestamp IS NULL;' ).fetchone()
+            
+        if job_key.IsCancelled():
+            
+            return file_history
+            
+        combined_timestamps_with_delta = [ ( timestamp, 1 ) for timestamp in all_known_import_timestamps ]
+        combined_timestamps_with_delta.extend( ( ( timestamp, -1 ) for timestamp in deleted_timestamps ) )
+        
+        combined_timestamps_with_delta.sort()
+        
+        current_file_history = []
+        
+        if len( combined_timestamps_with_delta ) > 0:
+            
+            # set 0 on first file import time
+            current_file_history.append( ( combined_timestamps_with_delta[0][0], 0 ) )
+            
+            if len( combined_timestamps_with_delta ) < 2:
+                
+                step_gap = 1
+                
+            else:
+                
+                step_gap = max( ( combined_timestamps_with_delta[-1][0] - combined_timestamps_with_delta[0][0] ) // num_steps, 1 )
+                
+            total_current_files = 0
+            step_timestamp = combined_timestamps_with_delta[0][0]
+            
+            for ( timestamp, delta ) in combined_timestamps_with_delta:
+                
+                while timestamp > step_timestamp + step_gap:
+                    
+                    current_file_history.append( ( step_timestamp, total_current_files ) )
+                    
+                    step_timestamp += step_gap
+                    
+                total_current_files += delta
+                
+        file_history[ 'current' ] = current_file_history
+        
+        if job_key.IsCancelled():
+            
+            return file_history
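The loop above turns a sorted list of (timestamp, ±1) events into a chart-friendly cumulative series sampled roughly every `step_gap` seconds. The same idea in isolation, as a hypothetical standalone helper that mirrors (but is not identical to) the method body:

```python
def cumulative_series(events, num_steps):
    """events: (timestamp, delta) pairs, delta +1 for an import and -1
    for a delete. Returns downsampled (timestamp, running_total) points
    like the ones the file history chart plots."""
    events = sorted(events)
    if not events:
        return []
    series = [(events[0][0], 0)]  # the chart starts at zero
    span = events[-1][0] - events[0][0]
    step_gap = max(span // num_steps, 1)
    total = 0
    step_timestamp = events[0][0]
    for timestamp, delta in events:
        # emit a sample for every whole step we have crossed
        while timestamp > step_timestamp + step_gap:
            series.append((step_timestamp, total))
            step_timestamp += step_gap
        total += delta
    series.append((step_timestamp, total))  # final sample
    return series

s = cumulative_series([(0, 1), (10, 1), (20, -1), (30, 1)], 3)
assert s[0] == (0, 0)
assert s[-1] == (20, 2)
```

Downsampling to `num_steps` points is what keeps the chart cheap to draw even over hundreds of thousands of events.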
+        deleted_file_history = []
+        
+        if len( deleted_timestamps ) > 0:
+            
+            if len( deleted_timestamps ) < 2:
+                
+                step_gap = 1
+                
+            else:
+                
+                step_gap = max( ( deleted_timestamps[-1] - deleted_timestamps[0] ) // num_steps, 1 )
+                
+            step_timestamp = deleted_timestamps[0]
+            
+            for deleted_timestamp in deleted_timestamps:
+                
+                while deleted_timestamp > step_timestamp + step_gap:
+                    
+                    deleted_file_history.append( ( step_timestamp, total_deleted_files ) )
+                    
+                    step_timestamp += step_gap
+                    
+                total_deleted_files += 1
+                
+        file_history[ 'deleted' ] = deleted_file_history
+        # and inbox, which will work backwards since we have numbers for archiving. several subtle differences here
+        # we know the inbox now and the recent history of archives and file changes
+        # working backwards in time (which reverses increment/decrement):
+        # - an archive increments
+        # - a file import decrements
+        # note that we archive right before we delete a file, so file deletes shouldn't change anything for inbox count. all deletes are on archived files, so the increment will already be counted
+        # UPDATE: and now we add archived, which is mostly the same deal but we subtract from current files to start and don't care about file imports since they are always inbox but do care about file deletes
+        
+        inbox_file_history = []
+        archive_file_history = []
+        
+        if current_files_table_name == ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ):
+            
+            ( total_inbox_files, ) = self._Execute( 'SELECT COUNT( * ) FROM file_inbox;' ).fetchone()
+            
+            if job_key.IsCancelled():
+                
+                return file_history
+                
+            # note also that we do not scrub archived time on a file delete, so this upcoming fetch is for all files ever. this is useful, so don't undo it m8
+            archive_timestamps = self._STL( self._Execute( 'SELECT archived_timestamp FROM archive_timestamps ORDER BY archived_timestamp ASC;' ) )
+            
+        else:
+            
+            ( total_inbox_files, ) = self._Execute( f'SELECT COUNT( * ) FROM {current_files_table_name} CROSS JOIN file_inbox USING ( hash_id );' ).fetchone()
+            
+            if job_key.IsCancelled():
+                
+                return file_history
+                
+            # note also that we do not scrub archived time on a file delete, so this upcoming fetch is for all files ever. this is useful, so don't undo it m8
+            archive_timestamps = self._STL( self._Execute( f'SELECT archived_timestamp FROM {current_files_table_name} CROSS JOIN archive_timestamps USING ( hash_id ) ORDER BY archived_timestamp ASC;' ) )
+            
+        if job_key.IsCancelled():
+            
+            return file_history
+            
+        total_current_files = len( current_timestamps )
+        
+        # I now exclude updates and trash by searching 'all my files'
+        total_update_files = 0 #self.modules_files_storage.GetCurrentFilesCount( self.modules_services.local_update_service_id, HC.CONTENT_STATUS_CURRENT )
+        total_trash_files = 0 #self.modules_files_storage.GetCurrentFilesCount( self.modules_services.trash_service_id, HC.CONTENT_STATUS_CURRENT )
+        
+        total_archive_files = ( total_current_files - total_update_files - total_trash_files ) - total_inbox_files
+        
+        if len( archive_timestamps ) > 0:
+            
+            first_archive_time = archive_timestamps[0]
+            
+            combined_timestamps_with_deltas = [ ( timestamp, 1, -1 ) for timestamp in archive_timestamps ]
+            combined_timestamps_with_deltas.extend( ( ( timestamp, -1, 0 ) for timestamp in all_known_import_timestamps if timestamp >= first_archive_time ) )
+            combined_timestamps_with_deltas.extend( ( ( timestamp, 0, 1 ) for timestamp in deleted_timestamps if timestamp >= first_archive_time ) )
+            
+            combined_timestamps_with_deltas.sort( reverse = True )
+            
+            if len( combined_timestamps_with_deltas ) > 0:
+                
+                if len( combined_timestamps_with_deltas ) < 2:
+                    
+                    step_gap = 1
+                    
+                else:
+                    
+                    # reversed, so first minus last
+                    step_gap = max( ( combined_timestamps_with_deltas[0][0] - combined_timestamps_with_deltas[-1][0] ) // num_steps, 1 )
+                    
+                step_timestamp = combined_timestamps_with_deltas[0][0]
+                
+                for ( archived_timestamp, inbox_delta, archive_delta ) in combined_timestamps_with_deltas:
+                    
+                    while archived_timestamp < step_timestamp - step_gap:
+                        
+                        inbox_file_history.append( ( archived_timestamp, total_inbox_files ) )
+                        archive_file_history.append( ( archived_timestamp, total_archive_files ) )
+                        
+                        step_timestamp -= step_gap
+                        
+                    total_inbox_files += inbox_delta
+                    total_archive_files += archive_delta
+                    
+                inbox_file_history.reverse()
+                archive_file_history.reverse()
+                
+        file_history[ 'inbox' ] = inbox_file_history
+        file_history[ 'archive' ] = archive_file_history
+        
+        return file_history
+        
+    def _GetFileInfoManagers( self, hash_ids: typing.Collection[ int ], sorted = False ) -> typing.List[ ClientMediaManagers.FileInfoManager ]:
        
        ( cached_media_results, missing_hash_ids ) = self._weakref_media_result_cache.GetMediaResultsAndMissing( hash_ids )
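Since hydrus records archive times but not historical inbox counts, the section above reconstructs the past by walking events newest-to-oldest from today's totals, with each delta's sign reversed. A simplified standalone sketch of that backwards walk (inbox only, hypothetical helper, no downsampling):

```python
def inbox_history_backwards(inbox_now, archive_times, import_times):
    """Reconstruct past inbox counts from the current count by walking
    events newest-to-oldest: undoing an archive puts a file back in the
    inbox (+1), undoing an import removes it from the inbox (-1).
    Returns (timestamp, inbox_count_just_after_that_event) pairs."""
    events = [(t, 1) for t in archive_times] + [(t, -1) for t in import_times]
    events.sort(reverse=True)
    history = []
    count = inbox_now
    for t, delta in events:
        history.append((t, count))
        count += delta  # step further into the past
    history.reverse()
    return history

# one file imported at t=1 and archived at t=5; inbox is empty now,
# so it held one file between the import and the archive
hist = inbox_history_backwards(0, [5], [1])
assert hist == [(1, 1), (5, 0)]
```

Deletes are absent here for the reason the comments above give: hydrus archives right before deleting, so a delete never changes the inbox count on its own.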
@@ -2790,9 +3146,8 @@ class DB( HydrusDB.HydrusDB ):
        
        # temp hashes to metadata
        hash_ids_to_info = { hash_id : ClientMediaManagers.FileInfoManager( hash_id, missing_hash_ids_to_hashes[ hash_id ], size, mime, width, height, duration, num_frames, has_audio, num_words ) for ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) in self._Execute( 'SELECT * FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_table_name ) ) }
        
-        if blurhash:
-            
-            hash_ids_to_blurhash = self.modules_files_metadata_basic.GetHashIdsToBlurHash( temp_table_name )
+        hash_ids_to_blurhashes = self.modules_files_metadata_basic.GetHashIdsToBlurhashes( temp_table_name )
        
        # build it
@@ -2801,10 +3156,6 @@ class DB( HydrusDB.HydrusDB ):
        
            if hash_id in hash_ids_to_info:
                
                file_info_manager = hash_ids_to_info[ hash_id ]
                
-                if blurhash and hash_id in hash_ids_to_blurhash:
-                    
-                    file_info_manager.blurhash = hash_ids_to_blurhash[hash_id]
-                    
            else:
@@ -2813,6 +3164,8 @@ class DB( HydrusDB.HydrusDB ):

file_info_manager = ClientMediaManagers.FileInfoManager( hash_id, hash )

file_info_manager.blurhash = hash_ids_to_blurhashes.get( hash_id, None )

file_info_managers.append( file_info_manager )
@@ -3302,8 +3655,9 @@ class DB( HydrusDB.HydrusDB ):

hash_ids_to_tags_managers = self._GetForceRefreshTagsManagersWithTableHashIds( missing_hash_ids, temp_table_name, hash_ids_to_current_file_service_ids = hash_ids_to_current_file_service_ids )

hash_ids_to_blurhash = self.modules_files_metadata_basic.GetHashIdsToBlurHash( temp_table_name )

hash_ids_to_pixel_hashes = self.modules_similar_files.GetHashIdsToPixelHashes( temp_table_name )
hash_ids_to_blurhashes = self.modules_files_metadata_basic.GetHashIdsToBlurhashes( temp_table_name )

has_exif_hash_ids = self.modules_files_metadata_basic.GetHasEXIFHashIds( temp_table_name )
has_human_readable_embedded_metadata_hash_ids = self.modules_files_metadata_basic.GetHasHumanReadableEmbeddedMetadataHashIds( temp_table_name )
has_icc_profile_hash_ids = self.modules_files_metadata_basic.GetHasICCProfileHashIds( temp_table_name )
@@ -3352,14 +3706,7 @@ class DB( HydrusDB.HydrusDB ):

timestamps_manager.SetDeletedTimestamps( deleted_file_service_keys_to_timestamps )
timestamps_manager.SetPreviouslyImportedTimestamps( deleted_file_service_keys_to_previously_imported_timestamps )

if hash_id in hash_ids_to_local_file_deletion_reasons:

    local_file_deletion_reason = hash_ids_to_local_file_deletion_reasons[ hash_id ]

else:

    local_file_deletion_reason = None

local_file_deletion_reason = hash_ids_to_local_file_deletion_reasons.get( hash_id, None )

locations_manager = ClientMediaManagers.LocationsManager(
    set( current_file_service_keys_to_timestamps.keys() ),
@@ -3421,10 +3768,8 @@ class DB( HydrusDB.HydrusDB ):

file_info_manager.has_exif = hash_id in has_exif_hash_ids
file_info_manager.has_human_readable_embedded_metadata = hash_id in has_human_readable_embedded_metadata_hash_ids
file_info_manager.has_icc_profile = hash_id in has_icc_profile_hash_ids

if hash_id in hash_ids_to_blurhash:

    file_info_manager.blurhash = hash_ids_to_blurhash[hash_id]

file_info_manager.pixel_hash = hash_ids_to_pixel_hashes.get( hash_id, None )
file_info_manager.blurhash = hash_ids_to_blurhashes.get( hash_id, None )

missing_media_results.append( ClientMediaResult.MediaResult( file_info_manager, tags_manager, timestamps_manager, locations_manager, ratings_manager, notes_manager, file_viewing_stats_manager ) )
@@ -4626,9 +4971,8 @@ class DB( HydrusDB.HydrusDB ):

self.modules_files_metadata_basic.SetHasEXIF( hash_id, file_import_job.HasEXIF() )
self.modules_files_metadata_basic.SetHasHumanReadableEmbeddedMetadata( hash_id, file_import_job.HasHumanReadableEmbeddedMetadata() )
self.modules_files_metadata_basic.SetHasICCProfile( hash_id, file_import_job.HasICCProfile() )
self.modules_files_metadata_basic.SetBlurhash( hash_id, file_import_job.GetBlurhash() )

self.modules_files_metadata_basic.SetBlurHash( hash_id, file_import_job.GetBlurhash())

#

file_modified_timestamp = file_import_job.GetFileModifiedTimestamp()
@@ -6494,7 +6838,7 @@ class DB( HydrusDB.HydrusDB ):

elif action == 'file_duplicate_hashes': result = self.modules_files_duplicates.GetFileHashesByDuplicateType( *args, **kwargs )
elif action == 'file_duplicate_info': result = self.modules_files_duplicates.GetFileDuplicateInfo( *args, **kwargs )
elif action == 'file_hashes': result = self.modules_hashes.GetFileHashes( *args, **kwargs )
elif action == 'file_history': result = self.modules_files_metadata_rich.GetFileHistory( *args, **kwargs )
elif action == 'file_history': result = self._GetFileHistory( *args, **kwargs )
elif action == 'file_info_managers': result = self._GetFileInfoManagersFromHashes( *args, **kwargs )
elif action == 'file_info_managers_from_ids': result = self._GetFileInfoManagers( *args, **kwargs )
elif action == 'file_maintenance_get_job_counts': result = self.modules_files_maintenance_queue.GetJobCounts( *args, **kwargs )
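The elif chain above routes a read-action string to the matching handler. The same routing is often expressed as a dispatch dict, which keeps every supported action visible in one place; a minimal Qt-free sketch with hypothetical handler names (not the hydrus API):

```python
class Reader:

    def _get_file_hashes( self, *args, **kwargs ):

        # stand-in for a real hash lookup
        return [ 'abc123' ]

    def _get_file_history( self, num_steps ):

        # stand-in for the real file history read
        return { 'current': [], 'num_steps': num_steps }

    def read( self, action, *args, **kwargs ):

        # one table instead of a long elif chain
        handlers = {
            'file_hashes': self._get_file_hashes,
            'file_history': self._get_file_history,
        }

        if action not in handlers:

            raise KeyError( f'Unknown read command: {action}' )

        return handlers[ action ]( *args, **kwargs )
```

The elif style hydrus uses is equivalent; a dict simply trades a linear scan for a hash lookup and makes the action list greppable.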
@@ -9792,6 +10136,32 @@ class DB( HydrusDB.HydrusDB ):

if version == 544:

    self._controller.frame_splash_status.SetSubtext( f'scheduling some maintenance work' )

    all_local_hash_ids = self.modules_files_storage.GetCurrentHashIdsList( self.modules_services.combined_local_file_service_id )

    with self._MakeTemporaryIntegerTable( all_local_hash_ids, 'hash_id' ) as temp_hash_ids_table_name:

        if not self._TableExists( 'external_master.blurhashes' ):

            self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.blurhashes ( hash_id INTEGER PRIMARY KEY, blurhash TEXT );' )

            hash_ids = self._STS( self._Execute( f'SELECT hash_id FROM {temp_hash_ids_table_name} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {HydrusData.SplayListForDB( HC.MIMES_WITH_THUMBNAILS )};' ) )

            self.modules_files_maintenance_queue.AddJobs( hash_ids, ClientFiles.REGENERATE_FILE_DATA_JOB_BLURHASH )

        hash_ids = self._STS( self._Execute( f'SELECT hash_id FROM {temp_hash_ids_table_name} CROSS JOIN files_info USING ( hash_id ) WHERE mime = ?;', ( HC.APPLICATION_ZIP, ) ) )

        self.modules_files_maintenance_queue.AddJobs( hash_ids, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )

    if not self._IdealIndexExists( 'external_caches.file_maintenance_jobs', [ 'job_type' ] ):

        self._CreateIndex( 'external_caches.file_maintenance_jobs', [ 'job_type' ], False )

self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )

self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
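The `version == 544` block above follows the standard idempotent-migration shape: create-if-missing, schedule follow-up work, then bump the stored version. A self-contained sqlite3 sketch of that shape (simplified single-file schema, not the real hydrus update path):

```python
import sqlite3

def update_database( db, version ):

    if version == 544:

        # safe to re-run: both statements are no-ops the second time
        db.execute( 'CREATE TABLE IF NOT EXISTS blurhashes ( hash_id INTEGER PRIMARY KEY, blurhash TEXT );' )

        db.execute( 'CREATE INDEX IF NOT EXISTS file_maintenance_jobs_job_type_index ON file_maintenance_jobs ( job_type );' )

    # record that this step completed
    db.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE version ( version INTEGER );' )
db.execute( 'INSERT INTO version VALUES ( 544 );' )
db.execute( 'CREATE TABLE file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER );' )

update_database( db, 544 )

( new_version, ) = db.execute( 'SELECT version FROM version;' ).fetchone()
```

Bumping the version row last means a crash mid-migration replays the whole (idempotent) step on next boot rather than skipping it.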
@@ -112,6 +112,8 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

self.modules_files_metadata_basic.SetHasEXIF( hash_id, has_exif )

new_file_info.add( ( hash_id, hash ) )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_HAS_HUMAN_READABLE_EMBEDDED_METADATA:

    previous_has_human_readable_embedded_metadata = self.modules_files_metadata_basic.GetHasHumanReadableEmbeddedMetadata( hash_id )
@@ -123,6 +125,8 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

self.modules_files_metadata_basic.SetHasHumanReadableEmbeddedMetadata( hash_id, has_human_readable_embedded_metadata )

new_file_info.add( ( hash_id, hash ) )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_HAS_ICC_PROFILE:

    previous_has_icc_profile = self.modules_files_metadata_basic.GetHasICCProfile( hash_id )
@@ -139,6 +143,8 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

new_file_info.add( ( hash_id, hash ) )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_PIXEL_HASH:

    pixel_hash = additional_data
@@ -147,6 +153,8 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

self.modules_similar_files.SetPixelHash( hash_id, pixel_hash_id )

new_file_info.add( ( hash_id, hash ) )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_MODIFIED_TIMESTAMP:

    file_modified_timestamp = additional_data
@@ -180,16 +188,32 @@ class ClientDBFilesMaintenance( ClientDBModule.ClientDBModule ):

if self.modules_similar_files.FileIsInSystem( hash_id ):

    self.modules_similar_files.StopSearchingFile( hash_id )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL or job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL:

    self.modules_files_maintenance_queue.AddJobs( ( hash_id, ), ClientFiles.REGENERATE_FILE_DATA_JOB_BLURHASH )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL or job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_REFIT_THUMBNAIL:

    if job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_FORCE_THUMBNAIL:

        was_regenerated = True

    else:

        was_regenerated = additional_data

    if was_regenerated:

        self.modules_files_maintenance_queue.AddJobs( ( hash_id, ), ClientFiles.REGENERATE_FILE_DATA_JOB_BLURHASH )

elif job_type == ClientFiles.REGENERATE_FILE_DATA_JOB_BLURHASH:

    blurhash: str = additional_data

    self.modules_files_metadata_basic.SetBlurHash( hash_id, blurhash )
    self.modules_files_metadata_basic.SetBlurhash( hash_id, blurhash )

    new_file_info.add( ( hash_id, hash ) )
@@ -24,6 +24,17 @@ class ClientDBFilesMaintenanceQueue( ClientDBModule.ClientDBModule ):

self.modules_hashes_local_cache = modules_hashes_local_cache


def _GetInitialIndexGenerationDict( self ) -> dict:

    index_generation_dict = {}

    index_generation_dict[ 'external_caches.file_maintenance_jobs' ] = [
        ( [ 'job_type' ], False, 545 )
    ]

    return index_generation_dict


def _GetInitialTableGenerationDict( self ) -> dict:

    return {
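The module declares its indices declaratively: a table name mapped to ( columns, unique, version-added ) tuples. A sketch of how such a spec can be turned into `CREATE INDEX` statements against sqlite3 (simplified; the real ClientDBModule plumbing differs, and the index name scheme here is an assumption):

```python
import sqlite3

def create_indices( db, index_generation_dict ):

    for ( table_name, specs ) in index_generation_dict.items():

        # drop any schema prefix like 'external_caches.' for naming
        bare_table = table_name.split( '.' )[-1]

        for ( columns, unique, version_added ) in specs:

            index_name = '{}_{}_index'.format( bare_table, '_'.join( columns ) )
            create = 'CREATE UNIQUE INDEX' if unique else 'CREATE INDEX'

            db.execute( '{} IF NOT EXISTS {} ON {} ( {} );'.format( create, index_name, bare_table, ', '.join( columns ) ) )

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER );' )

create_indices( db, { 'external_caches.file_maintenance_jobs': [ ( [ 'job_type' ], False, 545 ) ] } )
```

Keeping the version number in the spec lets the same dict drive both fresh installs and the v545 update path that backfills the index on existing databases.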
@@ -79,7 +90,7 @@ class ClientDBFilesMaintenanceQueue( ClientDBModule.ClientDBModule ):

if job_types is None:

    possible_job_types = ClientFiles.ALL_REGEN_JOBS_IN_PREFERRED_ORDER
    possible_job_types = ClientFiles.ALL_REGEN_JOBS_IN_RUN_ORDER

else:
@@ -104,7 +115,7 @@ class ClientDBFilesMaintenanceQueue( ClientDBModule.ClientDBModule ):

hashes_to_job_types = {}

sort_index = { job_type : index for ( index, job_type ) in enumerate( ClientFiles.ALL_REGEN_JOBS_IN_PREFERRED_ORDER ) }
sort_index = { job_type : index for ( index, job_type ) in enumerate( ClientFiles.ALL_REGEN_JOBS_IN_RUN_ORDER ) }

for ( hash_id, job_types ) in hash_ids_to_job_types.items():
@@ -36,7 +36,7 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):

'main.has_icc_profile' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 465 ),
'main.has_exif' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 505 ),
'main.has_human_readable_embedded_metadata' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );', 505 ),
'main.blurhash' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, blurhash TEXT );', 545 ),
'external_master.blurhashes' : ( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, blurhash TEXT );', 545 ),
}
@@ -55,6 +55,67 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):

self._ExecuteMany( insert_phrase + ' files_info ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ? );', rows )


def GetBlurhash( self, hash_id: int ) -> str:

    result = self._Execute( 'SELECT blurhash FROM blurhashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    if result is None:

        raise HydrusExceptions.DataMissing( 'Did not have blurhash information for that file!' )

    ( blurhash, ) = result

    return blurhash


def GetHasEXIF( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_exif WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasEXIFHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_exif_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_exif USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_exif_hash_ids


def GetHasHumanReadableEmbeddedMetadata( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_human_readable_embedded_metadata WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasHumanReadableEmbeddedMetadataHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_human_readable_embedded_metadata_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_human_readable_embedded_metadata USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_human_readable_embedded_metadata_hash_ids


def GetHashIdsToBlurhashes( self, hash_ids_table_name: str ):

    return dict( self._Execute( 'SELECT hash_id, blurhash FROM {} CROSS JOIN blurhashes USING ( hash_id );'.format( hash_ids_table_name ) ) )


def GetHasICCProfile( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_icc_profile WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasICCProfileHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_icc_profile_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_icc_profile USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_icc_profile_hash_ids


def GetMime( self, hash_id: int ) -> int:

    result = self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
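`GetHasEXIF` and friends treat a one-column table as a set: the mere existence of a `hash_id` row means the flag is true, and a `CROSS JOIN` against a temp table of ids gives the batch form in one query. A self-contained sqlite3 sketch of both lookups:

```python
import sqlite3

db = sqlite3.connect( ':memory:' )

# presence table: a row existing means 'this file has exif'
db.execute( 'CREATE TABLE has_exif ( hash_id INTEGER PRIMARY KEY );' )
db.executemany( 'INSERT INTO has_exif VALUES ( ? );', [ ( 1, ), ( 3, ) ] )

# temp table of the ids we currently care about
db.execute( 'CREATE TEMP TABLE temp_hash_ids ( hash_id INTEGER );' )
db.executemany( 'INSERT INTO temp_hash_ids VALUES ( ? );', [ ( 1, ), ( 2, ), ( 3, ) ] )

def get_has_exif( db, hash_id ):

    # single-file check: row exists -> True
    result = db.execute( 'SELECT hash_id FROM has_exif WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None

# batch check: join the id table against the presence table
has_exif_hash_ids = { hash_id for ( hash_id, ) in db.execute( 'SELECT hash_id FROM temp_hash_ids CROSS JOIN has_exif USING ( hash_id );' ) }
```

Storing only the positive rows keeps the table tiny when most files lack the flag, at the cost of not distinguishing 'false' from 'never checked'.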
@@ -109,7 +170,7 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):

( 'has_exif', 'hash_id' ),
( 'has_human_readable_embedded_metadata', 'hash_id' ),
( 'has_icc_profile', 'hash_id' ),
( 'blurhash', 'hash_id' )
( 'blurhashes', 'hash_id' )
]
@@ -142,48 +203,6 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):

return total_size


def GetHasEXIF( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_exif WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasEXIFHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_exif_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_exif USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_exif_hash_ids


def GetHasHumanReadableEmbeddedMetadata( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_human_readable_embedded_metadata WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasHumanReadableEmbeddedMetadataHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_human_readable_embedded_metadata_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_human_readable_embedded_metadata USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_human_readable_embedded_metadata_hash_ids


def GetHasICCProfile( self, hash_id: int ):

    result = self._Execute( 'SELECT hash_id FROM has_icc_profile WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    return result is not None


def GetHasICCProfileHashIds( self, hash_ids_table_name: str ) -> typing.Set[ int ]:

    has_icc_profile_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN has_icc_profile USING ( hash_id );'.format( hash_ids_table_name ) ) )

    return has_icc_profile_hash_ids


def SetHasEXIF( self, hash_id: int, has_exif: bool ):

    if has_exif:
@@ -217,31 +236,10 @@ class ClientDBFilesMetadataBasic( ClientDBModule.ClientDBModule ):

else:

    self._Execute( 'DELETE FROM has_icc_profile WHERE hash_id = ?;', ( hash_id, ) )


def SetBlurHash( self, hash_id: int, blurhash: str ):

    # TODO blurhash db stuff

    self._Execute('INSERT OR REPLACE INTO blurhash ( hash_id, blurhash ) VALUES ( ?, ?);', (hash_id, blurhash))


def GetBlurHash( self, hash_id: int ) -> str:

    result = self._Execute( 'SELECT blurhash FROM blurhash WHERE hash_id = ?;', ( hash_id, ) ).fetchone()

    # TODO blurhash db stuff

    if result is None:

        raise HydrusExceptions.DataMissing( 'Did not have blurhash information for that file!' )

    ( blurhash, ) = result

    return blurhash


def GetHashIdsToBlurHash( self, hash_ids_table_name: str ):

    return dict( self._Execute( 'SELECT hash_id, blurhash FROM {} CROSS JOIN blurhash USING ( hash_id );'.format( hash_ids_table_name ) ) )


def SetBlurhash( self, hash_id: int, blurhash: str ):

    self._Execute('INSERT OR REPLACE INTO blurhashes ( hash_id, blurhash ) VALUES ( ?, ?);', ( hash_id, blurhash ) )
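The stored value is a plain base83 string. For reference, the first character of a blurhash packs the x/y component counts and the next four encode the average colour, per the published blurhash spec. A small decoder for just those header fields (a sketch from the spec; the clients use a full encode/decode library, and this skips the AC coefficients entirely):

```python
# the standard base83 alphabet from the blurhash spec
ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz#$%*+,-.:;=?@[]^_{|}~'

def decode83( s ):

    value = 0

    for c in s:

        value = value * 83 + ALPHABET.index( c )

    return value

def get_blurhash_header( blurhash ):

    # char 0: packed component counts
    size_flag = decode83( blurhash[0] )

    num_y = ( size_flag // 9 ) + 1
    num_x = ( size_flag % 9 ) + 1

    # chars 2-5: average colour as a 24-bit sRGB value
    dc = decode83( blurhash[2:6] )

    ( r, g, b ) = ( dc >> 16, ( dc >> 8 ) & 255, dc & 255 )

    return ( num_x, num_y, ( r, g, b ) )
```

A 4x3-component hash like the changelog's '15 or 16 cells' case is under 30 characters, which is why storing it as a TEXT column per file is cheap.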
@@ -66,175 +66,6 @@ class ClientDBFilesMetadataRich( ClientDBModule.ClientDBModule ):

return [ hash for hash in hashes if hash in hashes_to_hash_ids and hashes_to_hash_ids[ hash ] in valid_hash_ids ]


def GetFileHistory( self, num_steps: int ):

    # get all sorts of stats and present them in ( timestamp, cumulative_num ) tuple pairs

    file_history = {}

    # first let's do current files. we increment when added, decrement when we know removed

    current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_media_service_id, HC.CONTENT_STATUS_CURRENT )

    current_timestamps = self._STL( self._Execute( 'SELECT timestamp FROM {};'.format( current_files_table_name ) ) )

    deleted_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_media_service_id, HC.CONTENT_STATUS_DELETED )

    since_deleted = self._STL( self._Execute( 'SELECT original_timestamp FROM {} WHERE original_timestamp IS NOT NULL;'.format( deleted_files_table_name ) ) )

    all_known_import_timestamps = list( current_timestamps )

    all_known_import_timestamps.extend( since_deleted )

    all_known_import_timestamps.sort()

    deleted_timestamps = self._STL( self._Execute( 'SELECT timestamp FROM {} WHERE timestamp IS NOT NULL ORDER BY timestamp ASC;'.format( deleted_files_table_name ) ) )

    combined_timestamps_with_delta = [ ( timestamp, 1 ) for timestamp in all_known_import_timestamps ]
    combined_timestamps_with_delta.extend( ( ( timestamp, -1 ) for timestamp in deleted_timestamps ) )

    combined_timestamps_with_delta.sort()

    current_file_history = []

    if len( combined_timestamps_with_delta ) > 0:

        # set 0 on first file import time
        current_file_history.append( ( combined_timestamps_with_delta[0][0], 0 ) )

        if len( combined_timestamps_with_delta ) < 2:

            step_gap = 1

        else:

            step_gap = max( ( combined_timestamps_with_delta[-1][0] - combined_timestamps_with_delta[0][0] ) // num_steps, 1 )

        total_current_files = 0
        step_timestamp = combined_timestamps_with_delta[0][0]

        for ( timestamp, delta ) in combined_timestamps_with_delta:

            while timestamp > step_timestamp + step_gap:

                current_file_history.append( ( step_timestamp, total_current_files ) )

                step_timestamp += step_gap

            total_current_files += delta

    file_history[ 'current' ] = current_file_history

    # now deleted times. we will pre-populate total_num_files with non-timestamped records

    ( total_deleted_files, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE timestamp IS NULL;'.format( deleted_files_table_name ) ).fetchone()

    deleted_file_history = []

    if len( deleted_timestamps ) > 0:

        if len( deleted_timestamps ) < 2:

            step_gap = 1

        else:

            step_gap = max( ( deleted_timestamps[-1] - deleted_timestamps[0] ) // num_steps, 1 )

        step_timestamp = deleted_timestamps[0]

        for deleted_timestamp in deleted_timestamps:

            while deleted_timestamp > step_timestamp + step_gap:

                deleted_file_history.append( ( step_timestamp, total_deleted_files ) )

                step_timestamp += step_gap

            total_deleted_files += 1

    file_history[ 'deleted' ] = deleted_file_history

    # and inbox, which will work backwards since we have numbers for archiving. several subtle differences here
    # we know the inbox now and the recent history of archives and file changes
    # working backwards in time (which reverses increment/decrement):
    # - an archive increments
    # - a file import decrements
    # note that we archive right before we delete a file, so file deletes shouldn't change anything for inbox count. all deletes are on archived files, so the increment will already be counted
    # UPDATE: and now we add archived, which is mostly the same deal but we subtract from current files to start and don't care about file imports since they are always inbox but do care about file deletes

    inbox_file_history = []
    archive_file_history = []

    ( total_inbox_files, ) = self._Execute( 'SELECT COUNT( * ) FROM file_inbox;' ).fetchone()
    total_current_files = len( current_timestamps )

    # I now exclude updates and trash by searching 'all my files'
    total_update_files = 0 #self.modules_files_storage.GetCurrentFilesCount( self.modules_services.local_update_service_id, HC.CONTENT_STATUS_CURRENT )
    total_trash_files = 0 #self.modules_files_storage.GetCurrentFilesCount( self.modules_services.trash_service_id, HC.CONTENT_STATUS_CURRENT )

    total_archive_files = ( total_current_files - total_update_files - total_trash_files ) - total_inbox_files

    # note also that we do not scrub archived time on a file delete, so this upcoming fetch is for all files ever. this is useful, so don't undo it m8
    archive_timestamps = self._STL( self._Execute( 'SELECT archived_timestamp FROM archive_timestamps ORDER BY archived_timestamp ASC;' ) )

    if len( archive_timestamps ) > 0:

        first_archive_time = archive_timestamps[0]

        combined_timestamps_with_deltas = [ ( timestamp, 1, -1 ) for timestamp in archive_timestamps ]
        combined_timestamps_with_deltas.extend( ( ( timestamp, -1, 0 ) for timestamp in all_known_import_timestamps if timestamp >= first_archive_time ) )
        combined_timestamps_with_deltas.extend( ( ( timestamp, 0, 1 ) for timestamp in deleted_timestamps if timestamp >= first_archive_time ) )

        combined_timestamps_with_deltas.sort( reverse = True )

        if len( combined_timestamps_with_deltas ) > 0:

            if len( combined_timestamps_with_deltas ) < 2:

                step_gap = 1

            else:

                # reversed, so first minus last
                step_gap = max( ( combined_timestamps_with_deltas[0][0] - combined_timestamps_with_deltas[-1][0] ) // num_steps, 1 )

            step_timestamp = combined_timestamps_with_deltas[0][0]

            for ( archived_timestamp, inbox_delta, archive_delta ) in combined_timestamps_with_deltas:

                while archived_timestamp < step_timestamp - step_gap:

                    inbox_file_history.append( ( archived_timestamp, total_inbox_files ) )
                    archive_file_history.append( ( archived_timestamp, total_archive_files ) )

                    step_timestamp -= step_gap

                total_inbox_files += inbox_delta
                total_archive_files += archive_delta

            inbox_file_history.reverse()
            archive_file_history.reverse()

    file_history[ 'inbox' ] = inbox_file_history
    file_history[ 'archive' ] = archive_file_history

    return file_history


def GetHashIdStatus( self, hash_id, prefix = '' ) -> ClientImportFiles.FileImportStatus:

    if prefix != '':
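The method removed above (and reborn as `DB._GetFileHistory`) compresses an arbitrarily long event list into at most roughly `num_steps` samples: merge +1 import and -1 delete events, sort, then emit the running total once per `step_gap`. The forward-walking core as a standalone sketch:

```python
def build_history( import_timestamps, delete_timestamps, num_steps ):

    # imports add a file, deletes remove one
    events = [ ( t, 1 ) for t in import_timestamps ]
    events.extend( ( t, -1 ) for t in delete_timestamps )

    events.sort()

    if len( events ) == 0:

        return []

    # aim for at most ~num_steps samples across the whole span
    step_gap = max( ( events[-1][0] - events[0][0] ) // num_steps, 1 )

    # anchor the chart at zero on the first event
    history = [ ( events[0][0], 0 ) ]

    total = 0
    step_timestamp = events[0][0]

    for ( timestamp, delta ) in events:

        # flush a sample for every step boundary we have passed
        while timestamp > step_timestamp + step_gap:

            history.append( ( step_timestamp, total ) )

            step_timestamp += step_gap

        total += delta

    return history
```

Capping the output this way is what lets a multi-million-file history render as a fixed-resolution chart (the GUI asks for 7680 steps) without thinning the data beforehand in SQL.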
@@ -619,6 +619,11 @@ class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):

return searched_distances_to_count


def GetHashIdsToPixelHashes( self, hash_ids_table_name: str ):

    return dict( self._Execute( f'SELECT {hash_ids_table_name}.hash_id, hash FROM {hash_ids_table_name} CROSS JOIN pixel_hash_map ON ( {hash_ids_table_name}.hash_id = pixel_hash_map.hash_id ) CROSS JOIN hashes ON ( pixel_hash_map.pixel_hash_id = hashes.hash_id );' ) )


def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:

    if content_type == HC.CONTENT_TYPE_HASH:
@@ -2340,28 +2340,14 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

def work_callable( args ):

    job_key = ClientThreading.JobKey()

    job_key.SetStatusText( 'Loading File History' + HC.UNICODE_ELLIPSIS )

    HG.client_controller.pub( 'message', job_key )

    num_steps = 7680

    file_history = HG.client_controller.Read( 'file_history', num_steps )

    return ( job_key, file_history )
    return 1


def publish_callable( result ):

    ( job_key, file_history ) = result

    job_key.Delete()

    frame = ClientGUITopLevelWindowsPanels.FrameThatTakesScrollablePanel( self, 'file history', frame_key = 'file_history_chart' )

    panel = ClientGUIScrolledPanelsReview.ReviewFileHistory( frame, file_history )
    panel = ClientGUIScrolledPanelsReview.ReviewFileHistory( frame )

    frame.SetPanel( panel )
@@ -3319,6 +3305,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

report_modes = ClientGUIMenus.GenerateMenu( debug )

ClientGUIMenus.AppendMenuCheckItem( report_modes, 'blurhash mode', 'Draw blurhashes instead of thumbnails.', HG.blurhash_mode, self._SwitchBoolean, 'blurhash_mode' )
ClientGUIMenus.AppendMenuCheckItem( report_modes, 'cache report mode', 'Have the image and thumb caches report their operation.', HG.cache_report_mode, self._SwitchBoolean, 'cache_report_mode' )
ClientGUIMenus.AppendMenuCheckItem( report_modes, 'callto report mode', 'Report whenever the thread pool is given a task.', HG.callto_report_mode, self._SwitchBoolean, 'callto_report_mode' )
ClientGUIMenus.AppendMenuCheckItem( report_modes, 'canvas tile borders mode', 'Draw tile borders.', HG.canvas_tile_outline_mode, self._SwitchBoolean, 'canvas_tile_outline_mode' )
@@ -6598,6 +6585,12 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

HG.autocomplete_delay_mode = not HG.autocomplete_delay_mode

elif name == 'blurhash_mode':

    HG.blurhash_mode = not HG.blurhash_mode

    self._controller.pub( 'reset_thumbnail_cache' )

elif name == 'cache_report_mode':

    HG.cache_report_mode = not HG.cache_report_mode
@@ -188,6 +188,8 @@ try:

max_num_files = max( self._max_num_files_deleted, max_num_files )

max_num_files = max( max_num_files, 1 )

self._y_value_axis.setRange( 0, max_num_files )

self._y_value_axis.applyNiceNumbers()
@@ -23,16 +23,37 @@ from hydrus.client.media import ClientMedia

def CopyHashesToClipboard( win: QW.QWidget, hash_type: str, medias: typing.Sequence[ ClientMedia.Media ] ):

    sha256_hashes = list( itertools.chain.from_iterable( ( media.GetHashes( ordered = True ) for media in medias ) ) )

    hex_it = True

    if hash_type == 'sha256':

    desired_hashes = []

    flat_media = ClientMedia.FlattenMedia( medias )

    sha256_hashes = [ media.GetHash() for media in flat_media ]

    if hash_type in ( 'pixel_hash', 'blurhash' ):

        file_info_managers = [ media.GetFileInfoManager() for media in flat_media ]

        if hash_type == 'pixel_hash':

            desired_hashes = [ fim.pixel_hash for fim in file_info_managers if fim.pixel_hash is not None ]

        elif hash_type == 'blurhash':

            desired_hashes = [ fim.blurhash for fim in file_info_managers if fim.blurhash is not None ]

            hex_it = False

    elif hash_type == 'sha256':

        desired_hashes = sha256_hashes

    else:

        num_hashes = len( sha256_hashes )

        num_remote_sha256_hashes = len( [ itertools.chain.from_iterable( ( media.GetHashes( discriminant = CC.DISCRIMINANT_NOT_LOCAL, ordered = True ) for media in medias ) ) ] )
        num_remote_medias = len( [ not media.GetLocationsManager().IsLocal() for media in flat_media ] )

        source_to_desired = HG.client_controller.Read( 'file_hashes', sha256_hashes, 'sha256', hash_type )
@@ -51,12 +72,12 @@ def CopyHashesToClipboard( win: QW.QWidget, hash_type: str, medias: typing.Seque

message = 'Unfortunately, {} of the {} hashes could not be found.'.format( HydrusData.ToHumanInt( num_missing ), hash_type )

if num_remote_sha256_hashes > 0:
if num_remote_medias > 0:

    message += ' {} of the files you wanted are not currently in this client. If they have never visited this client, the lookup is impossible.'.format( HydrusData.ToHumanInt( num_remote_sha256_hashes ) )
    message += ' {} of the files you wanted are not currently in this client. If they have never visited this client, the lookup is impossible.'.format( HydrusData.ToHumanInt( num_remote_medias ) )

if num_remote_sha256_hashes < num_hashes:
if num_remote_medias < num_hashes:
@@ -67,7 +88,14 @@ def CopyHashesToClipboard( win: QW.QWidget, hash_type: str, medias: typing.Seque

if len( desired_hashes ) > 0:

    text_lines = [ desired_hash.hex() for desired_hash in desired_hashes ]

    if hex_it:

        text_lines = [ desired_hash.hex() for desired_hash in desired_hashes ]

    else:

        text_lines = desired_hashes

    if HG.client_controller.new_options.GetBoolean( 'prefix_hash_when_copying' ):
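The new `hex_it` flag exists because sha256 and pixel hashes are `bytes` and need `.hex()`, while a blurhash is already a short printable string. A sketch of that branch in isolation:

```python
def hashes_to_text_lines( desired_hashes, hex_it ):

    # bytes hashes become hex strings; string blurhashes are copied as-is
    if hex_it:

        text_lines = [ desired_hash.hex() for desired_hash in desired_hashes ]

    else:

        text_lines = list( desired_hashes )

    return text_lines
```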
@@ -2399,27 +2399,179 @@ class ReviewFileEmbeddedMetadata( ClientGUIScrolledPanels.ReviewPanel ):

class ReviewFileHistory( ClientGUIScrolledPanels.ReviewPanel ):

def __init__( self, parent, file_history ):
def __init__( self, parent ):

ClientGUIScrolledPanels.ReviewPanel.__init__( self, parent )

file_history_chart = ClientGUICharts.FileHistory( self, file_history )
self._job_key = ClientThreading.JobKey()

file_history_chart.setMinimumSize( 720, 480 )
#

vbox = QP.VBoxLayout()
self._search_panel = QW.QWidget( self )

flip_deleted = QW.QCheckBox( 'show deleted', self )
# TODO: ok add 'num_steps' as a control, and a date range

flip_deleted.setChecked( True )
panel_vbox = QP.VBoxLayout()

flip_deleted.clicked.connect( file_history_chart.FlipDeletedVisible )
file_search_context = ClientSearch.FileSearchContext(
location_context = ClientLocation.LocationContext.STATICCreateSimple( CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY )
)

QP.AddToLayout( vbox, flip_deleted, CC.FLAGS_CENTER )
page_key = b'mr bones placeholder'

QP.AddToLayout( vbox, file_history_chart, CC.FLAGS_EXPAND_BOTH_WAYS )
self._tag_autocomplete = ClientGUIACDropdown.AutoCompleteDropdownTagsRead(
self._search_panel,
page_key,
file_search_context,
allow_all_known_files = False,
force_system_everything = True,
fixed_results_list_height = 8
)

self.widget().setLayout( vbox )
self._loading_text = ClientGUICommon.BetterStaticText( self._search_panel )

self._loading_text.setAlignment( QC.Qt.AlignVCenter | QC.Qt.AlignRight )

self._cancel_button = ClientGUICommon.BetterBitmapButton( self._search_panel, CC.global_pixmaps().stop, self._CancelCurrentSearch )
self._refresh_button = ClientGUICommon.BetterBitmapButton( self._search_panel, CC.global_pixmaps().refresh, self._RefreshSearch )

hbox = QP.HBoxLayout()

QP.AddToLayout( hbox, self._loading_text, CC.FLAGS_EXPAND_BOTH_WAYS )
QP.AddToLayout( hbox, self._cancel_button, CC.FLAGS_CENTER )
QP.AddToLayout( hbox, self._refresh_button, CC.FLAGS_CENTER )

QP.AddToLayout( panel_vbox, self._tag_autocomplete, CC.FLAGS_EXPAND_BOTH_WAYS )
QP.AddToLayout( panel_vbox, hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )

panel_vbox.addStretch( 1 )

self._search_panel.setLayout( panel_vbox )

#

self._file_history_chart_panel = QW.QWidget( self )

self._file_history_vbox = QP.VBoxLayout()

self._status_st = ClientGUICommon.BetterStaticText( self._file_history_chart_panel, label = f'loading{HC.UNICODE_ELLIPSIS}' )

self._flip_deleted = QW.QCheckBox( 'show deleted', self._file_history_chart_panel )

self._flip_deleted.setChecked( True )

self._file_history_chart = QW.QWidget( self._file_history_chart_panel )

QP.AddToLayout( self._file_history_vbox, self._status_st, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( self._file_history_vbox, self._flip_deleted, CC.FLAGS_CENTER )
QP.AddToLayout( self._file_history_vbox, self._file_history_chart, CC.FLAGS_EXPAND_BOTH_WAYS )

self._file_history_chart_panel.setLayout( self._file_history_vbox )

#

self._hbox = QP.HBoxLayout()

QP.AddToLayout( self._hbox, self._search_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( self._hbox, self._file_history_chart_panel, CC.FLAGS_EXPAND_BOTH_WAYS )

self.widget().setLayout( self._hbox )

self._tag_autocomplete.searchChanged.connect( self._RefreshSearch )

self._RefreshSearch()

def _CancelCurrentSearch( self ):

self._job_key.Cancel()

self._cancel_button.setEnabled( False )

def _RefreshSearch( self ):

def work_callable():

try:

file_history = HG.client_controller.Read( 'file_history', num_steps, file_search_context = file_search_context, job_key = job_key )

except HydrusExceptions.DBException as e:

if isinstance( e.db_e, HydrusExceptions.TooComplicatedM8 ):

return -1

raise

return file_history

def publish_callable( file_history ):

try:

if file_history == -1:

self._status_st.setText( 'Sorry, this domain is too complicated to calculate for now, try something simpler!' )

return

elif job_key.IsCancelled():

self._status_st.setText( 'Cancelled!' )

return

self._file_history_vbox.removeWidget( self._file_history_chart )

# TODO: presumably the thing here is to have SetValue on this widget so we can simply clear/set it rather than the mickey-mouse replace
self._file_history_chart = ClientGUICharts.FileHistory( self._file_history_chart_panel, file_history )

self._file_history_chart.setMinimumSize( 720, 480 )

self._flip_deleted.clicked.connect( self._file_history_chart.FlipDeletedVisible )

QP.AddToLayout( self._file_history_vbox, self._file_history_chart, CC.FLAGS_EXPAND_BOTH_WAYS )

self._status_st.setVisible( False )
self._flip_deleted.setVisible( True )

finally:

self._cancel_button.setEnabled( False )
self._refresh_button.setEnabled( True )

if not self._tag_autocomplete.IsSynchronised():

self._refresh_button.setEnabled( False )

return

file_search_context = self._tag_autocomplete.GetFileSearchContext()

num_steps = 7680

self._status_st.setVisible( True )

self._flip_deleted.setVisible( False )
self._file_history_chart.setVisible( False )

self._job_key.Cancel()

job_key = ClientThreading.JobKey()

self._job_key = job_key

self._update_job = ClientGUIAsync.AsyncQtJob( self, work_callable, publish_callable )

self._update_job.start()

class ReviewFileMaintenance( ClientGUIScrolledPanels.ReviewPanel ):
@@ -2505,7 +2657,7 @@ class ReviewFileMaintenance( ClientGUIScrolledPanels.ReviewPanel ):

self._action_selector = ClientGUICommon.BetterChoice( self._action_panel )

for job_type in ClientFiles.ALL_REGEN_JOBS_IN_PREFERRED_ORDER:
for job_type in ClientFiles.ALL_REGEN_JOBS_IN_HUMAN_ORDER:

self._action_selector.addItem( ClientFiles.regen_file_enum_to_str_lookup[ job_type ], job_type )

@@ -2847,10 +2999,6 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):

ClientGUIScrolledPanels.ReviewPanel.__init__( self, parent )

# refresh button
# search tab
# update function, reset/loading.../set

self._update_job = None
self._job_key = ClientThreading.JobKey()

@@ -2871,10 +3019,11 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):

#

self._search_panel = QW.QWidget( self )

self._files_panel = QW.QWidget( self._notebook )
self._views_panel = QW.QWidget( self._notebook )
self._duplicates_panel = QW.QWidget( self._notebook )
self._search_panel = QW.QWidget( self._notebook )

self._notebook.addTab( self._files_panel, 'files' )
self._notebook.addTab( self._views_panel, 'views' )
@@ -1558,6 +1558,18 @@ class CanvasPanel( Canvas ):

ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha1', 'Copy this file\'s SHA1 hash.', self._CopyHashToClipboard, 'sha1' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha512', 'Copy this file\'s SHA512 hash.', self._CopyHashToClipboard, 'sha512' )

file_info_manager = self._current_media.GetFileInfoManager()

if file_info_manager.blurhash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'blurhash ({file_info_manager.blurhash})', 'Copy this file\'s blurhash.', self._CopyHashToClipboard, 'blurhash' )

if file_info_manager.pixel_hash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'pixel ({file_info_manager.pixel_hash.hex()})', 'Copy this file\'s pixel hash.', self._CopyHashToClipboard, 'pixel_hash' )

ClientGUIMenus.AppendMenu( copy_menu, copy_hash_menu, 'hash' )

if advanced_mode:

@@ -3186,7 +3198,6 @@ class CanvasFilterDuplicates( CanvasWithHovers ):

HG.client_controller.pub( 'new_similar_files_potentials_search_numbers' )

ClientDuplicates.hashes_to_jpeg_quality = {} # clear the cache
ClientDuplicates.hashes_to_pixel_hashes = {} # clear the cache

CanvasWithHovers.CleanBeforeDestroy( self )

@@ -4564,6 +4575,18 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):

ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha1', 'Copy this file\'s SHA1 hash to your clipboard.', self._CopyHashToClipboard, 'sha1' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha512', 'Copy this file\'s SHA512 hash to your clipboard.', self._CopyHashToClipboard, 'sha512' )

file_info_manager = self._current_media.GetFileInfoManager()

if file_info_manager.blurhash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'blurhash ({file_info_manager.blurhash})', 'Copy this file\'s blurhash.', self._CopyHashToClipboard, 'blurhash' )

if file_info_manager.pixel_hash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'pixel ({file_info_manager.pixel_hash.hex()})', 'Copy this file\'s pixel hash.', self._CopyHashToClipboard, 'pixel_hash' )

ClientGUIMenus.AppendMenu( copy_menu, copy_hash_menu, 'hash' )

if advanced_mode:
@@ -1947,6 +1947,19 @@ class MediaPanel( CAC.ApplicationCommandProcessorMixin, ClientMedia.ListeningMed

def _ShowSelectionInNewDuplicateFilterPage( self ):

hashes = self._GetSelectedHashes( ordered = True )

activate_window = HG.client_controller.new_options.GetBoolean( 'activate_window_on_tag_search_page_activation' )

predicates = [ ClientSearch.Predicate( predicate_type = ClientSearch.PREDICATE_TYPE_SYSTEM_HASH, value = ( tuple( hashes ), 'sha256' ) ) ]

page_name = 'duplicates'

HG.client_controller.pub( 'new_page_duplicates', self._location_context, initial_predicates = predicates, page_name = page_name, activate_window = activate_window )

def _ShowSelectionInNewPage( self ):

hashes = self._GetSelectedHashes( ordered = True )

@@ -4093,6 +4106,7 @@ class MediaPanelThumbnails( MediaPanel ):

ClientGUIMenus.AppendMenuItem( regen_menu, 'file metadata', 'Regenerated the selected files\' metadata and thumbnails.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
ClientGUIMenus.AppendMenuItem( regen_menu, 'similar files data', 'Regenerated the selected files\' perceptual hashes.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_SIMILAR_FILES_METADATA )
ClientGUIMenus.AppendMenuItem( regen_menu, 'duplicate pixel data', 'Regenerated the selected files\' pixel hashes.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_PIXEL_HASH )
ClientGUIMenus.AppendMenuItem( regen_menu, 'blurhash', 'Regenerated the selected files\' blurhashes.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_BLURHASH )
ClientGUIMenus.AppendMenuItem( regen_menu, 'full presence check', 'Check file presence and try to fix.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE_TRY_URL_ELSE_REMOVE_RECORD )
ClientGUIMenus.AppendMenuItem( regen_menu, 'full integrity check', 'Check file integrity and try to fix.', self._RegenerateFileData, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA_TRY_URL_ELSE_REMOVE_RECORD )

@@ -4230,12 +4244,17 @@ class MediaPanelThumbnails( MediaPanel ):

similar_menu = ClientGUIMenus.GenerateMenu( open_menu )

ClientGUIMenus.AppendMenuItem( similar_menu, 'in a new duplicate filter page', 'Make a new duplicate filter page that searches for these files specifically.', self._ShowSelectionInNewDuplicateFilterPage )

ClientGUIMenus.AppendSeparator( similar_menu )

ClientGUIMenus.AppendMenuLabel( similar_menu, 'search for similar-looking files:' )
ClientGUIMenus.AppendMenuItem( similar_menu, 'exact match', 'Search the database for files that look precisely like those selected.', self._GetSimilarTo, CC.HAMMING_EXACT_MATCH )
ClientGUIMenus.AppendMenuItem( similar_menu, 'very similar', 'Search the database for files that look just like those selected.', self._GetSimilarTo, CC.HAMMING_VERY_SIMILAR )
ClientGUIMenus.AppendMenuItem( similar_menu, 'similar', 'Search the database for files that look generally like those selected.', self._GetSimilarTo, CC.HAMMING_SIMILAR )
ClientGUIMenus.AppendMenuItem( similar_menu, 'speculative', 'Search the database for files that probably look like those selected. This is sometimes useful for symbols with sharp edges or lines.', self._GetSimilarTo, CC.HAMMING_SPECULATIVE )

ClientGUIMenus.AppendMenu( open_menu, similar_menu, 'similar-looking files' )
ClientGUIMenus.AppendMenu( open_menu, similar_menu, 'similar files' )

ClientGUIMenus.AppendSeparator( open_menu )

@@ -4277,6 +4296,18 @@ class MediaPanelThumbnails( MediaPanel ):

ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha1', 'Copy the selected file\'s SHA1 hash to the clipboard.', self._CopyHashToClipboard, 'sha1' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha512', 'Copy the selected file\'s SHA512 hash to the clipboard.', self._CopyHashToClipboard, 'sha512' )

file_info_manager = focus_singleton.GetFileInfoManager()

if file_info_manager.blurhash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'blurhash ({file_info_manager.blurhash})', 'Copy this file\'s blurhash.', self._CopyHashToClipboard, 'blurhash' )

if file_info_manager.pixel_hash is not None:

ClientGUIMenus.AppendMenuItem( copy_hash_menu, f'pixel ({file_info_manager.pixel_hash.hex()})', 'Copy this file\'s pixel hash.', self._CopyHashToClipboard, 'pixel_hash' )

ClientGUIMenus.AppendMenu( copy_menu, copy_hash_menu, 'hash' )

@@ -4288,6 +4319,8 @@ class MediaPanelThumbnails( MediaPanel ):

ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'md5', 'Copy the selected files\' MD5 hashes to the clipboard.', self._CopyHashesToClipboard, 'md5' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha1', 'Copy the selected files\' SHA1 hashes to the clipboard.', self._CopyHashesToClipboard, 'sha1' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'sha512', 'Copy the selected files\' SHA512 hashes to the clipboard.', self._CopyHashesToClipboard, 'sha512' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'blurhash', 'Copy the selected files\' blurhashes to the clipboard.', self._CopyHashesToClipboard, 'blurhash' )
ClientGUIMenus.AppendMenuItem( copy_hash_menu, 'pixel', 'Copy the selected files\' pixel hashes to the clipboard.', self._CopyHashesToClipboard, 'pixel_hash' )

ClientGUIMenus.AppendMenu( copy_menu, copy_hash_menu, 'hashes' )
@@ -357,15 +357,16 @@ class FileImportJob( object ):

thumbnail_numpy = HydrusFileHandling.GenerateThumbnailNumPy(self._temp_path, target_resolution, mime, duration, num_frames, clip_rect = clip_rect, percentage_in = percentage_in)

# this guy handles almost all his own exceptions now, so no need for clever catching. if it fails, we are prob talking an I/O failure, which is not a 'thumbnail failed' error
self._thumbnail_bytes = HydrusImageHandling.GenerateThumbnailBytesNumPy( thumbnail_numpy )
self._thumbnail_bytes = HydrusImageHandling.GenerateThumbnailBytesFromNumPy( thumbnail_numpy )

try:

self._blurhash = HydrusImageHandling.GetImageBlurHashNumPy( thumbnail_numpy )
self._blurhash = HydrusImageHandling.GetBlurhashFromNumPy( thumbnail_numpy )

except:

pass

@@ -515,9 +516,10 @@ class FileImportJob( object ):

def HasICCProfile( self ) -> bool:

return self._has_icc_profile

def GetBlurhash( self ) -> str:

return self._blurhash

@@ -532,3 +534,4 @@ class FileImportJob( object ):

HG.client_controller.Write( 'content_updates', service_keys_to_content_updates )
@@ -588,10 +588,12 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

fsc = self._file_seed_cache

if presentation_import_options is None:

presentation_import_options = FileImportOptions.GetRealPresentationImportOptions( self._file_import_options, FileImportOptions.IMPORT_TYPE_LOUD )

fio = self._file_import_options

if presentation_import_options is None:

presentation_import_options = FileImportOptions.GetRealPresentationImportOptions( fio, FileImportOptions.IMPORT_TYPE_LOUD )

return fsc.GetPresentedHashes( presentation_import_options )

@@ -1414,10 +1414,12 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

fsc = self._file_seed_cache

if presentation_import_options is None:

presentation_import_options = FileImportOptions.GetRealPresentationImportOptions( self._file_import_options, FileImportOptions.IMPORT_TYPE_LOUD )

fio = self._file_import_options

if presentation_import_options is None:

presentation_import_options = FileImportOptions.GetRealPresentationImportOptions( fio, FileImportOptions.IMPORT_TYPE_LOUD )

return fsc.GetPresentedHashes( presentation_import_options )
@@ -39,7 +39,7 @@ def FilterServiceKeysToContentUpdates( full_service_keys_to_content_updates, has

return filtered_service_keys_to_content_updates

def FlattenMedia( media_list ):
def FlattenMedia( media_list ) -> typing.List[ "MediaSingleton" ]:

flat_media = []

@@ -1662,6 +1662,11 @@ class MediaSingleton( Media ):

return self._media_result.GetFileInfoManager().hash_id

def GetFileInfoManager( self ):

return self._media_result.GetFileInfoManager()

def GetFileViewingStatsManager( self ):

return self._media_result.GetFileViewingStatsManager()
@@ -61,7 +61,8 @@ class FileInfoManager( object ):

if mime is None:

mime = HC.APPLICATION_UNKNOWN

self.hash_id = hash_id
self.hash = hash
self.size = size

@@ -77,6 +78,7 @@ class FileInfoManager( object ):

self.has_human_readable_embedded_metadata = False
self.has_icc_profile = False
self.blurhash = None
self.pixel_hash = None

def Duplicate( self ):
@@ -1420,7 +1420,7 @@ class HydrusResourceClientAPIRestricted( HydrusResourceClientAPI ):

except:

raise Exception( 'Problem parsing {}!'.format( name_of_key ) )
raise HydrusExceptions.BadRequestException( 'Problem parsing {}!'.format( name_of_key ) )

@@ -2995,7 +2995,7 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

elif only_return_basic_information:

file_info_managers = HG.client_controller.Read( 'file_info_managers_from_ids', hash_ids, blurhash = include_blurhash )
file_info_managers = HG.client_controller.Read( 'file_info_managers_from_ids', hash_ids )

hashes_to_file_info_managers = { file_info_manager.hash : file_info_manager for file_info_manager in file_info_managers }

@@ -3022,8 +3022,9 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

}

if include_blurhash:

metadata_row['blurhash'] = file_info_manager.blurhash
metadata_row[ 'blurhash' ] = file_info_manager.blurhash

metadata.append( metadata_row )

@@ -3078,7 +3079,8 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

'num_frames' : file_info_manager.num_frames,
'num_words' : file_info_manager.num_words,
'has_audio' : file_info_manager.has_audio,
'blurhash' : file_info_manager.blurhash
'blurhash' : file_info_manager.blurhash,
'pixel_hash' : None if file_info_manager.pixel_hash is None else file_info_manager.pixel_hash.hex()
}

if file_info_manager.mime in HC.MIMES_WITH_THUMBNAILS:

@@ -3095,6 +3097,7 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

if include_notes:

metadata_row[ 'notes' ] = media_result.GetNotesManager().GetNamesToNotes()

locations_manager = media_result.GetLocationsManager()
@@ -25,15 +25,17 @@ def ReadSingleFileFromZip( path_to_zip, filename_to_extract ):

return reader.read()

def GetZipAsPath( path_to_zip, path_in_zip="" ):

return zipfile.Path( path_to_zip, at=path_in_zip )

def MimeFromOpenDocument( path ):

try:

mimetype_data = GetZipAsPath( path, 'mimetype' ).read_text()

filetype = HC.mime_enum_lookup.get(mimetype_data, None)

@@ -43,3 +45,5 @@ def MimeFromOpenDocument( path ):

except:

return None
@@ -100,8 +100,8 @@ options = {}

# Misc

NETWORK_VERSION = 20
SOFTWARE_VERSION = 544
CLIENT_API_VERSION = 52
SOFTWARE_VERSION = 545
CLIENT_API_VERSION = 53

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -34,6 +34,7 @@ class UnknownException( HydrusException ): pass

class CantRenderWithCVException( HydrusException ): pass
class DataMissing( HydrusException ): pass
class TooComplicatedM8( HydrusException ): pass

class DBException( HydrusException ):
@@ -74,9 +74,9 @@ headers_and_mime.extend( [

( ( ( 0, b'gimp xcf ' ), ), HC.APPLICATION_XCF ),
( ( ( 38, b'application/x-krita' ), ), HC.APPLICATION_KRITA ), # important this comes before zip files because this is also a zip file
( ( ( 42, b'application/x-krita' ), ), HC.APPLICATION_KRITA ), # https://gitlab.freedesktop.org/xdg/shared-mime-info/-/blob/master/data/freedesktop.org.xml.in#L2829
( ( ( 58, b'application/x-krita' ), ), HC.APPLICATION_KRITA ),
( ( ( 63, b'application/x-krita' ), ), HC.APPLICATION_KRITA ),
( ( ( 38, b'application/epub+zip' ), ), HC.APPLICATION_EPUB ),
( ( ( 58, b'application/x-krita' ), ), HC.APPLICATION_KRITA ),
( ( ( 63, b'application/x-krita' ), ), HC.APPLICATION_KRITA ),
( ( ( 38, b'application/epub+zip' ), ), HC.APPLICATION_EPUB ),
( ( ( 43, b'application/epub+zip' ), ), HC.APPLICATION_EPUB ),
( ( ( 0, b'PK\x03\x04' ), ), HC.APPLICATION_ZIP ),
( ( ( 0, b'PK\x05\x06' ), ), HC.APPLICATION_ZIP ),

@@ -121,9 +121,11 @@ def GenerateThumbnailBytes( path, target_resolution, mime, duration, num_frames,

thumbnail_numpy = GenerateThumbnailNumPy(path, target_resolution, mime, duration, num_frames, clip_rect, percentage_in )

return HydrusImageHandling.GenerateThumbnailBytesNumPy(thumbnail_numpy)
return HydrusImageHandling.GenerateThumbnailBytesFromNumPy( thumbnail_numpy )

def GenerateThumbnailNumPy( path, target_resolution, mime, duration, num_frames, clip_rect = None, percentage_in = 35 ):

if target_resolution == ( 0, 0 ):

target_resolution = ( 128, 128 )

@@ -202,6 +204,7 @@ def GenerateThumbnailNumPy( path, target_resolution, mime, duration, num_frames,

HydrusTemp.CleanUpTempPath( os_file_handle, temp_path )

elif mime == HC.APPLICATION_PROCREATE:

( os_file_handle, temp_path ) = HydrusTemp.GetTempPath()

@@ -623,19 +626,20 @@ def GetMime( path, ok_to_look_for_hydrus_updates = False ):

if it_passes:

if mime == HC.APPLICATION_ZIP:

opendoc_mime = HydrusArchiveHandling.MimeFromOpenDocument( path )

if opendoc_mime is not None:

return opendoc_mime

if HydrusProcreateHandling.ZipLooksLikeProcreate( path ):

return HC.APPLICATION_PROCREATE

return HC.APPLICATION_ZIP

if mime in ( HC.UNDETERMINED_WM, HC.UNDETERMINED_MP4 ):
@@ -70,10 +70,12 @@ network_report_mode = False

pubsub_report_mode = False
daemon_report_mode = False
mpv_report_mode = False

force_idle_mode = False
no_page_limit_mode = False
thumbnail_debug_mode = False
autocomplete_delay_mode = False
blurhash_mode = False

do_idle_shutdown_work = False
shutdown_complete = False
@@ -43,11 +43,9 @@ from hydrus.core import HydrusConstants as HC

from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusPaths
from hydrus.core import HydrusTemp
from hydrus.core import HydrusPSDHandling

from hydrus.external import blurhash
from hydrus.external import blurhash as external_blurhash

PIL_SRGB_PROFILE = PILImageCms.createProfile( 'sRGB' )

@@ -452,8 +450,9 @@ def GenerateThumbnailNumPyFromStaticImagePath( path, target_resolution, mime, cl

thumbnail_numpy_image = ResizeNumPyImage( numpy_image, target_resolution )

return thumbnail_numpy_image

pil_image = GeneratePILImage( path )

if clip_rect is not None:

@@ -467,19 +466,29 @@ def GenerateThumbnailNumPyFromStaticImagePath( path, target_resolution, mime, cl

return thumbnail_numpy_image

def GenerateThumbnailBytesNumPy( numpy_image ) -> bytes:
def GenerateThumbnailBytesFromNumPy( numpy_image ) -> bytes:

( im_height, im_width, depth ) = numpy_image.shape

numpy_image = StripOutAnyUselessAlphaChannel( numpy_image )

if depth == 4:
if len( numpy_image.shape ) == 2:

convert = cv2.COLOR_RGBA2BGRA
depth = 3

convert = cv2.COLOR_GRAY2RGB

else:

convert = cv2.COLOR_RGB2BGR
( im_height, im_width, depth ) = numpy_image.shape

numpy_image = StripOutAnyUselessAlphaChannel( numpy_image )

if depth == 4:

convert = cv2.COLOR_RGBA2BGRA

else:

convert = cv2.COLOR_RGB2BGR

numpy_image = cv2.cvtColor( numpy_image, convert )

@@ -512,7 +521,8 @@ def GenerateThumbnailBytesFromNumPy( numpy_image ) -> bytes:

raise HydrusExceptions.CantRenderWithCVException( 'Thumb failed to encode!' )

def GenerateThumbnailBytesPIL( pil_image: PILImage.Image ) -> bytes:
def GenerateThumbnailBytesFromPIL( pil_image: PILImage.Image ) -> bytes:

f = io.BytesIO()

@@ -533,6 +543,7 @@ def GenerateThumbnailBytesFromPIL( pil_image: PILImage.Image ) -> bytes:

return thumbnail_bytes

def GeneratePNGBytesNumPy( numpy_image ) -> bytes:

( im_height, im_width, depth ) = numpy_image.shape

@@ -813,14 +824,14 @@ def GetThumbnailResolutionAndClipRegion( image_resolution: typing.Tuple[ int, in

bounding_width = int( bounding_width * thumbnail_dpr )

if im_width is None:

im_width = bounding_width

if im_height is None:

im_height = bounding_height

# TODO SVG thumbs should always scale up to the bounding dimensions

if thumbnail_scale_type == THUMBNAIL_SCALE_DOWN_ONLY:

@@ -1145,7 +1156,7 @@ def RawOpenPILImage( path ) -> PILImage.Image:

return pil_image

def ResizeNumPyImage( numpy_image: numpy.array, target_resolution ) -> numpy.array:
def ResizeNumPyImage( numpy_image: numpy.array, target_resolution, forced_interpolation = None ) -> numpy.array:

( target_width, target_height ) = target_resolution
( image_width, image_height ) = GetResolutionNumPy( numpy_image )

@@ -1163,6 +1174,11 @@ def ResizeNumPyImage( numpy_image: numpy.array, target_resolution, forced_interp

interpolation = cv2.INTER_AREA

if forced_interpolation is not None:

interpolation = forced_interpolation

return cv2.resize( numpy_image, ( target_width, target_height ), interpolation = interpolation )

def RotateEXIFPILImage( pil_image: PILImage.Image )-> PILImage.Image:

@@ -1246,6 +1262,58 @@ def StripOutAnyUselessAlphaChannel( numpy_image: numpy.array ) -> numpy.array:

return numpy_image

def GetImageBlurHashNumPy( numpy_image, components_x = 4, components_y = 4 ):

return blurhash.blurhash_encode( numpy_image, components_x, components_y )

def GetBlurhashFromNumPy( numpy_image: numpy.array ) -> str:

media_height = numpy_image.shape[0]
media_width = numpy_image.shape[1]

if media_width == 0 or media_height == 0:

return ''

ratio = media_width / media_height

if ratio > 4 / 3:

components_x = 5
components_y = 3

elif ratio < 3 / 4:

components_x = 3
components_y = 5

else:

components_x = 4
components_y = 4

CUTOFF_DIMENSION = 100

if numpy_image.shape[0] > CUTOFF_DIMENSION or numpy_image.shape[1] > CUTOFF_DIMENSION:

numpy_image = ResizeNumPyImage( numpy_image, ( CUTOFF_DIMENSION, CUTOFF_DIMENSION ), forced_interpolation = cv2.INTER_LINEAR )

return external_blurhash.blurhash_encode( numpy_image, components_x, components_y )
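As context for the `GetBlurhashFromNumPy` hunk above: hydrus picks the blurhash grid shape from the image's aspect ratio, so landscape images get wide grids and portrait images get tall ones, matching the changelog's "15 or 16 cells". This is a hedged standalone sketch of that branch logic; the helper name is illustrative, but the 4/3 and 3/4 cutoffs and the component counts come straight from the diff.

```python
def choose_blurhash_components( width: int, height: int ):
    
    # hydrus returns '' for a degenerate image; we signal that with None here
    if width == 0 or height == 0:
        
        return None
        
    
    ratio = width / height
    
    if ratio > 4 / 3:
        
        return ( 5, 3 ) # landscape: more x components (15 cells)
        
    elif ratio < 3 / 4:
        
        return ( 3, 5 ) # portrait: more y components (15 cells)
        
    
    return ( 4, 4 ) # roughly square (16 cells)
```

A 1920x1080 image gives ( 5, 3 ), a 1080x1920 image gives ( 3, 5 ), and a square image gives ( 4, 4 ).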
def GetNumpyFromBlurhash( blurhash, width, height ) -> numpy.array:

# this thing is super slow, they recommend even in the documentation to render small and scale up
if width > 32 or height > 32:

numpy_image = numpy.array( external_blurhash.blurhash_decode( blurhash, 32, 32 ), dtype = 'uint8' )

numpy_image = ResizeNumPyImage( numpy_image, ( width, height ) )

else:

numpy_image = numpy.array( external_blurhash.blurhash_decode( blurhash, width, height ), dtype = 'uint8' )

return numpy_image
@@ -48,7 +48,11 @@ def GenerateThumbnailNumPyFromPSDPath( path: str, target_resolution: typing.Tupl

thumbnail_pil_image = pil_image.resize( target_resolution, PILImage.LANCZOS )

return HydrusImageHandling.GenerateNumPyImageFromPILImage(thumbnail_pil_image)
numpy_image = HydrusImageHandling.GenerateNumPyImageFromPILImage(thumbnail_pil_image)

numpy_image = HydrusImageHandling.DequantizeNumPyImage( numpy_image )

return numpy_image

def GetPSDResolution( path: str ):
@@ -879,7 +879,7 @@ class HydrusResource( Resource ):
        
        HydrusData.DebugPrint( failure.getTraceback() )
        
-        error_summary = f'The "{self._service.GetName()}" encountered an error it could not handle!\n\nHere is a dump of what happened, which will also be written to your log. If it persists, please forward it to hydrus.admin@gmail.com:\n\n' + failure.getTraceback()
+        error_summary = f'The "{self._service.GetName()}" encountered an error it could not handle!\n\nHere is a full traceback of what happened. If you are using the hydrus client, it will be saved to your log. Please forward it to hydrus.admin@gmail.com:\n\n' + failure.getTraceback()
        
        # TODO: maybe pull the cbor stuff down to hydrus core here and respond with Dumps( blah, requested_mime ) instead
@@ -1269,16 +1269,18 @@ class ResponseContext( object ):
        if cookies is None:
            
            cookies = []
            
-        if max_age is None:
-            
-            if body is not None:
-                
-                max_age = 4
-                
-            
+        if max_age is None:
+            
+            if body is not None:
+                
+                max_age = 4
+                
+            elif path is not None:
+                
+                max_age = 86400 * 365
+                
+            
        
        self._status_code = status_code
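The `max_age` defaulting above encodes a simple caching policy: dynamic in-memory bodies stay cacheable for only a few seconds, while file-path responses (hydrus files are content-addressed and effectively immutable) can be cached for a year. A standalone sketch of that decision, with a hypothetical helper name:

```python
def default_max_age( body, path ):
    
    # dynamic responses go stale fast, so cache them only briefly;
    # files served from disk are immutable, so cache them for a year
    if body is not None:
        
        return 4
        
    elif path is not None:
        
        return 86400 * 365
        
    
    # neither a body nor a path: leave max_age unset
    return None
```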
@@ -91,7 +91,7 @@ def blurhash_components(blurhash):
    Decodes and returns the number of x and y components in the given blurhash.
    """
    if len(blurhash) < 6:
-        raise ValueError("BlurHash must be at least 6 characters long.")
+        raise ValueError("Blurhash must be at least 6 characters long.")
    
    # Decode metadata
    size_info = base83_decode(blurhash[0])
@@ -116,7 +116,7 @@ def blurhash_decode(blurhash, width, height, punch = 1.0, linear = False):
    basically looks the same anyways.
    """
    if len(blurhash) < 6:
-        raise ValueError("BlurHash must be at least 6 characters long.")
+        raise ValueError("Blurhash must be at least 6 characters long.")
    
    # Decode metadata
    size_info = base83_decode(blurhash[0])
@@ -128,7 +128,7 @@ def blurhash_decode(blurhash, width, height, punch = 1.0, linear = False):
    
    # Make sure we at least have the right number of characters
    if len(blurhash) != 4 + 2 * size_x * size_y:
-        raise ValueError("Invalid BlurHash length.")
+        raise ValueError("Invalid Blurhash length.")
    
    # Decode DC component
    dc_value = base83_decode(blurhash[2:6])
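The length check and the metadata decode around it are plain base83 arithmetic: the first character packs both component counts, and a valid hash is exactly four header characters plus two per component. A minimal standalone sketch using the base83 alphabet from the blurhash spec (`decode_blurhash_size` is a hypothetical name):

```python
# the standard base83 alphabet from the blurhash specification
BASE83_ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz#$%*+,-.:;=?@[]^_{|}~'

def decode_blurhash_size( blurhash: str ):
    
    # the first character is a single base83 digit holding both component counts
    size_info = BASE83_ALPHABET.index( blurhash[0] )
    
    size_x = ( size_info % 9 ) + 1
    size_y = ( size_info // 9 ) + 1
    
    # 1 size char + 1 quantisation char + 4 DC chars - 2, i.e. 4 header chars,
    # then 2 chars per component (the DC component counts as one of them)
    expected_length = 4 + 2 * size_x * size_y
    
    return ( size_x, size_y, expected_length )
```

For the well-known example hash `'LEHV6nWB2yk8pyo0adR*.7kCMdnj'`, `'L'` decodes to 21, giving a 4x3 component grid and an expected length of 28 characters, which matches the string.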
@@ -4708,6 +4708,9 @@ class TestClientAPI( unittest.TestCase ):
            file_info_manager.has_exif = True
            file_info_manager.has_icc_profile = True
            
+            file_info_manager.blurhash = 'UBECh1xtFg-X-qxvxZ$*4mD%n3s*M_I9IVNG'
+            file_info_manager.pixel_hash = os.urandom( 32 )
+            
            file_info_managers.append( file_info_manager )
            
            service_keys_to_statuses_to_tags = { CC.DEFAULT_LOCAL_TAG_SERVICE_KEY : { HC.CONTENT_STATUS_CURRENT : [ 'blue_eyes', 'blonde_hair' ], HC.CONTENT_STATUS_PENDING : [ 'bodysuit' ] } }
@@ -4780,11 +4783,10 @@ class TestClientAPI( unittest.TestCase ):
        detailed_known_urls_metadata = []
        with_notes_metadata = []
        only_return_basic_information_metadata = []
+        only_return_basic_information_metadata_but_blurhash_too = []
        
        services_manager = HG.client_controller.services_manager
        
        service_keys_to_names = {}
        
        for media_result in media_results:
            
            file_info_manager = media_result.GetFileInfoManager()
@@ -4807,6 +4809,10 @@ class TestClientAPI( unittest.TestCase ):
            
            only_return_basic_information_metadata.append( dict( metadata_row ) )
            
+            metadata_row[ 'blurhash' ] = file_info_manager.blurhash
+            
+            only_return_basic_information_metadata_but_blurhash_too.append( dict( metadata_row ) )
+            
            if file_info_manager.mime in HC.MIMES_WITH_THUMBNAILS:
                
                bounding_dimensions = HG.test_controller.options[ 'thumbnail_dimensions' ]
|
|||
'has_exif' : True,
|
||||
'has_human_readable_embedded_metadata' : False,
|
||||
'has_icc_profile' : True,
|
||||
'known_urls' : list( sorted_urls )
|
||||
'known_urls' : list( sorted_urls ),
|
||||
'pixel_hash' : file_info_manager.pixel_hash.hex()
|
||||
} )
|
||||
|
||||
locations_manager = media_result.GetLocationsManager()
|
||||
|
@@ -4937,6 +4944,7 @@ class TestClientAPI( unittest.TestCase ):
        expected_detailed_known_urls_metadata_result = { 'metadata' : detailed_known_urls_metadata, 'services' : GetExampleServicesDict() }
        expected_notes_metadata_result = { 'metadata' : with_notes_metadata, 'services' : GetExampleServicesDict() }
        expected_only_return_basic_information_result = { 'metadata' : only_return_basic_information_metadata, 'services' : GetExampleServicesDict() }
+        expected_only_return_basic_information_but_blurhash_too_result = { 'metadata' : only_return_basic_information_metadata_but_blurhash_too, 'services' : GetExampleServicesDict() }
        
        HG.test_controller.SetRead( 'hash_ids_to_hashes', file_ids_to_hashes )
        HG.test_controller.SetRead( 'media_results', media_results )
@@ -5012,6 +5020,26 @@ class TestClientAPI( unittest.TestCase ):
        
        self.assertEqual( d, expected_only_return_basic_information_result )
        
+        # basic metadata with blurhash
+        
+        HG.test_controller.SetRead( 'hash_ids_to_hashes', { k : v for ( k, v ) in file_ids_to_hashes.items() if k in [ 1, 2, 3 ] } )
+        
+        path = '/get_files/file_metadata?file_ids={}&only_return_basic_information=true&include_blurhash=true'.format( urllib.parse.quote( json.dumps( [ 1, 2, 3 ] ) ) )
+        
+        connection.request( 'GET', path, headers = headers )
+        
+        response = connection.getresponse()
+        
+        data = response.read()
+        
+        text = str( data, 'utf-8' )
+        
+        self.assertEqual( response.status, 200 )
+        
+        d = json.loads( text )
+        
+        self.assertEqual( d, expected_only_return_basic_information_but_blurhash_too_result )
+        
        # same but diff order
        
        expected_order = [ 3, 1, 2 ]
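The request path in the new test is built by JSON-encoding the id list and percent-encoding the result into the query string. The same construction in isolation, so the expected escaping is visible:

```python
import json
import urllib.parse

file_ids = [ 1, 2, 3 ]

# json.dumps gives '[1, 2, 3]'; quote() then escapes the brackets, commas and spaces
path = '/get_files/file_metadata?file_ids={}&only_return_basic_information=true&include_blurhash=true'.format( urllib.parse.quote( json.dumps( file_ids ) ) )
```

`urllib.parse.quote` leaves `/` unescaped by default, which is why the rest of the path survives intact.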
Binary file not shown. (before: 2.5 KiB)
Binary file not shown. (before: 2.6 KiB)