Version 499

closes #1221, closes #1219, closes #1222, closes #1223
This commit is contained in:
Hydrus Network Developer 2022-09-07 16:16:25 -05:00
parent bfd63bfe5f
commit 6152573676
98 changed files with 963 additions and 659 deletions

View File

@ -326,14 +326,14 @@ jobs:
uses: carlosperate/download-file-action@v1.0.3
id: download_mpv
with:
-file-url: 'https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20210228-git-d1be8bb.7z'
+file-url: 'https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-v3-20220829-git-211ce69.7z'
file-name: 'mpv-dev-x86_64.7z'
location: '.'
- name: Process mpv-dev
run: |
7z x ${{ steps.download_mpv.outputs.file-path }}
-move mpv-1.dll hydrus\
+move mpv-2.dll hydrus\
- name: Build Hydrus
run: |
@ -413,14 +413,14 @@ jobs:
uses: carlosperate/download-file-action@v1.0.3
id: download_mpv
with:
-file-url: 'https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20210228-git-d1be8bb.7z'
+file-url: 'https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-v3-20220829-git-211ce69.7z'
file-name: 'mpv-dev-x86_64.7z'
location: '.'
- name: Process mpv-dev
run: |
7z x ${{ steps.download_mpv.outputs.file-path }}
-move mpv-1.dll hydrus\
+move mpv-2.dll hydrus\
- name: Build Hydrus
run: |

View File

@ -1,3 +1,7 @@
---
title: About These Docs
---
# About These Docs
The Hydrus docs are built with [MkDocs](https://www.mkdocs.org/) using the [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/) theme. The .md files in the `docs` directory are converted into nice html in the `help` directory. This is done automatically in the built releases, but if you run from source, you will want to build your own.

View File

@ -1,7 +1,9 @@
---
-title: PTR access keys
+title: PTR Access Keys
---
# PTR access keys
The PTR is now run by users with more bandwidth than I had to give, so the bandwidth limits are gone! If you would like to talk with the new management, please check the [discord](https://discord.gg/wPHPCUZ).
A guide and schema for the new PTR is [here](PTR.md).
@ -66,4 +68,4 @@ Then you can check your PTR at any time under _services->review services_, under
A user kindly manages a store of update files and pre-processed empty client databases to get your synced quicker. This is generally recommended for advanced users or those following a guide, but if you are otherwise interested, please check it out:
[https://cuddlebear92.github.io/Quicksync/](https://cuddlebear92.github.io/Quicksync/)

View File

@ -1,5 +1,5 @@
---
-title: adding new downloaders
+title: Adding New Downloaders
---
# adding new downloaders
@ -18,4 +18,4 @@ You can get these pngs from anyone who has experience in the downloader system.
To 'add' the easy-import pngs to your client, hit _network->downloaders->import downloaders_. A little image-panel will appear onto which you can drag-and-drop these png files. The client will then decode and go through the png, looking for interesting new objects and automatically import and link them up without you having to do any more. Your only further input on your end is a 'does this look correct?' check right before the actual import, just to make sure there isn't some mistake or other glaring problem.
Objects imported this way will take precedence over existing functionality, so if one of your downloaders breaks due to a site change, importing a fixed png here will overwrite the broken entries and become the new default.

View File

@ -1,7 +1,8 @@
---
-title: general clever tricks
+title: General Clever Tricks
---
# general clever tricks
!!! note "this is non-comprehensive"
I am always changing and adding little things. The best way to learn is just to look around. If you think a shortcut should probably do something, try it out! If you can't find something, let me know and I'll try to add it!
@ -95,4 +96,4 @@ If the file is one you particularly care about, the easiest solution is to open
## setting a password { id="password" }
the client offers a very simple password system, enough to keep out noobs. You can set it at _database->set a password_. It will thereafter ask for the password every time you start the program, and will not open without it. However none of the database is encrypted, and someone with enough enthusiasm or a tool and access to your computer can still very easily see what files you have. The password is mainly to stop idle snoops checking your images if you are away from your machine.

View File

@ -1,7 +1,9 @@
---
-title: multiple local file services
+title: Multiple Local File Services
---
# multiple local file services
The client lets you store your files in different overlapping partitions. This can help management workflows and privacy.
## what's the problem? { id="the_problem" }

View File

@ -1,7 +1,9 @@
---
-title: tag parents
+title: Tag Parents
---
# tag parents
Tag parents let you automatically add a particular tag every time another tag is added. The relationship will also apply retroactively.
## what's the problem? { id="the_problem" }

View File

@ -1,7 +1,9 @@
---
-title: tag siblings
+title: Tag Siblings
---
# tag siblings
Tag siblings let you replace a bad tag with a better tag.
## what's the problem? { id="the_problem" }

View File

@ -2,7 +2,9 @@
title: recovering after disaster
---
-# you just had a database problem
+# Recovering After Disaster
+## you just had a database problem
I have helped quite a few users recover a mangled database from disk failure or accidental deletion. You have just had something similar and have been pointed here. This is a simple spiel on the next step that I, hydev, like to give people once we are done.

View File

@ -1,8 +1,49 @@
---
title: Changelog
---
# changelog
!!! note
This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).
## [Version 499](https://github.com/hydrusnetwork/hydrus/releases/tag/v499)
### mpv
* updated the mpv version for Windows. this is more complicated than it sounds and has been fraught with difficulty at times, so I do not try it often, but the situation seems to be much better now. today we are jumping forward about twelve months of mpv releases. I may be imagining it, but things seem a bit smoother. a variety of weird file support should be better--an old transparent apng that I know crashed older mpv no longer causes a crash--and there's some acceleration now for very new CPU chipsets. I've also insisted on precise seeking (rather than keyframe seeking, which some users may have defaulted to). mpv-1.dll is now mpv-2.dll
* I don't have an easy Linux testbed any more, so I would be interested in a Linux 'running from source' user trying out a similar update and letting me know how it goes. try getting the latest libmpv1 and then update python-mpv to 1.0.1 on pip. your 'mpv api version' in _help->about_ should now be 2.0. this new python-mpv seems to have several compatibility improvements, which is what has plagued us before here
* mpv on macOS is still a frustrating question mark, but if this works on Linux, it may open another door. who knows, maybe the new version doesn't crash instantly on load
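For Linux 'running from source' users, the update described above might look something like this. The `libmpv1` package name is a Debian/Ubuntu assumption; other distros name their libmpv package differently:

```shell
# Hypothetical update steps for a Linux 'running from source' install.
# 'libmpv1' is the Debian/Ubuntu package name; adjust for your distro.
sudo apt update && sudo apt install libmpv1

# python-mpv 1.0.1 is the version mentioned above.
python3 -m pip install --upgrade python-mpv==1.0.1

# After restarting the client, help->about should report 'mpv api version: 2.0'.
```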
### search change for potential duplicates
* this is subtle and complicated, so if you are a casual user of duplicates, don't worry about it. duplicates page = better now
* for those who are more invested in dupes, I have altered the main potential duplicate search query. when the filter prepares some potential dupes to compare, or you load up some random thumbs in the page, or simply when the duplicates processing page presents counts, this all now only tests kings. previously, it could compare any member of a duplicate group to any other, and it would nominate kings as group representatives, but this led to some odd situations where if you said 'must be pixel dupes', you could get two low quality pixel dupes offering their better king(s) up for actual comparison, giving you a comparison that was not a pixel dupe. same for the general searching of potentials, where if you search for 'bad quality', any bad quality file you set as a dupe but didn't delete could get matched (including in 'both match' mode), and offer a 'nicer' king as tribute that didn't have the tag. now, it only searches kings. kings match searches, and it is those kings that must match pixel dupe rules. this also means that kings will always be available on the current file domain, and no fallback king-nomination-from-filtered-members routine is needed any more
* the knock-on effect here is minimal, but in general all database work in the duplicate filter should be a little faster, and some of your numbers may be a few counts smaller, typically after discounting weird edge case split-up duplicate groups that aren't real/common enough to really worry about. if you use a waterfall of multiple local file services to process your files, you might see significantly smaller counts due to kings not always being in the same file domain as their bad members, so you may want to try 'all my files' or just see how it goes--might be far less confusing, now you are only given unambiguous kings. anyway, in general, I think no big differences here for most users except better precision in searching!
* but let me know how you get on IRL!
### misc
* thanks to a user's hard work, the default twitter downloader gets some upgrades this week: you can now download from twitter lists, a twitter user's likes, and twitter collections (which are curated lists of tweets). the downloaders still get a lot of 'ignored' results for text-only tweets, and you still have to be logged in to get nsfw, but this adds some neat tools to the toolbox
* thanks to a user, the Client API now reports brief caching information and should boost Hydrus Companion performance (issue #605)
* the simple shortcut list in the edit shortcut action dialog now no longer shows any duplicates (such as 'close media viewer' in the dupes window)
* added a new default reason for tag petitions, 'clearing mass-pasted junk'. 'not applicable' is now 'not applicable/incorrect'
* in the petition processing page, the content boxes now specifically say ADD or DELETE to reinforce what you are doing and to differentiate the two boxes when you have a pixel petition
* in the petition processing page, the content boxes now grow and shrink in height, up to a max of 20 rows, depending on how much stuff is in them. I _think_ I have pixel perfect heights here, so let me know if yours are wrong!
* the 'service info' rows in review services are now presented in nicer order
* updated the header/title formatting across the help documentation. when you search for a page title, it should now show up in results (e.g. you type 'running from source', you get that nicely at the top, not a confusing sub-header of that article). the section links are also all now capitalised
* misc refactoring
### bunch of fixes
* fixed a weird and possible crash-inducing scrolling bug in the tag list some users had in Qt6
* fixed a typo error in file lookup scripts from when I added multi-line support to the parsing system (issue #1221)
* fixed some bad labels in 'speed and memory' that talked about 'MB' when the widget allowed setting different units. also, I updated the 'video buffer' option on that page to a full 'bytes value' widget too (issue #1223)
* the 'bytes value' widget, where you can set '100 MB' and similar, now gives the 'unit' dropdown a little more minimum width. it was getting a little thin on some styles and not showing the full text in the dropdown menu (issue #1222)
* fixed a bug in similar-shape-search-tree-rebalancing maintenance in the rare case that the queue of branches in need of regeneration becomes out of sync with the main tree (issue #1219)
* fixed a bug in archive/delete filter where clicks that were making actions would start borked drag-and-drop panning states if you dragged before releasing the click. it would cause warped media movement if you then clicked on hover window greyspace
* fixed the 'this was a cloudflare problem' scanner for the new 1.2.64 version of cloudscraper
* updated the popupmanager's positioning update code to use a nicer event filter and gave its position calculation code a quick pass. it might fix some popup toaster position bugs, not sure
* fixed a weird menu creation bug involving a QStandardItem appearing in the menu actions
* fixed a similar weird QStandardItem bug in the media viewer canvas code
* fixed an error that could appear on force-emptied pages that receive sort signals
## [Version 498](https://github.com/hydrusnetwork/hydrus/releases/tag/v498)
_almost all the changes this week are only important to server admins and janitors. regular users can skip updating this week_
@ -335,19 +376,3 @@ _almost all the changes this week are only important to server admins and janito
* I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising
* also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed
* there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this
## [Version 488](https://github.com/hydrusnetwork/hydrus/releases/tag/v488)
### all misc this week
* the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!
* added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but makes no other action
* the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them
* simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash
* if you paste an URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL, maybe it needs an option, maybe this feels natural when it comes up
* the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules
* now like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click.
* did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now we have multiple local file services, and in planning for future maintenance in this area
* all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly
* may have fixed an issue with a very slow to boot client trying to politely wait on the thumbnail cache before it instantiates
* misc UI text rewording and layout flag fixes
* fixed some jank formatting on database migration help

View File

@ -1,4 +1,6 @@
---
title: Contact and Links
---
# contact and links
@ -29,4 +31,4 @@ That said:
* [email](mailto:hydrus.admin@gmail.com)
* [discord](https://discord.gg/wPHPCUZ)
* [patreon](https://www.patreon.com/hydrus_dev)
* [user-run repository and wiki (including download presets for several non-default boorus)](https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts)

View File

@ -1,7 +1,9 @@
---
-title: database migration
+title: Database Migration
---
# database migration
## the hydrus database { id="intro" }
A hydrus client consists of three components:

View File

@ -1,7 +1,9 @@
---
-title: Putting it All Together
+title: Putting It All Together
---
# Putting it all together
Now that you know what GUGs, URL Classes, and Parsers are, you should have some ideas of how URL Classes could steer what happens when the downloader is faced with an URL to process. Should a URL be imported as a media file, or should it be parsed? If so, how?
You may have noticed in the Edit GUG ui that it lists if a current URL Class matches the example URL output. If the GUG has no matching URL Class, it won't be listed in the main 'gallery selector' button's list--it'll be relegated to the 'non-functioning' page. Without a URL Class, the client doesn't know what to do with the output of that GUG. But if a URL Class does match, we can then hand the result over to a parser set at _network->downloader definitions->manage url class links_:

View File

@ -2,6 +2,8 @@
title: Gallery URL Generators
---
# Gallery URL Generators
Gallery URL Generators, or **GUGs**, are simple objects that take a simple string from the user, like:
* blue_eyes

View File

@ -1,3 +1,7 @@
---
title: Login Manager
---
# Login Manager
The system works, but this help was never done! Check the defaults for examples of how it works, sorry!

View File

@ -1,3 +1,7 @@
---
title: Parsers
---
# Parsers
In hydrus, a parser is an object that takes a single block of HTML or JSON data and returns many kinds of hydrus-level metadata.
@ -27,4 +31,4 @@ When you are making a parser, consider this checklist (you might want to copy/ha
* Is a source/post time available?
* Is a source URL available? Is it good quality, or does it often just point to an artist's base twitter profile? If you pull it from text or a tooltip, is it clipped for longer URLs?
[Taken a break? Now let's put it all together ---->](downloader_completion.md)

View File

@ -1,3 +1,7 @@
---
title: Content Parsers
---
# Content Parsers
So, we can now generate some strings from a document. Content Parsers will let us apply a single metadata type to those strings to inform hydrus what they are.
@ -67,4 +71,4 @@ This is a special content type--it tells the next highest stage of parsing that
![](images/edit_content_parser_panel_veto.png)
They will associate their name with the veto being raised, so it is useful to give these a decent descriptive name so you can see what might be going right or wrong during testing. If it is an appropriate and serious enough veto, it may also rise up to the user level and will be useful if they need to report you an error (like "After five pages of parsing, it gives 'veto: no next page link'").

View File

@ -1,3 +1,7 @@
---
title: API Example
---
# api example
Some sites offer API calls for their pages. Depending on complexity and quality of content, using these APIs may or may not be a good idea. Artstation has a good one--let's first review our URL Classes:
@ -42,4 +46,4 @@ This again uses python's datetime to decode the date, which Artstation presents
## summary { id="summary" }
APIs that are stable and free to access (e.g. do not require OAuth or other complicated login headers) can make parsing fantastic. They save bandwidth and CPU time, and they are typically easier to work with than HTML. Unfortunately, the boorus that do provide APIs often list their tags without namespace information, so I recommend you double-check you can get what you want before you get too deep into it. Some APIs also offer incomplete data, such as relative URLs (relative to the original URL!), which can be a pain to figure out in our system.

View File

@ -1,3 +1,7 @@
---
title: File Page Example
---
# file page example
Let's look at this page: [https://gelbooru.com/index.php?page=post&s=view&id=3837615](https://gelbooru.com/index.php?page=post&s=view&id=3837615).
@ -96,4 +100,4 @@ Phew--all that for a bit of Lara Croft! Thankfully, most sites use similar schem
![](images/downloader_post_example_final.png)
This is overall a decent parser. Some parts of it may fail when Gelbooru update to their next version, but that can be true of even very good parsers with multiple redundancy. For now, hydrus can use this to quickly and efficiently pull content from anything running Gelbooru 0.2.5, and the effort spent now can save millions of combined _right-click->save as_ and manual tag copies in future. If you make something like this and share it about, you'll be doing a good service for those who could never figure it out.

View File

@ -1,3 +1,7 @@
---
title: Gallery Page Example
---
# gallery page example
!!! caution
@ -39,4 +43,4 @@ Note that this finds two URLs. e621 apply the `rel="next"` to both the "2" link
## summary { id="summary" }
With those two rules, we are done. Gallery parsers are nice and simple.

View File

@ -1,3 +1,7 @@
---
title: Page Parsers
---
# Page Parsers
We can now produce individual rows of rich metadata. To arrange them all into a useful structure, we will use Page Parsers.
@ -56,4 +60,4 @@ When the client sees this in a downloader context, it will know where to download the
## subsidiary page parsers { id="subsidiary_page_parsers" }
Here be dragons. This was an attempt to make parsing more helpful in certain API situations, but it ended up ugly. I do not recommend you use it, as I will likely scratch the whole thing and replace it with something better one day. It basically splits the page up into pieces that can then be parsed by nested page parsers as separate objects, but the UI and workflow is hell. Afaik, the imageboard API parsers use it, but little/nothing else. If you are really interested, check out how those work and maybe duplicate to figure out your own imageboard parser and/or send me your thoughts on how to separate File URL/timestamp combos better.

View File

@ -1,5 +1,5 @@
---
-title: Sharing Downloaders
+title: Sharing
---
# Sharing Downloaders
@ -16,4 +16,4 @@ It isn't difficult. Essentially, you want to bundle enough objects to make one o
This all works on Example URLs and some domain guesswork, so make sure your url classes are good and the parsers have correct Example URLs as well. If they don't, they won't all link up neatly for the end user. If part of your downloader is on a different domain to the GUGs and Gallery URLs, then you'll have to add them manually. Just start with 'add gug' and see if it looks like enough.
Once you have the necessary and sufficient objects added, you can export to png. You'll get a similar 'does this look right?' summary as what the end-user will see, just to check you have everything in order and the domains all correct. If that is good, then make sure to give the png a sensible filename and embellish the title and description if you need to. You can then send/post that png wherever, and any regular user will be able to use your work.

View File

@ -1,3 +1,7 @@
---
title: URL Classes
---
# URL Classes
The fundamental connective tissue of the downloader system is the 'URL Class'. This object identifies and normalises URLs and links them to other components. Whenever the client handles a URL, it tries to match it to a URL Class to figure out what to do.

View File

@ -1,5 +1,5 @@
---
-title: duplicates
+title: Filtering Duplicates
---
# duplicates { id="intro" }

View File

@ -1,3 +1,7 @@
---
title: FAQ
---
# FAQ
## what is a repository? { id="repositories" }
@ -105,4 +109,4 @@ This is another long string of random hexadecimal that _identifies_ your account
The repositories do not work like conventional search engines; it takes a short but predictable while for changes to propagate to other users.
The client's searches only ever happen over its local cache of what is on the repository. Any changes you make will be delayed for others until their next update occurs. At the moment, the update period is 100,000 seconds, which is about 1 day and 4 hours.
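As a quick sanity check on that figure (just arithmetic, not something from the docs):

```shell
# 100,000 seconds expressed as whole days plus hours, rounded to the nearest hour.
total=100000
days=$(( total / 86400 ))          # 86,400 seconds per day -> 1
rem=$(( total % 86400 ))           # 13,600 seconds left over
hours=$(( (rem + 1800) / 3600 ))   # rounds to 4
echo "$days day(s) and about $hours hour(s)"
```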

View File

@ -1,5 +1,5 @@
---
-title: Overview for getting started
+title: Overview For Getting Started
---
# Overview for getting started
@ -25,4 +25,4 @@ It is also worth having a look at [siblings](advanced_siblings.md) for when you
Have a lot of very similar looking pictures because of one reason or another? Have a look at [duplicates](duplicates.md), Hydrus' duplicates finder and filtering tool.
## API
Hydrus has an API that lets external tools connect to it. See [API](client_api.md) for how to turn it on and a list of some of these tools.

View File

@ -1,5 +1,5 @@
---
-title: Importing and exporting
+title: Importing and Exporting
---
# Importing and exporting
@ -68,4 +68,4 @@ While you can import and export tags together with images sometimes you just don
Going to `tags -> migrate tags` you get a window that lets you deal with just tags. One of the options here is what's called a Hydrus Tag Archive, a file containing the hash <-> tag mappings for the files and tags matching the query.
![](images/hydrus_tag_archive.png)

View File

@ -1,7 +1,9 @@
---
-title: installing and updating
+title: Installing and Updating
---
# installing and updating
If any of this is confusing, a simpler guide is [here](https://github.com/Zweibach/text/blob/master/Hydrus/Hydrus%20Help%20Docs/00_tableOfContents.md), and some video guides are [here](https://github.com/CuddleBear92/Hydrus-guides)!
## downloading

View File

@ -1,6 +1,7 @@
---
-title: ratings
+title: Ratings
---
# getting started with ratings
The hydrus client supports two kinds of ratings: _like/dislike_ and _numerical_. Let's start with the simpler one:

View File

@ -1,5 +1,5 @@
---
-title: Searching and sorting
+title: Searching and Sorting
---
# Searching and sorting

View File

@ -1,5 +1,5 @@
---
-title: subscriptions
+title: Subscriptions
---
# subscriptions

View File

@ -1,5 +1,5 @@
---
-title: tags
+title: Tags
---
# getting started with tags

View File

@ -4,6 +4,7 @@ hide:
- navigation
- toc
---
# hydrus network - client and server
The hydrus network client is a desktop application written for Anonymous and other internet enthusiasts with large media collections. It organises your files into an internal database and browses them with tags instead of folders, a little like a booru on your desktop. Tags and files can be anonymously shared through custom servers that any user may run. Everything is free, nothing phones home, and the source code is included with the release. It is developed mostly for Windows, but builds for Linux and macOS are available (perhaps with some limitations, depending on your situation).

View File

@ -1,7 +1,9 @@
---
-title: introduction and statement of principles
+title: Introduction and Statement of Principles
---
# introduction and statement of principles
## on being anonymous { id="anonymous" }
Nearly all sites use the same pseudonymous username/password system, and nearly all of them have the same drama, sockpuppets, and egotistical mods. Censorship is routine. That works for many people, but not for me.
@ -44,4 +46,4 @@ These programs are free software. Everything I, hydrus dev, have made is under t
--8<-- "license.txt"
```
Do what the fuck you want to with my software, and if shit breaks, DEAL WITH IT.

View File

@ -2,6 +2,8 @@
title: IPFS
---
# IPFS
IPFS is a p2p protocol that makes it easy to share many sorts of data. The hydrus client can communicate with an IPFS daemon to send and receive files.
You can read more about IPFS from [their homepage](http://ipfs.io), or [this guide](https://medium.com/@ConsenSys/an-introduction-to-ipfs-9bba4860abd0) that explains its various rules in more detail.

View File

@ -1,5 +1,5 @@
---
-title: launch arguments
+title: Launch Arguments
---
# launch arguments

View File

@ -1,3 +1,6 @@
---
title: Local Booru
---
# local booru
@ -61,4 +64,4 @@ You can review all your shares on _services->review services_, under _local->boo
## future plans { id="future" }
This was a fun project, but it never advanced beyond a prototype. The future of this system is other people's nice applications plugging into the [Client API](client_api.md).

View File

@ -33,9 +33,46 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_499"><a href="#version_499">version 499</a></h3></li>
<ul>
<li>mpv:</li>
<li>updated the mpv version for Windows. this is more complicated than it sounds and has been fraught with difficulty at times, so I do not try it often, but the situation seems to be much better now. today we are updating about twelve months. I may be imagining it, but things seem a bit smoother. a variety of weird file support should be better--an old transparent apng that I know crashed older mpv no longer causes a crash--and there's some acceleration now for very new CPU chipsets. I've also insisted on precise seeking (rather than keyframe seeking, which some users may have defaulted to). mpv-1.dll is now mpv-2.dll</li>
<li>I don't have an easy Linux testbed any more, so I would be interested in a Linux 'running from source' user trying out a similar update and letting me know how it goes. try getting the latest libmpv1 and then update python-mpv to 1.0.1 on pip. your 'mpv api version' in _help->about_ should now be 2.0. this new python-mpv seems to have several compatibility improvements, which is what has plagued us before here</li>
<li>mpv on macOS is still a frustrating question mark, but if this works on Linux, it may open another door. who knows, maybe the new version doesn't crash instantly on load</li>
<li>.</li>
<li>search change for potential duplicates:</li>
<li>this is subtle and complicated, so if you are a casual user of duplicates, don't worry about it. duplicates page = better now</li>
<li>for those who are more invested in dupes, I have altered the main potential duplicate search query. when the filter prepares some potential dupes to compare, or you load up some random thumbs in the page, or simply when the duplicates processing page presents counts, this all now only tests kings. previously, it could compare any member of a duplicate group to any other, and it would nominate kings as group representatives, but this led to some odd situations where if you said 'must be pixel dupes', you could get two low quality pixel dupes offering their better king(s) up for actual comparison, giving you a comparison that was not a pixel dupe. same for the general searching of potentials, where if you search for 'bad quality', any bad quality file you set as a dupe but didn't delete could get matched (including in 'both match' mode), and offer a 'nicer' king as tribute that didn't have the tag. now, it only searches kings. kings match searches, and it is those kings that must match pixel dupe rules. this also means that kings will always be available on the current file domain, and no fallback king-nomination-from-filtered-members routine is needed any more</li>
<li>the knock-on effect here is minimal, but in general all database work in the duplicate filter should be a little faster, and some of your numbers may be a few counts smaller, typically after discounting weird edge case split-up duplicate groups that aren't real/common enough to really worry about. if you use a waterfall of multiple local file services to process your files, you might see significantly smaller counts due to kings not always being in the same file domain as their bad members, so you may want to try 'all my files' or just see how it goes--might be far less confusing, now you are only given unambiguous kings. anyway, in general, I think no big differences here for most users except better precision in searching!</li>
<li>but let me know how you get on IRL!</li>
<li>.</li>
<li>misc:</li>
<li>thanks to a user's hard work, the default twitter downloader gets some upgrades this week: you can now download from twitter lists, a twitter user's likes, and twitter collections (which are curated lists of tweets). the downloaders still get a lot of 'ignored' results for text-only tweets, and you still have to be logged in to get nsfw, but this adds some neat tools to the toolbox</li>
<li>thanks to a user, the Client API now reports brief caching information and should boost Hydrus Companion performance (issue #605)</li>
<li>the simple shortcut list in the edit shortcut action dialog now no longer shows any duplicates (such as 'close media viewer' in the dupes window)</li>
<li>added a new default reason for tag petitions, 'clearing mass-pasted junk'. 'not applicable' is now 'not applicable/incorrect'</li>
<li>in the petition processing page, the content boxes now specifically say ADD or DELETE to reinforce what you are doing and to differentiate the two boxes when you have a pixel petition</li>
<li>in the petition processing page, the content boxes now grow and shrink in height, up to a max of 20 rows, depending on how much stuff is in them. I _think_ I have pixel perfect heights here, so let me know if yours are wrong!</li>
<li>the 'service info' rows in review services are now presented in nicer order</li>
<li>updated the header/title formatting across the help documentation. when you search for a page title, it should now show up in results (e.g. you type 'running from source', you get that nicely at the top, not a confusing sub-header of that article). the section links are also all now capitalised</li>
<li>misc refactoring</li>
<li>.</li>
<li>bunch of fixes:</li>
<li>fixed a weird and possible crash-inducing scrolling bug in the tag list some users had in Qt6</li>
<li>fixed a typo error in file lookup scripts from when I added multi-line support to the parsing system (issue #1221)</li>
<li>fixed some bad labels in 'speed and memory' that talked about 'MB' when the widget allowed setting different units. also, I updated the 'video buffer' option on that page to a full 'bytes value' widget too (issue #1223)</li>
<li>the 'bytes value' widget, where you can set '100 MB' and similar, now gives the 'unit' dropdown a little more minimum width. it was getting a little thin on some styles and not showing the full text in the dropdown menu (issue #1222)</li>
<li>fixed a bug in similar-shape-search-tree-rebalancing maintenance in the rare case that the queue of branches in need of regeneration becomes out of sync with the main tree (issue #1219)</li>
<li>fixed a bug in archive/delete filter where clicks that were making actions would start borked drag-and-drop panning states if you dragged before releasing the click. it would cause warped media movement if you then clicked on hover window greyspace</li>
<li>fixed the 'this was a cloudflare problem' scanner for the new 1.2.64 version of cloudscraper</li>
<li>updated the popupmanager's positioning update code to use a nicer event filter and gave its position calculation code a quick pass. it might fix some popup toaster position bugs, not sure</li>
<li>fixed a weird menu creation bug involving a QStandardItem appearing in the menu actions</li>
<li>fixed a similar weird QStandardItem bug in the media viewer canvas code</li>
<li>fixed an error that could appear on force-emptied pages that receive sort signals</li>
</ul>
<li><h3 id="version_498"><a href="#version_498">version 498</a></h3></li>
<ul>
<li>_almost all the changes this week are only important to server admins and janitors. regular users can skip updating this week_</li>
<li><i>almost all the changes this week are only important to server admins and janitors. regular users can skip updating this week</i></li>
<li>overview:</li>
<li>the server has important database and network updates this week. if your server has a lot of content, it has to count it all up, so it will take a short while to update. the petition protocol has also changed, so older clients will not be able to fetch new servers' petitions without an error. I think newer clients will be able to fetch older servers' ones, but it may be iffy</li>
<li>I considered whether I should update the network protocol version number, which would (politely) force all users to update, but as this causes inconvenience every time I do it, and I expect to do more incremental updates here in coming weeks, and since this only affects admins and janitors, I decided to not. we are going to be in awkward flux for a little bit, so please make sure you update privileged clients and servers at roughly the same time</li>

View File

@ -1,6 +1,7 @@
---
title: Petition practices
title: Petition Practices
---
# Petition practices
This document exists to give a rough idea of what to do in regard to the PTR to avoid creating unnecessary work for the janitors.
@ -42,4 +43,4 @@ List of some bad parents to `character:` tags as an example:
## Translations
Translations should be siblinged to what the closest in-use romanised tag is if there's no proper translation. If the tag is ambiguous, such as `響` or `ヒビキ` which means `hibiki`, just sibling them to the ambiguous tag. The tag can then later on be deleted and replaced by a less ambiguous tag. On the other hand, `響(艦隊これくしょん)` straight up means `hibiki (kantai collection)` and can safely be siblinged to the proper `character:` tag.
Do the same for subjective tags. `魅惑のふともも` can be translated to `bewitching thighs`. `まったく、駆逐艦は最高だぜ!!` straight up translates to `Geez, destroyers are the best!!`, which does not contain much usable information for Hydrus currently. These can then either be siblinged down to an unsubjective tag (`thighs`) if there's objective information in the tag, deleted if purely subjective, or deleted and replaced if ambiguous.

View File

@ -1,3 +1,7 @@
---
title: Privacy
---
# privacy
!!! tldr "tl;dr"

View File

@ -1,7 +1,9 @@
---
title: reducing lag
title: Reducing Lag
---
# reducing lag
## hydrus is cpu and hdd hungry { id="intro" }
The hydrus client manages a lot of complicated data and gives you a lot of power over it. To add millions of files and tags to its database, and then to perform difficult searches over that information, it needs to use a lot of CPU time and hard drive time--sometimes in small laggy blips, and occasionally in big 100% CPU chunks. I don't put training wheels or limiters on the software either, so if you search for 300,000 files, the client will try to fetch that many.
@ -45,4 +47,4 @@ You can generate a profile by hitting _help->debug->profile mode_, which tells t
Turn on profile mode, do the thing that runs slow for you (importing a file, fetching some tags, whatever), and then check your database folder (most likely _install_dir/db_) for a new 'client profile - DATE.log' file. This file will be filled with several sets of tables with timing information. Please send that whole file to me, or if it is too large, cut what seems important. It should not contain any personal information, but feel free to look through it.
There are several ways to [contact me](contact.md).

View File

@ -1,7 +1,9 @@
---
title: running from source
title: Running From Source
---
# running from source
I write the client and server entirely in [python](https://python.org), which can run straight from source. It is not simple to get hydrus running this way, but if none of the built packages work for you (for instance, if you use a non-Ubuntu-compatible flavour of Linux), it may be the only way you can get the program to run. Also, if you have a general interest in exploring the code or wish to otherwise modify the program, you will obviously need to do this.
## a quick note about Linux flavours { id="linux_flavours" }
@ -88,7 +90,7 @@ MPV is optional and complicated, but it is great, so it is worth the time to fig
As well as the python wrapper, 'python-mpv' as in the requirements.txt, you also need the underlying library. This is _not_ mpv the program, but 'libmpv', often called 'libmpv1'.
For Windows, the dll builds are [here](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/), although getting the right version for the current wrapper can be difficult (you will get errors when you try to load video if it is not correct). Just put it in your hydrus base install directory. You can also just grab the 'mpv-1.dll' I bundle in my release. In my experience, [this](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20210228-git-d1be8bb.7z/download) works with python-mpv 0.5.2.
For Windows, the dll builds are [here](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/), although getting the right version for the current wrapper can be difficult (you will get errors when you try to load video if it is not correct). Just put it in your hydrus base install directory. You can also just grab the 'mpv-2.dll' I bundle in my release. In my experience, [this](https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20210228-git-d1be8bb.7z/download) works with python-mpv 0.5.2.
If you are on Linux, you can usually get 'libmpv1' with _apt_. You might have to adjust your python-mpv version (e.g. `pip3 install python-mpv==0.4.5`) to get it to work.

View File

@ -1,7 +1,9 @@
---
title: running your own server
title: Running Your Own Server
---
# running your own server
!!! note
**You do not need the server to do anything with hydrus! It is only for advanced users to do very specific jobs!** The server is also hacked-together and quite technical. It requires a fair amount of experience with the client and its concepts, and it does not operate on a timescale that works well on a LAN. Only try running your own server once you have a bit of experience synchronising with something like the PTR and you think, 'Hey, I know exactly what that does, and I would like one!'
@ -75,4 +77,4 @@ All of a server's files and options are stored in its accompanying .db file and
If you get to a point where you can no longer boot the repository, try running SQLite Studio and opening server.db. If the issue is simple--like manually changing the port number--you may be in luck. Send me an email if it is tricky.
Remember that everything is breaking all the time. Make regular backups, and you'll minimise your problems.

View File

@ -1,8 +1,10 @@
---
title: financial support
title: Financial Support
---
# can I contribute to hydrus development? { id="support" }
# Financial Support
## can I contribute to hydrus development? { id="support" }
I do not expect anything from anyone. I'm amazed and grateful that anyone wants to use my software and share tags with others. I enjoy the feedback and work, and I hope to keep putting completely free weekly releases out as long as there is more to do.
@ -10,4 +12,4 @@ That said, as I have developed the software, several users have kindly offered t
I find the tactics of most internet fundraising very distasteful, especially when they promise something they then fail to deliver. I much prefer the 'if you like me and would like to contribute, then please do, meanwhile I'll keep doing what I do' model. I support several 'put out regular free content' creators on Patreon in this way, and I get a lot out of it, even though I have no direct reward beyond the knowledge that I helped some people do something neat.
If you feel the same way about my work, I've set up a simple Patreon page [here](https://www.patreon.com/hydrus_dev). If you can help out, it is deeply appreciated.

View File

@ -1,5 +1,5 @@
---
title: running in wine
title: Running In Wine
---
# running a client or server in wine
@ -27,4 +27,4 @@ Installation process:
---
If you get the client running in Wine, please let me know how you get on!

View File

@ -334,7 +334,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
self._dictionary[ 'integers' ][ 'notebook_tab_alignment' ] = CC.DIRECTION_UP
self._dictionary[ 'integers' ][ 'video_buffer_size_mb' ] = 96
self._dictionary[ 'integers' ][ 'video_buffer_size' ] = 96 * 1024 * 1024
self._dictionary[ 'integers' ][ 'related_tags_search_1_duration_ms' ] = 250
self._dictionary[ 'integers' ][ 'related_tags_search_2_duration_ms' ] = 2000

View File

@ -2799,7 +2799,9 @@ class ParseNodeContentLink( HydrusSerialisable.SerialisableBase ):
def ParseURLs( self, job_key, parsing_text, referral_url ):
basic_urls = self._formula.Parse( {}, parsing_text )
collapse_newlines = True
basic_urls = self._formula.Parse( {}, parsing_text, collapse_newlines )
absolute_urls = [ urllib.parse.urljoin( referral_url, basic_url ) for basic_url in basic_urls ]
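The absolute-URL step above leans entirely on `urllib.parse.urljoin`; a minimal sketch of how the three common shapes of parsed URL resolve against a referral URL (the example URLs here are made up):

```python
import urllib.parse

referral_url = 'https://example.com/gallery/page-1'

# urls as a formula might emit them: absolute, root-relative, and relative
basic_urls = [
    'https://example.com/post/123',  # already absolute: passes through unchanged
    '/post/456',                     # root-relative: keeps only the scheme and host
    'post/789'                       # relative: resolved against the referral path
]

absolute_urls = [ urllib.parse.urljoin( referral_url, basic_url ) for basic_url in basic_urls ]

print( absolute_urls )
# ['https://example.com/post/123', 'https://example.com/post/456', 'https://example.com/gallery/post/789']
```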

View File

@ -486,7 +486,7 @@ class RasterContainerVideo( RasterContainer ):
new_options = HG.client_controller.new_options
video_buffer_size_mb = new_options.GetInteger( 'video_buffer_size_mb' )
video_buffer_size = new_options.GetInteger( 'video_buffer_size' )
duration = self._media.GetDuration()
num_frames_in_video = self._media.GetNumFrames()
@ -515,7 +515,7 @@ class RasterContainerVideo( RasterContainer ):
self._average_frame_duration = duration / num_frames_in_video
frame_buffer_length = ( video_buffer_size_mb * 1024 * 1024 ) // ( x * y * 3 )
frame_buffer_length = video_buffer_size // ( x * y * 3 )
# if we can't buffer the whole vid, then don't have a clunky massive buffer
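The new buffer arithmetic works directly in bytes. A quick sketch with assumed numbers (a 1080p video at 3 bytes per RGB pixel, against the default 96MB buffer):

```python
video_buffer_size = 96 * 1024 * 1024  # the default option value, now stored in bytes

x, y = 1920, 1080             # frame dimensions for this example
bytes_per_frame = x * y * 3   # 24-bit RGB, as in the divisor above

# integer division: how many whole decoded frames fit in the buffer
frame_buffer_length = video_buffer_size // bytes_per_frame

print( frame_buffer_length )  # 16
```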

View File

@ -11518,6 +11518,72 @@ class DB( HydrusDB.HydrusDB ):
if version == 498:
try:
domain_manager = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
domain_manager.Initialise()
#
domain_manager.RenameGUG( 'twitter syndication profile lookup (limited)', 'twitter syndication profile lookup' )
domain_manager.RenameGUG( 'twitter syndication profile lookup (limited) (with replies)', 'twitter syndication profile lookup (with replies)' )
domain_manager.OverwriteDefaultGUGs( ( 'twitter syndication list lookup', 'twitter syndication likes lookup', 'twitter syndication collection lookup' ) )
domain_manager.OverwriteDefaultParsers( ( 'twitter syndication api profile parser', 'twitter syndication api tweet parser' ) )
domain_manager.OverwriteDefaultURLClasses( (
'twitter list',
'twitter syndication api collection',
'twitter syndication api likes (user_id)',
'twitter syndication api likes',
'twitter syndication api list (list_id)',
'twitter syndication api list (screen_name and slug)',
'twitter syndication api list (user_id and slug)',
'twitter syndication api profile (user_id)',
'twitter syndication api profile',
'twitter syndication api tweet',
'twitter tweet'
) )
#
domain_manager.TryToLinkURLClassesAndParsers()
#
self.modules_serialisable.SetJSONDump( domain_manager )
except Exception as e:
HydrusData.PrintException( e )
message = 'Trying to update some downloader objects failed! Please let hydrus dev know!'
self.pub_initial_message( message )
try:
new_options = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_CLIENT_OPTIONS )
new_options.SetInteger( 'video_buffer_size', new_options.GetInteger( 'video_buffer_size_mb' ) * 1024 * 1024 )
self.modules_serialisable.SetJSONDump( new_options )
except Exception as e:
HydrusData.PrintException( e )
message = 'Trying to update the video buffer option value failed! Please let hydrus dev know!'
self.pub_initial_message( message )
self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
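The options half of this update step is a one-off unit conversion. Sketched in plain Python (a dict stands in for the serialised options object):

```python
# the old option stored megabytes; the new one stores raw bytes
old_options = { 'video_buffer_size_mb': 96 }

new_options = dict( old_options )
new_options[ 'video_buffer_size' ] = old_options[ 'video_buffer_size_mb' ] * 1024 * 1024

print( new_options[ 'video_buffer_size' ] )  # 100663296
```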

View File

@ -416,7 +416,7 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
if king_hash_id not in preferred_hash_ids:
king_hash_id = random.sample( preferred_hash_ids, 1 )[0]
king_hash_id = random.choice( list( preferred_hash_ids ) )
return king_hash_id
@ -425,7 +425,7 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
if king_hash_id not in media_hash_ids:
king_hash_id = random.sample( media_hash_ids, 1 )[0]
king_hash_id = random.choice( list( media_hash_ids ) )
return king_hash_id
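The `random.sample` to `random.choice` swap matters because these hash id collections are sets: `random.sample`'s automatic set-to-list conversion was deprecated in Python 3.9 and removed in 3.11, while `random.choice` requires a sequence, hence the explicit `list()`. A minimal sketch with made-up ids:

```python
import random

media_hash_ids = { 5, 11, 42 }  # hash ids arrive as a set in the calling code

# random.sample( media_hash_ids, 1 )[0] raises TypeError on Python 3.11+
king_hash_id = random.choice( list( media_hash_ids ) )

print( king_hash_id in media_hash_ids )  # True
```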
@ -926,8 +926,8 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
def DuplicatesGetPotentialDuplicatePairsTableJoinOnEverythingSearchResults( self, db_location_context: ClientDBFilesStorage.DBLocationContext, pixel_dupes_preference: int, max_hamming_distance: int ):
tables = 'potential_duplicate_pairs, duplicate_file_members AS duplicate_file_members_smaller, duplicate_file_members AS duplicate_file_members_larger'
join_predicate = 'smaller_media_id = duplicate_file_members_smaller.media_id AND larger_media_id = duplicate_file_members_larger.media_id AND distance <= {}'.format( max_hamming_distance )
tables = 'potential_duplicate_pairs, duplicate_files AS duplicate_files_smaller, duplicate_files AS duplicate_files_larger'
join_predicate = 'smaller_media_id = duplicate_files_smaller.media_id AND larger_media_id = duplicate_files_larger.media_id AND distance <= {}'.format( max_hamming_distance )
if not db_location_context.location_context.IsAllKnownFiles():
@ -935,12 +935,12 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
tables = '{}, {} AS current_files_smaller, {} AS current_files_larger'.format( tables, files_table_name, files_table_name )
join_predicate = '{} AND duplicate_file_members_smaller.hash_id = current_files_smaller.hash_id AND duplicate_file_members_larger.hash_id = current_files_larger.hash_id'.format( join_predicate )
join_predicate = '{} AND duplicate_files_smaller.king_hash_id = current_files_smaller.hash_id AND duplicate_files_larger.king_hash_id = current_files_larger.hash_id'.format( join_predicate )
if pixel_dupes_preference in ( CC.SIMILAR_FILES_PIXEL_DUPES_REQUIRED, CC.SIMILAR_FILES_PIXEL_DUPES_EXCLUDED ):
join_predicate_pixel_dupes = 'duplicate_file_members_smaller.hash_id = pixel_hash_map_smaller.hash_id AND duplicate_file_members_larger.hash_id = pixel_hash_map_larger.hash_id AND pixel_hash_map_smaller.pixel_hash_id = pixel_hash_map_larger.pixel_hash_id'
join_predicate_pixel_dupes = 'duplicate_files_smaller.king_hash_id = pixel_hash_map_smaller.hash_id AND duplicate_files_larger.king_hash_id = pixel_hash_map_larger.hash_id AND pixel_hash_map_smaller.pixel_hash_id = pixel_hash_map_larger.pixel_hash_id'
if pixel_dupes_preference == CC.SIMILAR_FILES_PIXEL_DUPES_REQUIRED:
@ -973,7 +973,7 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
files_table_name = db_location_context.GetSingleFilesTableName()
table_join = 'potential_duplicate_pairs, duplicate_file_members AS duplicate_file_members_smaller, {} AS current_files_smaller, duplicate_file_members AS duplicate_file_members_larger, {} AS current_files_larger ON ( smaller_media_id = duplicate_file_members_smaller.media_id AND duplicate_file_members_smaller.hash_id = current_files_smaller.hash_id AND larger_media_id = duplicate_file_members_larger.media_id AND duplicate_file_members_larger.hash_id = current_files_larger.hash_id )'.format( files_table_name, files_table_name )
table_join = 'potential_duplicate_pairs, duplicate_files AS duplicate_files_smaller, {} AS current_files_smaller, duplicate_files AS duplicate_files_larger, {} AS current_files_larger ON ( smaller_media_id = duplicate_files_smaller.media_id AND duplicate_files_smaller.king_hash_id = current_files_smaller.hash_id AND larger_media_id = duplicate_files_larger.media_id AND duplicate_files_larger.king_hash_id = current_files_larger.hash_id )'.format( files_table_name, files_table_name )
return table_join
@ -1030,15 +1030,15 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
# ████████████████████████████████████████████████████████████████████████
#
base_tables = 'potential_duplicate_pairs, duplicate_file_members AS duplicate_file_members_smaller, duplicate_file_members AS duplicate_file_members_larger'
base_tables = 'potential_duplicate_pairs, duplicate_files AS duplicate_files_smaller, duplicate_files AS duplicate_files_larger'
join_predicate_media_to_hashes = 'smaller_media_id = duplicate_file_members_smaller.media_id AND larger_media_id = duplicate_file_members_larger.media_id AND distance <= {}'.format( max_hamming_distance )
join_predicate_media_to_hashes = 'smaller_media_id = duplicate_files_smaller.media_id AND larger_media_id = duplicate_files_larger.media_id AND distance <= {}'.format( max_hamming_distance )
if both_files_match:
tables = '{}, {} AS results_smaller, {} AS results_larger'.format( base_tables, results_table_name, results_table_name )
join_predicate_hashes_to_allowed_results = 'duplicate_file_members_smaller.hash_id = results_smaller.hash_id AND duplicate_file_members_larger.hash_id = results_larger.hash_id'
join_predicate_hashes_to_allowed_results = 'duplicate_files_smaller.king_hash_id = results_smaller.hash_id AND duplicate_files_larger.king_hash_id = results_larger.hash_id'
else:
@ -1046,7 +1046,7 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
tables = '{}, {} AS results_table_for_this_query'.format( base_tables, results_table_name )
join_predicate_hashes_to_allowed_results = '( duplicate_file_members_smaller.hash_id = results_table_for_this_query.hash_id OR duplicate_file_members_larger.hash_id = results_table_for_this_query.hash_id )'
join_predicate_hashes_to_allowed_results = '( duplicate_files_smaller.king_hash_id = results_table_for_this_query.hash_id OR duplicate_files_larger.king_hash_id = results_table_for_this_query.hash_id )'
else:
@ -1054,9 +1054,9 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
tables = '{}, {} AS results_table_for_this_query, {} AS current_files_for_this_query'.format( base_tables, results_table_name, files_table_name )
join_predicate_smaller_matches = '( duplicate_file_members_smaller.hash_id = results_table_for_this_query.hash_id AND duplicate_file_members_larger.hash_id = current_files_for_this_query.hash_id )'
join_predicate_smaller_matches = '( duplicate_files_smaller.king_hash_id = results_table_for_this_query.hash_id AND duplicate_files_larger.king_hash_id = current_files_for_this_query.hash_id )'
join_predicate_larger_matches = '( duplicate_file_members_smaller.hash_id = current_files_for_this_query.hash_id AND duplicate_file_members_larger.hash_id = results_table_for_this_query.hash_id )'
join_predicate_larger_matches = '( duplicate_files_smaller.king_hash_id = current_files_for_this_query.hash_id AND duplicate_files_larger.king_hash_id = results_table_for_this_query.hash_id )'
join_predicate_hashes_to_allowed_results = '( {} OR {} )'.format( join_predicate_smaller_matches, join_predicate_larger_matches )
@ -1064,7 +1064,7 @@ class ClientDBFilesDuplicates( ClientDBModule.ClientDBModule ):
if pixel_dupes_preference in ( CC.SIMILAR_FILES_PIXEL_DUPES_REQUIRED, CC.SIMILAR_FILES_PIXEL_DUPES_EXCLUDED ):
join_predicate_pixel_dupes = 'duplicate_file_members_smaller.hash_id = pixel_hash_map_smaller.hash_id AND duplicate_file_members_larger.hash_id = pixel_hash_map_larger.hash_id AND pixel_hash_map_smaller.pixel_hash_id = pixel_hash_map_larger.pixel_hash_id'
join_predicate_pixel_dupes = 'duplicate_files_smaller.king_hash_id = pixel_hash_map_smaller.hash_id AND duplicate_files_larger.king_hash_id = pixel_hash_map_larger.hash_id AND pixel_hash_map_smaller.pixel_hash_id = pixel_hash_map_larger.pixel_hash_id'
if pixel_dupes_preference == CC.SIMILAR_FILES_PIXEL_DUPES_REQUIRED:
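A toy version of the kings-only join, using a heavily simplified schema and made-up table contents, shows the change in behaviour: the potential pair is now reported via each group's `king_hash_id`, and both kings must sit in the current file domain:

```python
import sqlite3

con = sqlite3.connect( ':memory:' )
c = con.cursor()

# simplified tables mirroring the names in the diff above
c.execute( 'CREATE TABLE potential_duplicate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, distance INTEGER );' )
c.execute( 'CREATE TABLE duplicate_files ( media_id INTEGER, king_hash_id INTEGER );' )
c.execute( 'CREATE TABLE current_files ( hash_id INTEGER );' )

# two duplicate groups with kings 101 and 201; both kings are in the file domain
c.executemany( 'INSERT INTO duplicate_files VALUES ( ?, ? );', [ ( 1, 101 ), ( 2, 201 ) ] )
c.execute( 'INSERT INTO potential_duplicate_pairs VALUES ( 1, 2, 4 );' )
c.executemany( 'INSERT INTO current_files VALUES ( ? );', [ ( 101, ), ( 201, ) ] )

rows = c.execute(
    'SELECT duplicate_files_smaller.king_hash_id, duplicate_files_larger.king_hash_id '
    'FROM potential_duplicate_pairs, duplicate_files AS duplicate_files_smaller, duplicate_files AS duplicate_files_larger, '
    'current_files AS current_files_smaller, current_files AS current_files_larger '
    'WHERE smaller_media_id = duplicate_files_smaller.media_id '
    'AND larger_media_id = duplicate_files_larger.media_id '
    'AND distance <= 8 '
    'AND duplicate_files_smaller.king_hash_id = current_files_smaller.hash_id '
    'AND duplicate_files_larger.king_hash_id = current_files_larger.hash_id;'
).fetchall()

print( rows )  # [(101, 201)] -- the kings, not arbitrary group members
```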

View File

@ -528,7 +528,18 @@ class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):
with self._MakeTemporaryIntegerTable( rebalance_perceptual_hash_ids, 'phash_id' ) as temp_table_name:
# temp perceptual hashes to tree
( biggest_perceptual_hash_id, ) = self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + outer_population DESC;'.format( temp_table_name ) ).fetchone()
result = self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + outer_population DESC;'.format( temp_table_name ) ).fetchone()
if result is None:
self._Execute( 'DELETE FROM shape_maintenance_branch_regen;' )
return
else:
( biggest_perceptual_hash_id, ) = result
self._RegenerateBranch( job_key, biggest_perceptual_hash_id )
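The fix above is the standard `fetchone()` guard: sqlite returns `None` (not an empty tuple) when a query yields no rows, so unpacking the result directly crashes. A minimal sketch of the pattern against an empty queue table:

```python
import sqlite3

con = sqlite3.connect( ':memory:' )
c = con.cursor()
c.execute( 'CREATE TABLE shape_maintenance_branch_regen ( phash_id INTEGER );' )

result = c.execute( 'SELECT phash_id FROM shape_maintenance_branch_regen ORDER BY phash_id DESC;' ).fetchone()

if result is None:
    # the queue was out of sync with the tree--nothing to regenerate, so clear it
    print( 'queue cleared' )
else:
    ( biggest_phash_id, ) = result
    print( biggest_phash_id )
```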

View File

@ -174,7 +174,11 @@ def AppendSeparator( menu ):
last_item = menu.actions()[-1]
if not last_item.isSeparator():
# got this once, who knows what happened, so we test for QAction now
# 'PySide2.QtGui.QStandardItem' object has no attribute 'isSeparator'
last_item_is_separator = isinstance( last_item, QW.QAction ) and last_item.isSeparator()
if not last_item_is_separator:
menu.addSeparator()

View File

@ -588,6 +588,7 @@ class PopupMessage( PopupWindow ):
class PopupMessageManager( QW.QWidget ):
def __init__( self, parent ):
@ -625,9 +626,7 @@ class PopupMessageManager( QW.QWidget ):
self._pending_job_keys = []
self._gui_event_filter = QP.WidgetEventFilter( parent )
self._gui_event_filter.EVT_SIZE( self.EventParentMovedOrResized )
self._gui_event_filter.EVT_MOVE( self.EventParentMovedOrResized )
parent.installEventFilter( self )
HG.client_controller.sub( self, 'AddMessage', 'message' )
@ -781,7 +780,7 @@ class PopupMessageManager( QW.QWidget ):
try:
gui_frame = self.parentWidget()
gui_frame = HG.client_controller.gui
gui_is_hidden = not gui_frame.isVisible()
@ -789,8 +788,6 @@ class PopupMessageManager( QW.QWidget ):
current_focus_tlw = QW.QApplication.activeWindow()
self_is_active = current_focus_tlw == self
main_gui_or_child_window_is_active = ClientGUIFunctions.TLWOrChildIsActive( gui_frame )
num_messages_displayed = self._message_vbox.count()
@ -799,6 +796,18 @@ class PopupMessageManager( QW.QWidget ):
if there_is_stuff_to_display:
# Unhiding tends to raise the main gui tlw in some window managers, which is annoying if a media viewer window has focus
show_is_not_annoying = main_gui_or_child_window_is_active or self._DisplayingError()
ok_to_show = show_is_not_annoying and not going_to_bug_out_at_hide_or_show
if ok_to_show:
self.show()
#
parent_size = gui_frame.size()
my_size = self.size()
@ -816,16 +825,6 @@ class PopupMessageManager( QW.QWidget ):
# Unhiding tends to raise the main gui tlw in some window managers, which is annoying if a media viewer window has focus
show_is_not_annoying = main_gui_or_child_window_is_active or self._DisplayingError()
ok_to_show = show_is_not_annoying and not going_to_bug_out_at_hide_or_show
if ok_to_show:
self.show()
else:
if not going_to_bug_out_at_hide_or_show:
@ -1098,6 +1097,22 @@ class PopupMessageManager( QW.QWidget ):
self.MakeSureEverythingFits()
def eventFilter( self, watched, event ):
if watched == self.parentWidget():
if event.type() in ( QC.QEvent.Resize, QC.QEvent.Move ):
if self._OKToAlterUI():
self._SizeAndPositionAndShow()
return False
def resizeEvent( self, event ):
if not self or not QP.isValid( self ): # funny runtime error caused this
@ -1113,21 +1128,6 @@ class PopupMessageManager( QW.QWidget ):
event.ignore()
def EventParentMovedOrResized( self, event ):
if not self or not QP.isValid( self ): # funny runtime error caused this
return
if self._OKToAlterUI():
self._SizeAndPositionAndShow()
return True # was: event.ignore()
def MakeSureEverythingFits( self ):
if self._OKToAlterUI():

View File

@ -2864,8 +2864,8 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
buffer_panel = ClientGUICommon.StaticBox( self, 'video buffer' )
self._video_buffer_size_mb = ClientGUICommon.BetterSpinBox( buffer_panel, min=48, max= 16 * 1024 )
self._video_buffer_size_mb.valueChanged.connect( self.EventVideoBufferUpdate )
self._video_buffer_size = ClientGUIControls.BytesControl( buffer_panel )
self._video_buffer_size.valueChanged.connect( self.EventVideoBufferUpdate )
self._estimated_number_video_frames = QW.QLabel( '', buffer_panel )
@ -2881,7 +2881,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._ideal_tile_dimension.setValue( self._new_options.GetInteger( 'ideal_tile_dimension' ) )
self._video_buffer_size_mb.setValue( self._new_options.GetInteger( 'video_buffer_size_mb' ) )
self._video_buffer_size.SetValue( self._new_options.GetInteger( 'video_buffer_size' ) )
self._media_viewer_prefetch_delay_base_ms.setValue( self._new_options.GetInteger( 'media_viewer_prefetch_delay_base_ms' ) )
self._media_viewer_prefetch_num_previous.setValue( self._new_options.GetInteger( 'media_viewer_prefetch_num_previous' ) )
@ -2929,7 +2929,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
video_buffer_sizer = QP.HBoxLayout()
QP.AddToLayout( video_buffer_sizer, self._video_buffer_size_mb, CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( video_buffer_sizer, self._video_buffer_size, CC.FLAGS_CENTER_PERPENDICULAR )
QP.AddToLayout( video_buffer_sizer, self._estimated_number_video_frames, CC.FLAGS_CENTER_PERPENDICULAR )
#
@ -2942,7 +2942,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows = []
rows.append( ( 'MB memory reserved for thumbnail cache:', thumbnails_sizer ) )
rows.append( ( 'Memory reserved for thumbnail cache:', thumbnails_sizer ) )
rows.append( ( 'Thumbnail cache timeout:', self._thumbnail_cache_timeout ) )
gridbox = ClientGUICommon.WrapInGrid( thumbnail_cache_panel, rows )
@ -2965,7 +2965,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows = []
rows.append( ( 'MB memory reserved for image cache:', fullscreens_sizer ) )
rows.append( ( 'Memory reserved for image cache:', fullscreens_sizer ) )
rows.append( ( 'Image cache timeout:', self._image_cache_timeout ) )
rows.append( ( 'Maximum image size (in % of cache) that can be cached:', image_cache_storage_sizer ) )
rows.append( ( 'Maximum image size (in % of cache) that will be prefetched:', image_cache_prefetch_sizer ) )
@ -2989,7 +2989,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows = []
rows.append( ( 'MB memory reserved for image tile cache:', image_tiles_sizer ) )
rows.append( ( 'Memory reserved for image tile cache:', image_tiles_sizer ) )
rows.append( ( 'Image tile cache timeout:', self._image_tile_cache_timeout ) )
rows.append( ( 'Ideal tile width/height px:', self._ideal_tile_dimension ) )
@ -3019,7 +3019,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
rows = []
rows.append( ( 'MB memory for video buffer: ', video_buffer_sizer ) )
rows.append( ( 'Memory for video buffer: ', video_buffer_sizer ) )
gridbox = ClientGUICommon.WrapInGrid( buffer_panel, rows )
@ -3041,7 +3041,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self.EventImageCacheUpdate()
self.EventThumbnailsUpdate()
self.EventImageTilesUpdate()
self.EventVideoBufferUpdate( self._video_buffer_size_mb.value() )
self.EventVideoBufferUpdate()
def EventImageCacheUpdate( self ):
@ -3105,9 +3105,11 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._estimated_number_thumbnails.setText( '(at '+res_string+', about '+HydrusData.ToHumanInt(estimated_thumbs)+' thumbnails)' )
def EventVideoBufferUpdate( self, value ):
def EventVideoBufferUpdate( self ):
estimated_720p_frames = int( ( value * 1024 * 1024 ) // ( 1280 * 720 * 3 ) )
value = self._video_buffer_size.GetValue()
estimated_720p_frames = int( value // ( 1280 * 720 * 3 ) )
self._estimated_number_video_frames.setText( '(about '+HydrusData.ToHumanInt(estimated_720p_frames)+' frames of 720p video)' )
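The refactored `EventVideoBufferUpdate` above now works in raw bytes rather than megabytes. As a standalone sketch of the same arithmetic (function and constant names here are illustrative, not hydrus's): a raw 720p frame is 1280 × 720 pixels at 3 bytes per pixel, and the buffer size divided by that gives the rough frame estimate shown in the label.

```python
# Illustrative sketch of the 720p frame estimate used above.
BYTES_PER_720P_FRAME = 1280 * 720 * 3  # 2,764,800 bytes per uncompressed frame

def estimated_720p_frames(buffer_size_bytes: int) -> int:
    # integer division: how many whole frames fit in the buffer
    return int(buffer_size_bytes // BYTES_PER_720P_FRAME)

# e.g. a 96MB buffer holds about 36 frames of 720p video
print(estimated_720p_frames(96 * 1024 * 1024))  # → 36
```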
@ -3131,7 +3133,7 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
self._new_options.SetInteger( 'image_cache_storage_limit_percentage', self._image_cache_storage_limit_percentage.value() )
self._new_options.SetInteger( 'image_cache_prefetch_limit_percentage', self._image_cache_prefetch_limit_percentage.value() )
self._new_options.SetInteger( 'video_buffer_size_mb', self._video_buffer_size_mb.value() )
self._new_options.SetInteger( 'video_buffer_size', self._video_buffer_size.GetValue() )

View File

@ -258,6 +258,8 @@ simple_shortcut_name_to_action_lookup = {
'custom' : SHORTCUTS_MEDIA_ACTIONS + SHORTCUTS_MEDIA_VIEWER_ACTIONS
}
simple_shortcut_name_to_action_lookup = { key : HydrusData.DedupeList( value ) for ( key, value ) in simple_shortcut_name_to_action_lookup.items() }
CUMULATIVE_MOUSEWARP_MANHATTAN_LENGTH = 0
# ok, the problem here is that I get key codes that are converted, so if someone does shift+1 on a US keyboard, this ends up with Shift+! same with ctrl+alt+ to get accented characters

View File

@ -2413,7 +2413,8 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel, CAC.ApplicationComma
suggestions = []
suggestions.append( 'mangled parse/typo' )
suggestions.append( 'not applicable' )
suggestions.append( 'not applicable/incorrect' )
suggestions.append( 'clearing mass-pasted junk' )
suggestions.append( 'splitting filename/title/etc... into individual tags' )
with ClientGUIDialogs.DialogTextEntry( self, message, suggestions = suggestions ) as dlg:

View File

@ -1209,6 +1209,7 @@ class CallAfterEventCatcher( QC.QObject ):
return False
def CallAfter( fn, *args, **kwargs ):

View File

@ -229,6 +229,19 @@ class CanvasLayout( QW.QLayout ):
class LayoutEventSilencer( QC.QObject ):
def eventFilter( self, watched, event ):
if watched == self.parent() and event.type() == QC.QEvent.LayoutRequest:
return True
return False
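The new `LayoutEventSilencer` above swallows `LayoutRequest` events via an installed event filter instead of overriding `Canvas.event`. A minimal Qt-free sketch of the idea (class and method names here are invented for illustration): a filter that returns `True` consumes the event, so the watched object never handles it.

```python
# Illustrative, Qt-free sketch of the event-filter pattern above.
class Widget:

    def __init__(self):
        self.filters = []
        self.handled = []

    def install_event_filter(self, f):
        self.filters.append(f)

    def send_event(self, event_type):
        for f in self.filters:
            if f(self, event_type):
                return False  # a filter returned True: event is swallowed
        self.handled.append(event_type)
        return True

def layout_silencer(watched, event_type):
    # consume layout requests, pass everything else through
    return event_type == 'LayoutRequest'

w = Widget()
w.install_event_filter(layout_silencer)
w.send_event('LayoutRequest')  # silenced
w.send_event('Paint')          # delivered
print(w.handled)  # → ['Paint']
```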
class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
CANVAS_TYPE = CC.CANVAS_MEDIA_VIEWER
@ -261,6 +274,9 @@ class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
self._my_shortcuts_handler = ClientGUIShortcuts.ShortcutsHandler( self, [ 'media', 'media_viewer' ], catch_mouse = catch_mouse, ignore_activating_mouse_click = ignore_activating_mouse_click )
self._layout_silencer = LayoutEventSilencer( self )
self.installEventFilter( self._layout_silencer )
self._click_drag_reporting_filter = MediaContainerDragClickReportingFilter( self )
self.installEventFilter( self._click_drag_reporting_filter )
@ -660,18 +676,6 @@ class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
ClientGUIMediaActions.UndeleteMedia( self, ( self._current_media, ) )
def event( self, event ):
if event.type() == QC.QEvent.LayoutRequest:
return True
else:
return QW.QWidget.event( self, event )
def CleanBeforeDestroy( self ):
self.ClearMedia()
@ -704,7 +708,7 @@ class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
self._media_container.ResetCenterPosition()
self._last_drag_pos = None
self.EndDrag()
@ -979,7 +983,7 @@ class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
self._media_container.ResetCenterPosition()
self._last_drag_pos = None
self.EndDrag()
def SetLocationContext( self, location_context: ClientLocation.LocationContext ):
@ -1006,6 +1010,8 @@ class Canvas( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
if media != self._current_media:
self.EndDrag()
HG.client_controller.ResetIdleTimer()
self._SaveCurrentMediaViewTime()
@ -2661,7 +2667,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
self._media_container.ResetCenterPosition()
self._last_drag_pos = None
self.EndDrag()
self._media_container.show()
@ -3979,7 +3985,7 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):
i_can_post_ratings = len( local_ratings_services ) > 0
self._last_drag_pos = None # to stop successive right-click drag warp bug
self.EndDrag() # to stop successive right-click drag warp bug
locations_manager = self._current_media.GetLocationsManager()

View File

@ -1372,7 +1372,7 @@ class NotePanel( QW.QWidget ):
return True
return QW.QWidget.eventFilter( self, object, event )
return False
def heightForWidth( self, width: int ):

View File

@ -141,6 +141,15 @@ class mpvWidget( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
self._my_shortcut_handler = ClientGUIShortcuts.ShortcutsHandler( self, [], catch_mouse = True )
try:
self.we_are_newer_api = float( GetClientAPIVersionString() ) >= 2.0
except:
self.we_are_newer_api = False
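The `we_are_newer_api` check above parses the client API version string with `float()` inside a try/except, since an unparseable string should simply fall back to the old-API path. A hedged standalone sketch (the helper name is hypothetical):

```python
# Illustrative sketch of the version check above: treat any string that
# does not parse as a float as "not the newer API".
def is_newer_api(version_string: str) -> bool:
    try:
        return float(version_string) >= 2.0
    except ValueError:
        # e.g. a multi-dot string like '0.5.2' cannot be parsed as a float
        return False
```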
def _GetAudioOptionNames( self ):
@ -422,7 +431,7 @@ class mpvWidget( QW.QWidget, CAC.ApplicationCommandProcessorMixin ):
try:
self._player.seek( time_index_s, reference = 'absolute' )
self._player.seek( time_index_s, reference = 'absolute', precision = 'exact' )
except:

File diff suppressed because it is too large


View File

@ -4332,7 +4332,10 @@ class ManagementPanelPetitions( ManagementPanel ):
self._reason_text = QW.QTextEdit( self._petition_panel )
self._reason_text.setReadOnly( True )
self._reason_text.setMinimumHeight( 80 )
( min_width, min_height ) = ClientGUIFunctions.ConvertTextToPixels( self._reason_text, ( 16, 6 ) )
self._reason_text.setFixedHeight( min_height )
check_all = ClientGUICommon.BetterButton( self._petition_panel, 'check all', self._CheckAll )
flip_selected = ClientGUICommon.BetterButton( self._petition_panel, 'flip selected', self._FlipSelected )
@ -4349,14 +4352,14 @@ class ManagementPanelPetitions( ManagementPanel ):
( min_width, min_height ) = ClientGUIFunctions.ConvertTextToPixels( self._contents_add, ( 16, 20 ) )
self._contents_add.setMinimumHeight( min_height )
self._contents_add.setFixedHeight( min_height )
self._contents_delete = ClientGUICommon.BetterCheckBoxList( self._petition_panel )
self._contents_delete.itemDoubleClicked.connect( self.ContentsDeleteDoubleClick )
( min_width, min_height ) = ClientGUIFunctions.ConvertTextToPixels( self._contents_delete, ( 16, 20 ) )
self._contents_delete.setMinimumHeight( min_height )
self._contents_delete.setFixedHeight( min_height )
self._process = QW.QPushButton( 'process', self._petition_panel )
self._process.clicked.connect( self.EventProcess )
@ -4395,8 +4398,8 @@ class ManagementPanelPetitions( ManagementPanel ):
self._petition_panel.Add( self._reason_text, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( check_hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
self._petition_panel.Add( sort_hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
self._petition_panel.Add( self._contents_add, CC.FLAGS_EXPAND_BOTH_WAYS )
self._petition_panel.Add( self._contents_delete, CC.FLAGS_EXPAND_BOTH_WAYS )
self._petition_panel.Add( self._contents_add, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._contents_delete, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._process, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._copy_account_key_button, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._modify_petitioner, CC.FLAGS_EXPAND_PERPENDICULAR )
@ -4407,7 +4410,7 @@ class ManagementPanelPetitions( ManagementPanel ):
QP.AddToLayout( vbox, self._media_collect, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._petitions_info_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._petition_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
QP.AddToLayout( vbox, self._petition_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
if service_type == HC.TAG_REPOSITORY:
@ -4820,20 +4823,35 @@ class ManagementPanelPetitions( ManagementPanel ):
contents = self._contents_add
string_template = 'ADD: {}'
else:
contents = self._contents_delete
string_template = 'DELETE: {}'
contents.clear()
for ( i, ( content, check ) ) in enumerate( contents_and_checks ):
content_string = content.ToString()
content_string = string_template.format( content.ToString() )
contents.Append( content_string, content, starts_checked = check )
if contents.count() > 0:
ideal_height_in_rows = min( 20, len( contents_and_checks ) )
pixels_per_row = contents.sizeHintForRow( 0 )
ideal_height_in_pixels = ( ideal_height_in_rows * pixels_per_row ) + ( contents.frameWidth() * 2 )
contents.setFixedHeight( ideal_height_in_pixels )
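The sizing logic added above caps the checkbox list at 20 visible rows, converts rows to pixels, and adds the frame border twice (top and bottom). As a standalone sketch of that calculation (names here are illustrative):

```python
# Illustrative sketch of the fixed-height calculation above.
MAX_VISIBLE_ROWS = 20

def ideal_list_height_px(num_rows: int, pixels_per_row: int, frame_width: int) -> int:
    # never grow past the cap, never reserve space for rows that do not exist
    visible_rows = min(MAX_VISIBLE_ROWS, num_rows)
    return (visible_rows * pixels_per_row) + (frame_width * 2)

# 35 content rows at 18px each with a 1px frame: only 20 rows are shown
print(ideal_list_height_px(35, 18, 1))  # → 362
```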
def _ShowHashes( self, hashes ):

View File

@ -707,8 +707,6 @@ class EditContentParserPanel( ClientGUIScrolledPanels.EditPanel ):
vbox = QP.VBoxLayout()
label = 'Try to make sure you will only ever parse one text result here. A single content parser with a single note name producing eleven different note texts is going to be conflict hell for the user at the end.'
label += os.linesep * 2
label += 'Also this is prototype, it does not do anything yet!'
st = ClientGUICommon.BetterStaticText( self._notes_panel, label = label )

View File

@ -2902,7 +2902,10 @@ class ReviewServiceRepositorySubPanel( QW.QWidget ):
message = 'Note that num file hashes and tags here include deleted content so will likely not line up with your review services value, which is only for current content.'
message += os.linesep * 2
message += os.linesep.join( ( '{}: {}'.format( HC.service_info_enum_str_lookup[ int( info_type ) ], HydrusData.ToHumanInt( info ) ) for ( info_type, info ) in service_info_dict.items() ) )
tuples = [ ( HC.service_info_enum_str_lookup[ info_type ], HydrusData.ToHumanInt( service_info_dict[ info_type ] ) ) for info_type in l if info_type in service_info_dict ]
string_rows = [ '{}: {}'.format( info_type, info ) for ( info_type, info ) in tuples ]
message += os.linesep.join( string_rows )
QW.QMessageBox.information( self, 'Service Info', message )

View File

@ -11,6 +11,7 @@ from hydrus.core import HydrusText
from hydrus.core.networking import HydrusNetworking
from hydrus.client import ClientConstants as CC
from hydrus.client.gui import ClientGUIFunctions
from hydrus.client.gui import ClientGUIScrolledPanels
from hydrus.client.gui import ClientGUITime
from hydrus.client.gui import ClientGUITopLevelWindowsPanels
@ -293,6 +294,10 @@ class BytesControl( QW.QWidget ):
QP.AddToLayout( hbox, self._unit, CC.FLAGS_CENTER_PERPENDICULAR )
self.setLayout( hbox )
min_width = ClientGUIFunctions.ConvertTextToPixelWidth( self._unit, 8 )
self._unit.setMinimumWidth( min_width )
self._spin.valueChanged.connect( self._HandleValueChanged )
self._unit.currentIndexChanged.connect( self._HandleValueChanged )

View File

@ -1229,7 +1229,7 @@ class MediaList( object ):
self._collected_media = set()
self._selected_media = set()
self._sorted_media = []
self._sorted_media = SortedList()
self._RecalcAfterMediaRemove()

View File

@ -1619,6 +1619,31 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):
def RenameGUG( self, original_name, new_name ):
with self._lock:
existing_gug_names_to_gugs = { gug.GetName() : gug for gug in self._gugs }
if original_name in existing_gug_names_to_gugs:
gug = existing_gug_names_to_gugs[ original_name ]
del existing_gug_names_to_gugs[ original_name ]
gug.SetName( new_name )
gug.SetNonDupeName( set( existing_gug_names_to_gugs.keys() ) )
existing_gug_names_to_gugs[ gug.GetName() ] = gug
new_gugs = list( existing_gug_names_to_gugs.values() )
self.SetGUGs( new_gugs )
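The new `RenameGUG` above follows a rename-with-collision-avoidance pattern: pop the object out of the name map, rename it, then nudge the new name until it no longer clashes with any remaining name. A hypothetical standalone sketch (the suffix format and helper names are assumptions, not hydrus's actual `SetNonDupeName` behaviour):

```python
# Illustrative sketch of the rename-with-dedupe pattern in RenameGUG above.
def set_non_dupe_name(name: str, disallowed: set) -> str:
    # append ' (2)', ' (3)', ... until the name is unique (assumed format)
    candidate = name
    i = 1
    while candidate in disallowed:
        i += 1
        candidate = '{} ({})'.format(name, i)
    return candidate

def rename(names_to_objects: dict, original_name: str, new_name: str) -> dict:
    if original_name not in names_to_objects:
        return names_to_objects
    # remove first so the object cannot collide with its own old name
    obj = names_to_objects.pop(original_name)
    final_name = set_non_dupe_name(new_name, set(names_to_objects.keys()))
    names_to_objects[final_name] = obj
    return names_to_objects

gugs = {'gelbooru': 'gug_a', 'danbooru': 'gug_b'}
rename(gugs, 'gelbooru', 'danbooru')
print(sorted(gugs.keys()))  # → ['danbooru', 'danbooru (2)']
```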
def ReportNetworkInfrastructureError( self, url ):
with self._lock:

View File

@ -790,12 +790,12 @@ class NetworkJob( object ):
# cloudscraper refactored a bit around 1.2.60, so we now have some different paths to what we want
old_module = None
new_module = None
old_class_object = None
new_class_instance = None
if hasattr( cloudscraper, 'CloudScraper' ):
old_module = getattr( cloudscraper, 'CloudScraper' )
old_class_object = getattr( cloudscraper, 'CloudScraper' )
if hasattr( cloudscraper, 'cloudflare' ):
@ -804,13 +804,17 @@ class NetworkJob( object ):
if hasattr( m, 'Cloudflare' ):
new_module = getattr( m, 'Cloudflare' )
new_class_object = getattr( m, 'Cloudflare' )
cs = cloudscraper.CloudScraper()
new_class_instance = new_class_object( cs )
possible_paths = [
( old_module, 'is_Firewall_Blocked' ),
( new_module, 'is_Firewall_Blocked' )
( old_class_object, 'is_Firewall_Blocked' ),
( new_class_instance, 'is_Firewall_Blocked' )
]
is_firewall = False
@ -834,9 +838,9 @@ class NetworkJob( object ):
possible_paths = [
( old_module, 'is_reCaptcha_Challenge' ),
( old_module, 'is_Captcha_Challenge' ),
( new_module, 'is_Captcha_Challenge' )
( old_class_object, 'is_reCaptcha_Challenge' ),
( old_class_object, 'is_Captcha_Challenge' ),
( new_class_instance, 'is_Captcha_Challenge' )
]
is_captcha = False
@ -860,9 +864,9 @@ class NetworkJob( object ):
possible_paths = [
( old_module, 'is_IUAM_Challenge' ),
( new_module, 'is_IUAM_Challenge' ),
( new_module, 'is_New_IUAM_Challenge' )
( old_class_object, 'is_IUAM_Challenge' ),
( new_class_instance, 'is_IUAM_Challenge' ),
( new_class_instance, 'is_New_IUAM_Challenge' )
]
is_iuam = False
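The cloudscraper hunks above switch the "possible paths" probing from module-level functions to bound methods on a freshly constructed `Cloudflare` instance, since the library moved those checks around 1.2.60. The underlying pattern is: try a named attribute on several candidate objects, any of which may be `None` on other library versions, and call the first one that exists. A hedged standalone sketch (the `OldModule` stand-in and helper name are invented for illustration):

```python
# Illustrative sketch of the "possible paths" probing pattern above.
def probe_and_call(possible_paths, *args):
    for (obj, attr_name) in possible_paths:
        if obj is not None and hasattr(obj, attr_name):
            func = getattr(obj, attr_name)
            try:
                return bool(func(*args))
            except Exception:
                continue  # signature changed between versions; try the next path
    return False  # no path resolved: assume not blocked/challenged

class OldModule:
    # stand-in for a module-level check in an older library version
    @staticmethod
    def is_Firewall_Blocked(response):
        return response.get('blocked', False)

paths = [
    (None, 'is_Firewall_Blocked'),       # newer-style object missing here
    (OldModule, 'is_Firewall_Blocked'),  # older-style path resolves
]
print(probe_and_call(paths, {'blocked': True}))  # → True
```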

View File

@ -80,7 +80,7 @@ options = {}
# Misc
NETWORK_VERSION = 20
SOFTWARE_VERSION = 498
SOFTWARE_VERSION = 499
CLIENT_API_VERSION = 32
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -675,7 +675,7 @@ class HydrusResource( Resource ):
request.setHeader( 'Content-Type', content_type )
request.setHeader( 'Content-Length', str( content_length ) )
request.setHeader( 'Content-Disposition', content_disposition )
request.setHeader( 'Cache-Control', 'max-age={}'.format( 4 ) ) #hydurs won't change its mind about dynamic data under 4 seconds even if you ask repeatedly
request.setHeader( 'Cache-Control', 'max-age={}'.format( 4 ) ) # hydrus won't change its mind about dynamic data under 4 seconds even if you ask repeatedly
request.write( body_bytes )

View File

@ -23,7 +23,7 @@ nav:
- Next Steps:
- adding_new_downloaders.md
- getting_started_subscriptions.md
- filtering duplicates: duplicates.md
- duplicates.md
- Advanced:
- advanced_siblings.md
- advanced_parents.md
@ -54,7 +54,7 @@ nav:
- downloader_parsers_full_example_file_page.md
- downloader_parsers_full_example_api.md
- downloader_completion.md
- Sharing: downloader_sharing.md
- downloader_sharing.md
- downloader_login.md
- API:
- client_api.md

View File

@ -7,7 +7,7 @@ lxml>=4.5.0
lz4>=3.0.0
nose>=1.3.0
numpy>=1.16.0
opencv-python-headless>=4.0.0, <=4.5.3.56
opencv-python-headless>=4.0.0
Pillow>=6.0.0
psutil>=5.0.0
pylzma>=0.5.0

View File

@ -7,7 +7,7 @@ lxml>=4.5.0
lz4>=3.0.0
nose>=1.3.0
numpy>=1.16.0
opencv-python-headless>=4.0.0, <=4.5.3.56
opencv-python-headless>=4.0.0
Pillow>=6.0.0
psutil>=5.0.0
pylzma>=0.5.0

View File

@ -7,14 +7,14 @@ lxml>=4.5.0
lz4>=3.0.0
nose>=1.3.0
numpy>=1.16.0
opencv-python-headless>=4.0.0, <=4.5.3.56
opencv-python-headless>=4.0.0
Pillow>=6.0.0
psutil>=5.0.0
pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide2>=5.15.0
PySocks>=1.7.0
python-mpv==0.5.2
python-mpv==1.0.1
PyYAML>=5.0.0
QtPy>=1.9.0
requests==2.23.0

View File

@ -24,7 +24,7 @@ a = Analysis(['hydrus\\client.pyw'],
('hydrus\\db', 'db'),
('hydrus\\hydrus', 'hydrus'),
('hydrus\\sqlite3.dll', '.'),
('hydrus\\mpv-1.dll', '.'),
('hydrus\\mpv-2.dll', '.'),
(cloudscraper_dir, 'cloudscraper'),
(shiboken_dir, 'shiboken2\\files.dir')
],

View File

@ -22,7 +22,7 @@ a = Analysis(['hydrus\\client.pyw'],
('hydrus\\db', 'db'),
('hydrus\\hydrus', 'hydrus'),
('hydrus\\sqlite3.dll', '.'),
('hydrus\\mpv-1.dll', '.'),
('hydrus\\mpv-2.dll', '.'),
(cloudscraper_dir, 'cloudscraper')
],
hiddenimports=['hydrus\\server.py', 'cloudscraper'],

View File

@ -14,7 +14,7 @@ pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide2>=5.15.0
PySocks>=1.7.0
python-mpv==0.5.2
python-mpv==1.0.1
PyYAML>=5.0.0
QtPy>=1.9.0
requests==2.23.0

View File

@ -14,7 +14,7 @@ pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide6>=6.0.0
PySocks>=1.7.0
python-mpv==0.5.2
python-mpv==1.0.1
PyYAML>=5.0.0
QtPy>=1.9.0
requests==2.23.0

(binary image files changed: several images added, a few removed, and a few resized; binary contents not shown)