parent 8ffe359c50
commit 8b3ae4ac1a

@@ -7,6 +7,46 @@ title: Changelog
!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 533](https://github.com/hydrusnetwork/hydrus/releases/tag/v533)

### macOS App crashes

* unfortunately, last week's eventFilter work did not fix the macOS build's crashing--however, thanks to user help, we figured out that it was some half-hidden auxiliary Qt library that updated in the background starting v530 (the excellently named `PyQt6-Qt6` package). the build script is updated to roll back this version and it seems like things are fixed. this particular issue shouldn't happen again. sorry for the trouble, and let me know if there are any new issues! (issue #1379)

### misc

* the download panels in subscription popup windows are now significantly more responsive. ever since the popup manager was embedded into the gui, popup messages were not doing the 'should I update myself?' test correctly, and their network UI was not being updated without other events like surrounding widgets resizing. I was wondering what was going on here for ages--turns out it was regular stupidity
* if an image has width or height > 1024, the 'share->copy' menu now shows a second, 'source lookup' bitmap, with the resolution clipped to 1024x1024
* 'sort files by hash' can now be sorted asc or desc. this also fixes a bug where it was secretly either sorting asc or desc based on the previous selection. well done to the user who noticed and tested this
* if system:limit=0 is in a search, the search is no longer run--it comes back immediately empty. also, the system:limit edit panel now has a minimum value of 1 to dissuade this state
* the experimental QtMediaPlayer now initialises with the correct volume/mute and updates on volume/mute events. the scanbar and volume control UI are still hidden behind the OpenGL frame for now, but one step forward
* the system that caches media results now hangs on to the most recent 2048 files aggressively for two minutes after initial load. previously, if you refreshed a page of unique files, or did some repeated client api work on files that were not loaded somewhere as thumbs, in the interim periods those media objects were not strictly in non-weak memory anywhere in the client and could have been eligible for clearing out of the cache. now they are a bit more sticky
* added some info on editing predicates and the various undocumented click shortcuts the taglist supports (e.g. ctrl+double-left-click) to the 'getting started with searching and sorting' help page
* added a link to the Client API help for 'Hydrus Video Deduplicator' (https://github.com/appleappleapplenanner/hydrus-video-deduplicator), which neatly discovers duplicate videos in your client and queues them up in the duplicate filter by marking them as 'potential dupes'
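
As an aside, the 'sticky cache' behaviour described a few items up can be sketched as an LRU cache that refuses to evict entries younger than a grace period. This is an illustrative sketch only, not hydrus's actual implementation; the class name is hypothetical, and the 2048/two-minute numbers are just the ones quoted above.

```python
import collections
import time

class StickyLRUCache:
    """LRU cache that will not evict items younger than grace_seconds."""

    def __init__( self, max_items = 2048, grace_seconds = 120 ):

        self._max_items = max_items
        self._grace_seconds = grace_seconds
        self._data = collections.OrderedDict() # key -> ( value, insert_time )

    def add( self, key, value ):

        self._data[ key ] = ( value, time.monotonic() )
        self._data.move_to_end( key )
        self._evict()

    def get( self, key ):

        ( value, _ ) = self._data[ key ]
        self._data.move_to_end( key ) # refresh LRU position on access
        return value

    def _evict( self ):

        now = time.monotonic()

        while len( self._data ) > self._max_items:

            # oldest (least recently touched) entry first
            ( key, ( value, inserted ) ) = next( iter( self._data.items() ) )

            if now - inserted < self._grace_seconds:

                break # everything left is inside the grace period: over budget, but sticky

            del self._data[ key ]
```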

### sub-gallery url network parsing gubbins

* sub-gallery import objects now get the tags and custom headers that are parsed with them. if the sub-gallery urls are parsed in 'posts' using a subsidiary parser, they only inherit the metadata within their post
* sub-gallery import objects now use their parent gallery urls as referral header
* sub-gallery import objects now inherit the 'can generate more pages' state of their parents (previously it was always 'yes')
* 'next page' gallery urls do not get the tags they are parsed with. this behaviour makes a little less sense, and I suspect it _could_ cause various troubles, so I'll wait for more input, bug reports, and a larger cleanup and overhaul of how metadata is managed and passed down from one item to the next in the downloader system
* generally speaking, when file and gallery import objects have the opportunity to acquire tags or headers, they'll now add to the existing store rather than replace it. this should mean if they both inherit and parse stuff, it won't all overwrite each other. this is all a giant mess so I have a cleanup overhaul planned

### boring stuff

* if a critical drive error occurs (e.g. running out of space on a storage drive), the popup message saying 'hey everything just got mega-paused' is now a little clearer about what has happened and how to fix it
* similarly, the specific 'all watchers are paused'-style messages now specifically state 'network->pause to resume!' to point users to this menu to fix this tricky issue. this has frustrated a couple of newer users before
* to reduce confusion, the 'clear orphan files' pre-job now only presents the user one combined dialog
* improved how pages test and recognise that they have changes and thus should be saved--it works faster, and a bunch of false negatives should be removed
* improved the safety routine that ensures multiple-column list data is read-only
* fixed .txt sidecar importers' description labels, which were missing extra text munging
* to relieve some latency stress on session load, pages that are loading their initial files in the background now do so in batches of 64 rather than 256
* fixed some bad error handling in the master client db update routine
* fixed a scatter of linting problems a user found
* last week's pixiv parser hotfix is reinforced this week for anyone who got the early 532 release
* made some primitive interfaces for the main controller and schedulable job classes and ensured the main hydrusglobals has type hinting for these--now everything that talks to the controller has a _bit_ of an idea what it is actually doing, and as I continue to work on this, we'll get better linting
* moved the client DataCache object and its friends to a new 'caches' module and cleaned some of the code

## [Version 532](https://github.com/hydrusnetwork/hydrus/releases/tag/v532)

### misc

@@ -324,52 +364,3 @@ title: Changelog

* fixed an error popup that still said 'run repair invalid tags' instead of 'run fix invalid tags'
* the FILE_SERVICES constant now holds the 'all deleted files' virtual domain. this domain keeps slipping my logic, so fingers crossed this helps. also means you can select it in 'system:file service' and stuff now
* misc cleaning and linting work

## [Version 523](https://github.com/hydrusnetwork/hydrus/releases/tag/v523)

### timestamp editing

* you can now _right-click->manage->times_ on any file to edit its archived, imported, deleted, previously imported (for undelete), file modified, domain modified, and last viewed times. there's a whole new dialog with new datetime buttons and everything. it only works on single files atm, so it is currently only appropriate for little fixes, and there's a couple of advanced things, like setting a currently missing deletion time, that it can't do yet, but I expect to expand it in future (ideally with some kind of 'cascade' option for multiple files, so you can set a timestamp iteratively (e.g. +1 second per file) over a series of thumbs to force a certain import order sort, etc...)
* I added a new shortcut action, 'manage file times', for this dialog. like the other media 'manage' shortcuts, you can hit it on the dialog to ok it, too
* when you edit a saved file modified date, the actual file modified date on your disk is now updated too. a statement is printed to the log with the old/new timestamps, just in case you ever need to recover this
* added the system:archived time search predicate! it is under the system:time stub like the other time-based search preds. it works in the system predicate parser too

### misc

* fixed a stupid logical typo from 521's overhaul that was causing the advanced file deletion dialog to always set the default file deletion reason! sorry for the trouble, this one slipped through due to a tricky test situation (this data is actually calculated twice on dialog ok, and on the first run it was correct -\_-)
* in the edit system predicate dialogs, when you have a list of 'recent' preds and static useful preds, if one of the recents is supposed to also appear in the statics, it now won't be duped
* fixed a bug in the media object's file locations manager's deletion routine, which wasn't adding and removing the special 'all deleted files' domain at the UI level--not that this shows up in UI much, but the new timestamps UI revealed this
* in the janitorial 'petitions processing' page, the add and delete checkbox lists now no longer have horizontal scrollbars in any situation. previously, either list, but particularly the 'delete', at height 1, could be deceptively obscured by a pop-in scrollbar
* when you change your internal backup location, the dialog now states your current location beforehand. this information was previously not viewable! also, if you select the same location again, the process notes this and exits with no changes made
* all multi-column lists across the program now show a ▲ or ▼ on the column they are currently sorted on! this is one of those things I meant to do for ages; now it is done
* also, you can now right-click any multi-column list's header for a stub menu. for now it just says the thing's identifier name, but I'll start hanging things off here like individual section-size reset and, in time, finally play around with 'select columns' tech
* all menus across the program now send their longer description text to the main window status bar. until now (at least in Qt, I forget wx), this has only been true for the menubar menus
* all menus across the program now have tooltips turned on. any command with description text, which is I think pretty much all of them, will present its full written description on hover. this may end up being annoying, so let me know what you think

### client api

* fixed an issue in the client api where it wasn't returning `file_metadata` results in the same file order you asked for. sorry for the trouble--this was softly intended, previously, but I forgot to make sure it stayed true. it also now folds in 'missing' hashes with null ids in the same position you asked for
* a new suite of unit tests checks this explicitly for all the typical parameter/response types, and the new missing-hash insertion order--it shouldn't happen again!
* just to be safe, since this is a new feature, the client api version is now 44
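
The order guarantee described above--results returned in the order requested, with missing hashes folded in with null ids at the same positions--amounts to reindexing the response by the request. A hypothetical sketch of that merge (the dict keys here are illustrative, not the exact Client API schema):

```python
def order_metadata_like_request( requested_hashes, returned_metadata ):
    """Reorder metadata dicts to match the requested hash order, inserting
    null-id stubs for hashes the server does not know."""

    # index whatever came back by hash
    by_hash = { m[ 'hash' ]: m for m in returned_metadata }

    ordered = []

    for h in requested_hashes:

        if h in by_hash:

            ordered.append( by_hash[ h ] )

        else:

            # unknown hash: emit a stub with a null id in the same position
            ordered.append( { 'hash': h, 'file_id': None } )

    return ordered
```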

### boring code updates/cleanup

* wrote a new serialisable 'timestamp data' object to hold the various hydrus timestamps: archived, imported, deleted, previously imported, file modified, domain modified, aggregate modified, and last viewed time
* rewrote the timestamp content update pipeline to use the 'timestamp data' object
* wrote a new database module for timestamp management off the file metadata module and migrated the domain-based modified timestamp code to it
* migrated the 'archive time' timestamp-handling from the inbox module to the new timestamp module
* migrated the media result timestamp-manager construction routine all down to the new timestamp module
* migrated the aggregate modified time file search code to the new timestamp module and added archived time search too
* wrote some UI for timestamp editing, whacked some copy/paste buttons on it too
* moved all current/deleted timestamp handling down from the locations manager to the timestamp manager and split off 'previously imported' time, which is used to preserve import timestamps for undelete events, into its own thing rather than a tacked-on hack for deleted timestamps
* moved all the location manager location timestamp tracking down to the timestamp manager
* the media result is now initialised with and handles an explicit copy of the timestamp manager, which is now shared to both the location manager and file viewing stats manager, with duplication and merging code updated to handle this shared situation
* moved all the media/preview 'last view time' tracking down from the file viewing stats manager to the timestamp manager, which the FVS manager now receives on initialisation
* all media-based timestamp inspection now goes through the timestamp manager
* collections now track some aggregate timestamps a bit better, and they now calculate an archived time--not sure if it is useful, but they know it now
* updated all parts of the timestamp system to use the same shared enums
* cleaned the timestamp code generally
* cleaned some file service update code generally
* moved the main file viewing stats fetching routine for MediaResult building down to the file viewing stats module
* updated the old custom gridbox layout to handle multiple-column-spanning controls
* went through all the bash scripts and fixed some issues my IDE linter was moaning about: -r on reads, quotes around variable names, 4-space indenting, and neater testing of program return states

@@ -29,11 +29,12 @@ Once the API is running, go to its entry in _services->review services_. Each ex

* [LoliSnatcher](https://github.com/NO-ob/LoliSnatcher_Droid): a booru client for Android that can talk to hydrus
* [Anime Boxes](https://www.animebox.es/): a booru browser, now supports adding your client as a Hydrus Server
* [FlipFlip](https://ififfy.github.io/flipflip/#/): an advanced slideshow interface, now supports hydrus as a source
* [hydrus-dd](https://gitgud.io/koto/hydrus-dd): DeepDanbooru neural network tagging for Hydrus
* [Hydrus Video Deduplicator](https://github.com/appleappleapplenanner/hydrus-video-deduplicator): Discovers duplicate videos in your client and queues them for the duplicate filter
* [Hydrus Archive Delete](https://gitgud.io/koto/hydrus-archive-delete): Archive/Delete filter in your web browser
* [tagrank](https://github.com/matjojo/tagrank): Shows you comparison images and cleverly ranks your favourite tag.
* [hyextract](https://github.com/floogulinc/hyextract): Extract archives from Hydrus and reimport with tags and URL associations
* [Send to Hydrus](https://github.com/Wyrrrd/send-to-hydrus): send URLs from your Android device to your client
* [Iwara-Hydrus](https://github.com/GoAwayNow/Iwara-Hydrus): a userscript to simplify sending Iwara videos to Hydrus Network
* [tagrank](https://github.com/matjojo/tagrank): Shows you comparison images and cleverly ranks your favourite tag.
* [Hydrus Archive Delete](https://gitgud.io/koto/hydrus-archive-delete): Archive/Delete filter in your web browser
* [hydrus-dd](https://gitgud.io/koto/hydrus-dd): DeepDanbooru neural network tagging for Hydrus
* [hyextract](https://github.com/floogulinc/hyextract): Extract archives from Hydrus and reimport with tags and URL associations
* [dolphin-hydrus-actions](https://gitgud.io/prkc/dolphin-hydrus-actions): Adds Hydrus right-click context menu actions to Dolphin file manager.
* [more projects on github](https://github.com/stars/hydrusnetwork/lists/hydrus-related-projects)

@@ -67,6 +67,21 @@ This is particularly useful if you have a number of files with commonly structur

In this case, selecting the `title:cool pic*` predicate will return all three images in the same search, where you can conveniently give them some more-easily searched tags like `series:cool pic` and `page:1`, `page:2`, `page:3`.

### Editing Predicates

You can edit any selected 'active' search predicate via its <kbd>Right-Click</kbd> menu or with <kbd>Shift+Double-Left-Click</kbd> on the selection. For simple tags, this means just changing the text (and, say, adding/removing a leading hyphen for negation/inclusion), but any 'system' predicate can be fully edited with its original panel. If you entered 'system:filesize < 200KB' and want to make it a little bigger, don't delete and re-add--just edit the existing one in place.

### Other Shortcuts

These will eventually be migrated to the shortcut system, where they will be more visible and changeable, but for now:

- <kbd>Left-Click</kbd> on any taglist is draggable, if you want to select multiple tags quickly.
- <kbd>Shift+Left-Click</kbd> across any taglist will do a multi-select. This click is also draggable.
- <kbd>Ctrl+Left-Click</kbd> on any taglist will add to or remove from the selection. This is draggable, and if you start on a 'remove', the drag will be a 'remove' drag. Play with it--you'll see how it works.
- <kbd>Double-Left-Click</kbd> on one or more tags in the 'selection tags' box moves them to the active search box. Doing the same on the active search box removes them.
- <kbd>Ctrl+Double-Left-Click</kbd> on one or more tags in the 'selection tags' box will add their negation (i.e. '-skirt').
- <kbd>Shift+Double-Left-Click</kbd> on more than one tag in the 'selection tags' box will add their 'OR' to the active search box. What's an OR? Well:

## OR searching

Searches find files that match every search 'predicate' in the list (it is an **AND** search), which makes it difficult to search for files that include one **OR** another tag. For example, the query `red eyes` **AND** `green eyes` (i.e. what you get if you enter each tag by itself) will only find files that have both tags, while the query `red eyes` **OR** `green eyes` will return files that are tagged with red eyes or green eyes, or both.
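
In set terms, an AND search intersects the per-tag file sets, while an OR unions them. A toy illustration with made-up file ids:

```python
# hypothetical per-tag file-id sets
files_with_red_eyes = { 1, 2, 3 }
files_with_green_eyes = { 3, 4 }

# entering both tags separately: AND, an intersection
and_results = files_with_red_eyes & files_with_green_eyes   # { 3 }

# 'red eyes OR green eyes': a union
or_results = files_with_red_eyes | files_with_green_eyes    # { 1, 2, 3, 4 }
```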

@@ -34,6 +34,41 @@

<div class="content">
  <h1 id="changelog"><a href="#changelog">changelog</a></h1>
  <ul>
    <li>
      <h2 id="version_533"><a href="#version_533">version 533</a></h2>
      <ul>
        <li><h3>macOS App crashes</h3></li>
        <li>unfortunately, last week's eventFilter work did not fix the macOS build's crashing--however, thanks to user help, we figured out that it was some half-hidden auxiliary Qt library that updated in the background starting v530 (the excellently named `PyQt6-Qt6` package). the build script is updated to roll back this version and it seems like things are fixed. this particular issue shouldn't happen again. sorry for the trouble, and let me know if there are any new issues! (issue #1379)</li>
        <li><h3>misc</h3></li>
        <li>the download panels in subscription popup windows are now significantly more responsive. ever since the popup manager was embedded into the gui, popup messages were not doing the 'should I update myself?' test correctly, and their network UI was not being updated without other events like surrounding widgets resizing. I was wondering what was going on here for ages--turns out it was regular stupidity</li>
        <li>if an image has width or height > 1024, the 'share->copy' menu now shows a second, 'source lookup' bitmap, with the resolution clipped to 1024x1024</li>
        <li>'sort files by hash' can now be sorted asc or desc. this also fixes a bug where it was secretly either sorting asc or desc based on the previous selection. well done to the user who noticed and tested this</li>
        <li>if system:limit=0 is in a search, the search is no longer run--it comes back immediately empty. also, the system:limit edit panel now has a minimum value of 1 to dissuade this state</li>
        <li>the experimental QtMediaPlayer now initialises with the correct volume/mute and updates on volume/mute events. the scanbar and volume control UI are still hidden behind the OpenGL frame for now, but one step forward</li>
        <li>the system that caches media results now hangs on to the most recent 2048 files aggressively for two minutes after initial load. previously, if you refreshed a page of unique files, or did some repeated client api work on files that were not loaded somewhere as thumbs, in the interim periods those media objects were not strictly in non-weak memory anywhere in the client and could have been eligible for clearing out of the cache. now they are a bit more sticky</li>
        <li>added some info on editing predicates and the various undocumented click shortcuts the taglist supports (e.g. ctrl+double-left-click) to the 'getting started with searching and sorting' help page</li>
        <li>added a link to the Client API help for 'Hydrus Video Deduplicator' (https://github.com/appleappleapplenanner/hydrus-video-deduplicator), which neatly discovers duplicate videos in your client and queues them up in the duplicate filter by marking them as 'potential dupes'</li>
        <li><h3>sub-gallery url network parsing gubbins</h3></li>
        <li>sub-gallery import objects now get the tags and custom headers that are parsed with them. if the sub-gallery urls are parsed in 'posts' using a subsidiary parser, they only inherit the metadata within their post</li>
        <li>sub-gallery import objects now use their parent gallery urls as referral header</li>
        <li>sub-gallery import objects now inherit the 'can generate more pages' state of their parents (previously it was always 'yes')</li>
        <li>'next page' gallery urls do not get the tags they are parsed with. this behaviour makes a little less sense, and I suspect it _could_ cause various troubles, so I'll wait for more input, bug reports, and a larger cleanup and overhaul of how metadata is managed and passed down from one item to the next in the downloader system</li>
        <li>generally speaking, when file and gallery import objects have the opportunity to acquire tags or headers, they'll now add to the existing store rather than replace it. this should mean if they both inherit and parse stuff, it won't all overwrite each other. this is all a giant mess so I have a cleanup overhaul planned</li>
        <li><h3>boring stuff</h3></li>
        <li>if a critical drive error occurs (e.g. running out of space on a storage drive), the popup message saying 'hey everything just got mega-paused' is now a little clearer about what has happened and how to fix it</li>
        <li>similarly, the specific 'all watchers are paused'-style messages now specifically state 'network->pause to resume!' to point users to this menu to fix this tricky issue. this has frustrated a couple of newer users before</li>
        <li>to reduce confusion, the 'clear orphan files' pre-job now only presents the user one combined dialog</li>
        <li>improved how pages test and recognise that they have changes and thus should be saved--it works faster, and a bunch of false negatives should be removed</li>
        <li>improved the safety routine that ensures multiple-column list data is read-only</li>
        <li>fixed .txt sidecar importers' description labels, which were missing extra text munging</li>
        <li>to relieve some latency stress on session load, pages that are loading their initial files in the background now do so in batches of 64 rather than 256</li>
        <li>fixed some bad error handling in the master client db update routine</li>
        <li>fixed a scatter of linting problems a user found</li>
        <li>last week's pixiv parser hotfix is reinforced this week for anyone who got the early 532 release</li>
        <li>made some primitive interfaces for the main controller and schedulable job classes and ensured the main hydrusglobals has type hinting for these--now everything that talks to the controller has a _bit_ of an idea what it is actually doing, and as I continue to work on this, we'll get better linting</li>
        <li>moved the client DataCache object and its friends to a new 'caches' module and cleaned some of the code</li>
      </ul>
    </li>
    <li>
      <h2 id="version_532"><a href="#version_532">version 532</a></h2>
      <ul>

@@ -27,7 +27,6 @@ from hydrus.core.networking import HydrusNetwork
from hydrus.core.networking import HydrusNetworking

from hydrus.client import ClientAPI
from hydrus.client import ClientCaches
from hydrus.client import ClientConstants as CC
from hydrus.client import ClientDaemons
from hydrus.client import ClientDefaults

@@ -38,6 +37,7 @@ from hydrus.client import ClientOptions
from hydrus.client import ClientServices
from hydrus.client import ClientThreading
from hydrus.client import ClientTime
from hydrus.client.caches import ClientCaches
from hydrus.client.db import ClientDB
from hydrus.client.gui import ClientGUI
from hydrus.client.gui import ClientGUIDialogs

@@ -2283,14 +2283,23 @@ class Controller( HydrusController.HydrusController ):

elif data_type == 'bmp':
    
    media = data
    ( media, optional_target_resolution_tuple ) = data
    
    image_renderer = self.GetCache( 'images' ).GetImageRenderer( media )
    
    def CopyToClipboard():
        
        if optional_target_resolution_tuple is None:
            
            target_resolution = None
            
        else:
            
            target_resolution = QC.QSize( optional_target_resolution_tuple[0], optional_target_resolution_tuple[1] )
            
        
        # this is faster than qpixmap, which converts to a qimage anyway
        qt_image = image_renderer.GetQtImage().copy()
        qt_image = image_renderer.GetQtImage( target_resolution = target_resolution ).copy()
        
        QW.QApplication.clipboard().setImage( qt_image )

@@ -350,7 +350,7 @@ class ClientFilesManager( object ):

self._controller.new_options.SetBoolean( 'pause_subs_sync', True )
self._controller.new_options.SetBoolean( 'pause_all_file_queues', True )

HydrusData.ShowText( 'All paged file import queues, subscriptions, and import folders have been paused. Resume them after restart under the file and network menus!' )
HydrusData.ShowText( 'A critical drive error has occurred. All importers--subscriptions, import folders, and paged file import queues--have been paused. Once the issue is clear, restart the client and resume your imports after restart under the file and network menus!' )

self._controller.pub( 'notify_refresh_network_menu' )

@@ -587,7 +587,7 @@ def GetTitleFromAllParseResults( all_parse_results ):

def GetHTTPHeadersFromParseResults( parse_results ):
    
    headers = {}
    
    for ( ( name, content_type, additional_info ), parsed_text ) in parse_results:
        
        if content_type == HC.CONTENT_TYPE_HTTP_HEADERS:
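
The fragment above is cut off by the diff, but the pattern it shows--walking (descriptor, text) result pairs and collecting only the HTTP-header entries into a dict--can be sketched in isolation. This is a standalone illustration, not hydrus's actual function: the constant value and the header-naming detail are assumptions.

```python
# stand-in for HC.CONTENT_TYPE_HTTP_HEADERS in the fragment above
CONTENT_TYPE_HTTP_HEADERS = 1

def get_http_headers_from_parse_results( parse_results ):
    """Collect header name -> value pairs from (descriptor, text) results."""

    headers = {}

    for ( ( name, content_type, additional_info ), parsed_text ) in parse_results:

        if content_type == CONTENT_TYPE_HTTP_HEADERS:

            # assumption: 'name' carries the header name, the parsed text its value
            headers[ name ] = parsed_text

    return headers
```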

@@ -2,6 +2,7 @@ import os
import numpy
import threading
import time
import typing

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

@@ -18,6 +19,7 @@ from hydrus.core import HydrusVideoHandling
from hydrus.client import ClientFiles
from hydrus.client import ClientImageHandling
from hydrus.client import ClientVideoHandling
from hydrus.client.caches import ClientCachesBase

def FrameIndexOutOfRange( index, range_start, range_end ):

@@ -68,10 +70,12 @@ def GenerateHydrusBitmapFromPILImage( pil_image, compressed = True ):

    return HydrusBitmap( pil_image.tobytes(), pil_image.size, depth, compressed = compressed )

class ImageRenderer( object ):
class ImageRenderer( ClientCachesBase.CacheableObject ):
    
    def __init__( self, media, this_is_for_metadata_alone = False ):
        
        ClientCachesBase.CacheableObject.__init__( self )
        
        self._numpy_image = None
        
        self._hash = media.GetHash()

@@ -290,7 +294,7 @@ class ImageRenderer( object ):

    def GetResolution( self ): return self._resolution
    
    def GetQtImage( self, clip_rect = None, target_resolution = None ):
    def GetQtImage( self, clip_rect: typing.Optional[ QC.QRect ] = None, target_resolution: typing.Optional[ QC.QSize ] = None ):
        
        if clip_rect is None:

@@ -408,10 +412,12 @@ class ImageRenderer( object ):

        return self._numpy_image is not None
    

class ImageTile( object ):
class ImageTile( ClientCachesBase.CacheableObject ):
    
    def __init__( self, hash: bytes, clip_rect: QC.QRect, qt_pixmap: QG.QPixmap ):
        
        ClientCachesBase.CacheableObject.__init__( self )
        
        self.hash = hash
        self.clip_rect = clip_rect
        self.qt_pixmap = qt_pixmap

@@ -972,10 +978,12 @@ class RasterContainerVideo( RasterContainer ):

        self._stop = True
    

class HydrusBitmap( object ):
class HydrusBitmap( ClientCachesBase.CacheableObject ):
    
    def __init__( self, data, size, depth, compressed = True ):
        
        ClientCachesBase.CacheableObject.__init__( self )
        
        self._compressed = compressed
        
        if isinstance( data, memoryview ) and not data.c_contiguous:

@@ -171,14 +171,6 @@ class GIFRenderer( object ):

        self._pil_global_palette = self._pil_image.palette
        
        # TODO: u wot mate, why is this 'and False'? I assume this is preservation/transferrance of frame/image metadata, but it's been off for years probably, so...
        if self._pil_global_palette is not None and False:
            
            self._pil_dirty = self._pil_image.palette.dirty
            self._pil_mode = self._pil_image.palette.mode
            self._pil_rawmode = self._pil_image.palette.rawmode
            
        
        self._next_render_index = 0
        self._last_frame = None

@@ -3,6 +3,7 @@ import json
import os
import threading
import time
import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusExceptions

@@ -19,221 +20,8 @@ from hydrus.client import ClientImageHandling
from hydrus.client import ClientParsing
from hydrus.client import ClientRendering
from hydrus.client import ClientThreading
from hydrus.client.caches import ClientCachesBase

class DataCache( object ):
    
    def __init__( self, controller, name, cache_size, timeout = 1200 ):
        
        self._controller = controller
        self._name = name
        self._cache_size = cache_size
        self._timeout = timeout
        
        self._keys_to_data = {}
        self._keys_fifo = collections.OrderedDict()
        
        self._total_estimated_memory_footprint = 0
        
        self._lock = threading.Lock()
        
        self._controller.sub( self, 'MaintainCache', 'memory_maintenance_pulse' )
        
    
    def _Delete( self, key ):
        
        if key not in self._keys_to_data:
            
            return
            
        
        ( data, size_estimate ) = self._keys_to_data[ key ]
        
        del self._keys_to_data[ key ]
        
        self._total_estimated_memory_footprint -= size_estimate
        
        if HG.cache_report_mode:
            
            HydrusData.ShowText( 'Cache "{}" removing "{}", size "{}". Current size {}.'.format( self._name, key, HydrusData.ToHumanBytes( size_estimate ), HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size ) ) )
            
        
    
    def _DeleteItem( self ):
        
        ( deletee_key, last_access_time ) = self._keys_fifo.popitem( last = False )
        
        self._Delete( deletee_key )
        
    
    def _GetData( self, key ):
        
        if key not in self._keys_to_data:
            
            raise Exception( 'Cache error! Looking for {}, but it was missing.'.format( key ) )
            
        
        self._TouchKey( key )
        
        ( data, size_estimate ) = self._keys_to_data[ key ]
        
        new_estimate = data.GetEstimatedMemoryFootprint()
        
        if new_estimate != size_estimate:
            
            self._total_estimated_memory_footprint += new_estimate - size_estimate
            
            self._keys_to_data[ key ] = ( data, new_estimate )
            
        
        return data
        
    
    def _TouchKey( self, key ):
        
        # have to delete first, rather than overwriting, so the ordereddict updates its internal order
        if key in self._keys_fifo:
            
            del self._keys_fifo[ key ]
            
        
        self._keys_fifo[ key ] = HydrusTime.GetNow()
        
    
    def Clear( self ):
        
        with self._lock:
            
            self._keys_to_data = {}
            self._keys_fifo = collections.OrderedDict()
            
            self._total_estimated_memory_footprint = 0
            
        
    
    def AddData( self, key, data ):
        
        with self._lock:
            
            if key not in self._keys_to_data:
                
                while self._total_estimated_memory_footprint > self._cache_size:
                    
                    self._DeleteItem()
                    
                
                size_estimate = data.GetEstimatedMemoryFootprint()
                
                self._keys_to_data[ key ] = ( data, size_estimate )
                
                self._total_estimated_memory_footprint += size_estimate
                
                self._TouchKey( key )
                
                if HG.cache_report_mode:
                    
                    HydrusData.ShowText(
                        'Cache "{}" adding "{}" ({}). Current size {}.'.format(
                            self._name,
                            key,
                            HydrusData.ToHumanBytes( size_estimate ),
                            HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size )
                        )
                    )
                    
                
            
        
    
    def DeleteData( self, key ):
        
        with self._lock:
            
            self._Delete( key )
            
        
    
    def GetData( self, key ):
        
        with self._lock:
            
            return self._GetData( key )
            
        
def GetIfHasData( self, key ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
if key in self._keys_to_data:
|
||||
|
||||
return self._GetData( key )
|
||||
|
||||
else:
|
||||
|
||||
return None
|
||||
|
||||
|
||||
|
||||
|
||||
def GetSizeLimit( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
return self._cache_size
|
||||
|
||||
|
||||
|
||||
def HasData( self, key ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
return key in self._keys_to_data
|
||||
|
||||
|
||||
|
||||
def MaintainCache( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
while self._total_estimated_memory_footprint > self._cache_size:
|
||||
|
||||
self._DeleteItem()
|
||||
|
||||
|
||||
while True:
|
||||
|
||||
if len( self._keys_fifo ) == 0:
|
||||
|
||||
break
|
||||
|
||||
else:
|
||||
|
||||
( key, last_access_time ) = next( iter( self._keys_fifo.items() ) )
|
||||
|
||||
if HydrusTime.TimeHasPassed( last_access_time + self._timeout ):
|
||||
|
||||
self._DeleteItem()
|
||||
|
||||
else:
|
||||
|
||||
break
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
def SetCacheSizeAndTimeout( self, cache_size, timeout ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._cache_size = cache_size
|
||||
self._timeout = timeout
|
||||
|
||||
|
||||
self.MaintainCache()
|
||||
|
||||
|
||||
class LocalBooruCache( object ):
|
||||
|
||||
def __init__( self, controller ):
|
||||
|
@@ -500,7 +288,7 @@ class ImageRendererCache( object ):
        cache_size = self._controller.new_options.GetInteger( 'image_cache_size' )
        cache_timeout = self._controller.new_options.GetInteger( 'image_cache_timeout' )
        
        self._data_cache = DataCache( self._controller, 'image cache', cache_size, timeout = cache_timeout )
        self._data_cache = ClientCachesBase.DataCache( self._controller, 'image cache', cache_size, timeout = cache_timeout )
        
        self._controller.sub( self, 'NotifyNewOptions', 'notify_new_options' )
        
    
@@ -577,7 +365,7 @@ class ImageTileCache( object ):
        cache_size = self._controller.new_options.GetInteger( 'image_tile_cache_size' )
        cache_timeout = self._controller.new_options.GetInteger( 'image_tile_cache_timeout' )
        
        self._data_cache = DataCache( self._controller, 'image tile cache', cache_size, timeout = cache_timeout )
        self._data_cache = ClientCachesBase.DataCache( self._controller, 'image tile cache', cache_size, timeout = cache_timeout )
        
        self._controller.sub( self, 'NotifyNewOptions', 'notify_new_options' )
        self._controller.sub( self, 'Clear', 'clear_image_tile_cache' )
        
@@ -637,7 +425,7 @@ class ThumbnailCache( object ):
        cache_size = self._controller.new_options.GetInteger( 'thumbnail_cache_size' )
        cache_timeout = self._controller.new_options.GetInteger( 'thumbnail_cache_timeout' )
        
        self._data_cache = DataCache( self._controller, 'thumbnail cache', cache_size, timeout = cache_timeout )
        self._data_cache = ClientCachesBase.DataCache( self._controller, 'thumbnail cache', cache_size, timeout = cache_timeout )
        
        self._magic_mime_thumbnail_ease_score_lookup = {}
        
@@ -0,0 +1,238 @@
import collections
import threading
import typing

from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTime

class CacheableObject( object ):
    
    def GetEstimatedMemoryFootprint( self ) -> int:
        
        raise NotImplementedError()
        
    

class DataCache( object ):
    
    def __init__( self, controller, name, cache_size, timeout = 1200 ):
        
        self._controller = controller
        self._name = name
        self._cache_size = cache_size
        self._timeout = timeout
        
        self._keys_to_data = {}
        self._keys_fifo = collections.OrderedDict()
        
        self._total_estimated_memory_footprint = 0
        
        self._lock = threading.Lock()
        
        self._controller.sub( self, 'MaintainCache', 'memory_maintenance_pulse' )
        
    
    def _Delete( self, key ):
        
        if key not in self._keys_to_data:
            
            return
            
        
        ( data, size_estimate ) = self._keys_to_data[ key ]
        
        del self._keys_to_data[ key ]
        
        self._total_estimated_memory_footprint -= size_estimate
        
        if HG.cache_report_mode:
            
            HydrusData.ShowText( 'Cache "{}" removing "{}", size "{}". Current size {}.'.format( self._name, key, HydrusData.ToHumanBytes( size_estimate ), HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size ) ) )
            
        
    
    def _DeleteItem( self ):
        
        ( deletee_key, last_access_time ) = self._keys_fifo.popitem( last = False )
        
        self._Delete( deletee_key )
        
    
    def _GetData( self, key ) -> CacheableObject:
        
        if key not in self._keys_to_data:
            
            raise Exception( 'Cache error! Looking for {}, but it was missing.'.format( key ) )
            
        
        self._TouchKey( key )
        
        ( data, size_estimate ) = self._keys_to_data[ key ]
        
        new_estimate = data.GetEstimatedMemoryFootprint()
        
        if new_estimate != size_estimate:
            
            self._total_estimated_memory_footprint += new_estimate - size_estimate
            
            self._keys_to_data[ key ] = ( data, new_estimate )
            
        
        return data
        
    
    def _TouchKey( self, key ):
        
        # have to delete first, rather than overwriting, so the ordereddict updates its internal order
        if key in self._keys_fifo:
            
            del self._keys_fifo[ key ]
            
        
        self._keys_fifo[ key ] = HydrusTime.GetNow()
        
    
    def Clear( self ):
        
        with self._lock:
            
            self._keys_to_data = {}
            self._keys_fifo = collections.OrderedDict()
            
            self._total_estimated_memory_footprint = 0
            
        
    
    def AddData( self, key, data: CacheableObject ):
        
        with self._lock:
            
            if key not in self._keys_to_data:
                
                while self._total_estimated_memory_footprint > self._cache_size:
                    
                    self._DeleteItem()
                    
                
                size_estimate = data.GetEstimatedMemoryFootprint()
                
                self._keys_to_data[ key ] = ( data, size_estimate )
                
                self._total_estimated_memory_footprint += size_estimate
                
                self._TouchKey( key )
                
                if HG.cache_report_mode:
                    
                    HydrusData.ShowText(
                        'Cache "{}" adding "{}" ({}). Current size {}.'.format(
                            self._name,
                            key,
                            HydrusData.ToHumanBytes( size_estimate ),
                            HydrusData.ConvertValueRangeToBytes( self._total_estimated_memory_footprint, self._cache_size )
                        )
                    )
                    
                
            
        
    
    def DeleteData( self, key ):
        
        with self._lock:
            
            self._Delete( key )
            
        
    
    def GetData( self, key ) -> CacheableObject:
        
        with self._lock:
            
            return self._GetData( key )
            
        
    
    def GetIfHasData( self, key ) -> typing.Optional[ CacheableObject ]:
        
        with self._lock:
            
            if key in self._keys_to_data:
                
                return self._GetData( key )
                
            else:
                
                return None
                
            
        
    
    def GetSizeLimit( self ) -> int:
        
        with self._lock:
            
            return self._cache_size
            
        
    
    def HasData( self, key ) -> bool:
        
        with self._lock:
            
            return key in self._keys_to_data
            
        
    
    def MaintainCache( self ) -> None:
        
        with self._lock:
            
            while self._total_estimated_memory_footprint > self._cache_size:
                
                self._DeleteItem()
                
            
            while True:
                
                if len( self._keys_fifo ) == 0:
                    
                    break
                    
                else:
                    
                    ( key, last_access_time ) = next( iter( self._keys_fifo.items() ) )
                    
                    if HydrusTime.TimeHasPassed( last_access_time + self._timeout ):
                        
                        self._DeleteItem()
                        
                    else:
                        
                        break
                        
                    
                
            
        
    
    def SetCacheSizeAndTimeout( self, cache_size, timeout ) -> None:
        
        with self._lock:
            
            self._cache_size = cache_size
            self._timeout = timeout
            
        
        self.MaintainCache()
        
    
    def TouchKey( self, key ):
        
        with self._lock:
            
            self._TouchKey( key )
            
        
    
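The new ClientCachesBase.DataCache above is an LRU-style cache that evicts by estimated memory footprint rather than entry count. A condensed, standalone restatement of its eviction bookkeeping (names simplified; the real class also handles timeouts, locking, and controller signals, and `TinyDataCache` here is only an illustrative sketch):

```python
import collections

class TinyDataCache( object ):
    
    def __init__( self, cache_size ):
        
        self._cache_size = cache_size
        self._keys_to_data = {}
        self._keys_fifo = collections.OrderedDict()
        self._total_estimated_memory_footprint = 0
        
    
    def _Delete( self, key ):
        
        ( data, size_estimate ) = self._keys_to_data.pop( key )
        
        self._total_estimated_memory_footprint -= size_estimate
        
    
    def _TouchKey( self, key ):
        
        # delete first, rather than overwriting, so the OrderedDict updates its order
        if key in self._keys_fifo:
            
            del self._keys_fifo[ key ]
            
        
        self._keys_fifo[ key ] = True
        
    
    def AddData( self, key, data, size_estimate ):
        
        if key not in self._keys_to_data:
            
            # evict least-recently-touched entries until back under budget; note the
            # check runs before the add, so one add can overshoot until the next add
            while self._total_estimated_memory_footprint > self._cache_size:
                
                ( deletee_key, _ ) = self._keys_fifo.popitem( last = False )
                
                self._Delete( deletee_key )
                
            
            self._keys_to_data[ key ] = ( data, size_estimate )
            self._total_estimated_memory_footprint += size_estimate
            
            self._TouchKey( key )
            
        
    
    def HasData( self, key ):
        
        return key in self._keys_to_data
        
    

cache = TinyDataCache( 100 )

cache.AddData( 'a', 'data a', 60 )
cache.AddData( 'b', 'data b', 60 ) # overshoots to 120, since the budget check precedes the add
cache.AddData( 'c', 'data c', 10 ) # now over budget: 'a', the oldest-touched key, is evicted
```

Since the budget check happens before each insert, the cache can temporarily exceed its limit; the periodic MaintainCache pulse in the real class brings it back down between adds.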
@@ -8733,7 +8733,7 @@ class DB( HydrusDB.HydrusDB ):
            
            self.modules_serialisable.SetJSONDump( new_options )
            
        except:
        except Exception as e:
            
            HydrusData.PrintException( e )
            
        
@@ -9029,7 +9029,7 @@ class DB( HydrusDB.HydrusDB ):
            self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ( -num_cleared, clear_service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) for ( clear_service_id, num_cleared ) in service_ids_to_nums_cleared.items() ) )
            
        except:
        except Exception as e:
            
            HydrusData.PrintException( e )
            
        
@@ -9442,6 +9442,38 @@ class DB( HydrusDB.HydrusDB ):
        
        
        if version == 532:
            
            try:
                
                domain_manager = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
                
                domain_manager.Initialise()
                
                #
                
                domain_manager.OverwriteDefaultParsers( [
                    'pixiv file page api parser'
                ] )
                
                #
                
                domain_manager.TryToLinkURLClassesAndParsers()
                
                #
                
                self.modules_serialisable.SetJSONDump( domain_manager )
                
            except Exception as e:
                
                HydrusData.PrintException( e )
                
                message = 'Trying to update some downloader objects failed! Please let hydrus dev know!'
                
                self.pub_initial_message( message )
                
            
        
        self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
        
        self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
@@ -1267,6 +1267,13 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):
        
        system_predicates = file_search_context.GetSystemPredicates()
        
        system_limit = system_predicates.GetLimit( apply_implicit_limit = apply_implicit_limit )
        
        if system_limit == 0:
            
            return []
            
        
        location_context = file_search_context.GetLocationContext()
        tag_context = file_search_context.GetTagContext()
        
@@ -2277,9 +2284,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):
        
        #
        
        limit = system_predicates.GetLimit( apply_implicit_limit = apply_implicit_limit )
        
        we_are_applying_limit = limit is not None and limit < len( query_hash_ids )
        we_are_applying_limit = system_limit is not None and system_limit < len( query_hash_ids )
        
        if we_are_applying_limit and limit_sort_by is not None and sort_by is None:
        
@@ -2299,11 +2304,11 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):
        
        if not did_sort:
            
            query_hash_ids = random.sample( query_hash_ids, limit )
            query_hash_ids = random.sample( query_hash_ids, system_limit )
            
        else:
            
            query_hash_ids = query_hash_ids[:limit]
            query_hash_ids = query_hash_ids[:system_limit]
            
        
        
@@ -2589,7 +2594,7 @@ class ClientDBFilesQuery( ClientDBModule.ClientDBModule ):
        
        hash_ids = sorted( hash_ids, key = lambda hash_id: hash_ids_to_hex_hashes[ hash_id ] )
        
        did_sort = True
        reverse = sort_order == CC.SORT_DESC
        
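The hunks above make system:limit=0 short-circuit the search immediately and apply a non-zero limit either by slicing (when the ids were already sorted) or by random sampling (when they were not, so a limited unsorted search stays a fair sample). A minimal sketch of that flow, with `apply_system_limit` as a hypothetical free-function restatement of the logic:

```python
import random

def apply_system_limit( query_hash_ids, system_limit, did_sort ):
    
    # new in this change: a zero limit returns immediately, skipping all search work
    if system_limit == 0:
        
        return []
        
    
    we_are_applying_limit = system_limit is not None and system_limit < len( query_hash_ids )
    
    if we_are_applying_limit:
        
        if not did_sort:
            
            # no sort requested: take a random sample rather than a biased prefix
            query_hash_ids = random.sample( query_hash_ids, system_limit )
            
        else:
            
            # sorted results: the limit keeps the top of the ordering
            query_hash_ids = query_hash_ids[ : system_limit ]
            
        
    
    return query_hash_ids
```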
@@ -1154,41 +1154,43 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
    
    def _ClearOrphanFiles( self ):
        
        text = 'This job will iterate through every file in your database\'s file storage, removing any it does not expect to be there. It may take some time.'
        text = 'This job will iterate through every file in your database\'s file storage, extracting any it does not expect to be there. This is particularly useful for \'re-syncing\' your file storage to what it should be, and is particularly useful if you are marrying an older/newer database with a newer/older file storage.'
        text += os.linesep * 2
        text += 'Files and thumbnails will be inaccessible while this occurs, so it is best to leave the client alone until it is done.'
        text += 'You can choose to move the orphans in your file directories somewhere or delete them. Orphans in your thumbnail directories will always be deleted.'
        text += os.linesep * 2
        text += 'Files and thumbnails will be inaccessible while this runs, so it is best to leave the client alone until it is done. It may take some time.'
        
        result = ClientGUIDialogsQuick.GetYesNo( self, text, yes_label = 'get started', no_label = 'forget it' )
        yes_tuples = []
        
        if result == QW.QDialog.Accepted:
        yes_tuples.append( ( 'move them somewhere', 'move' ) )
        yes_tuples.append( ( 'delete them', 'delete' ) )
        
        try:
            
            text = 'What would you like to do with the orphaned files? Note that all orphaned thumbnails will be deleted.'
            result = ClientGUIDialogsQuick.GetYesYesNo( self, text, yes_tuples = yes_tuples, no_label = 'forget it' )
            
            client_files_manager = self._controller.client_files_manager
        except HydrusExceptions.CancelledException:
            
            ( result, was_cancelled ) = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Choose what do to with the orphans.', yes_label = 'move them somewhere', no_label = 'delete them', check_for_cancelled = True )
            return
            
        if was_cancelled:
            
        client_files_manager = self._controller.client_files_manager
        
        if result == 'move':
            
            with QP.DirDialog( self, 'Select location.' ) as dlg_3:
                
                return
            
            
        if result == QW.QDialog.Accepted:
            
            with QP.DirDialog( self, 'Select location.' ) as dlg_3:
            if dlg_3.exec() == QW.QDialog.Accepted:
                
                if dlg_3.exec() == QW.QDialog.Accepted:
                    
                    path = dlg_3.GetPath()
                    
                    self._controller.CallToThread( client_files_manager.ClearOrphans, path )
                
                path = dlg_3.GetPath()
                
                self._controller.CallToThread( client_files_manager.ClearOrphans, path )
                
            
        elif result == QW.QDialog.Rejected:
            
            self._controller.CallToThread( client_files_manager.ClearOrphans )
            
        
        elif result == 'delete':
            
            self._controller.CallToThread( client_files_manager.ClearOrphans )
            
        
    
@@ -269,7 +269,7 @@ def IsQtAncestor( child: QW.QWidget, ancestor: QW.QWidget, through_tlws = False
    
    if through_tlws:
        
        while not parent is None:
        while parent is not None:
            
            if parent == ancestor:
            
        
@@ -103,7 +103,7 @@ class PagesSearchProvider( QAbstractLocatorSearchProvider ):
    
    label = selectable_media_page.GetNameForMenu()
    
    if not query in label:
    if query not in label:
        
        continue
        
    
@@ -213,7 +213,7 @@ class MainMenuSearchProvider( QAbstractLocatorSearchProvider ):
    
    else:
        
        if not query in action.text() and not query in actionText:
        if query not in action.text() and query not in actionText:
            
            continue
            
        
@@ -341,7 +341,7 @@ class MediaMenuSearchProvider( QAbstractLocatorSearchProvider ):
    
    else:
        
        if not query in action.text() and not query in actionText:
        if query not in action.text() and query not in actionText:
            
            continue
            
        
@@ -379,4 +379,3 @@ class MediaMenuSearchProvider( QAbstractLocatorSearchProvider ):
    return str() #TODO fill this in
    
# TODO: provider for page tab right click menu actions?
@@ -129,6 +129,7 @@ class PopupMessage( PopupWindow ):
        self._no.hide()
        
        self._network_job_ctrl = ClientGUINetworkJobControl.NetworkJobControl( self )
        self._network_job_ctrl.SetShouldUpdateFreely( True )
        self._network_job_ctrl.hide()
        self._time_network_job_disappeared = 0
        
@@ -1917,7 +1917,7 @@ class EditFileNotesPanel( CAC.ApplicationCommandProcessorMixin, ClientGUIScrolle
    
    for item in names_and_notes:
        
        if not isinstance( item, collections.abc.Collection ):
        if not isinstance( item, HydrusData.LIST_LIKE_COLLECTION ):
            
            continue
            
        
@@ -1,60 +1,10 @@
import os
import traceback
from hydrus.client.gui import QtInitImportTest

# If not explicitly set, prefer PySide instead of PyQt, which is the qtpy default
# It is critical that this runs on startup *before* anything is imported from qtpy.

def get_qt_library_str_status():
    
    infos = []
    
    try:
        
        import PyQt5
        
        infos.append( 'PyQt5 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PyQt5 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PySide2
        
        infos.append( 'PySide2 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PySide2 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PyQt6
        
        infos.append( 'PyQt6 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PyQt6 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PySide6
        
        infos.append( 'PySide6 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PySide6 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    return '\n'.join( infos )

if 'QT_API' in os.environ:
    
    QT_API_INITIAL_VALUE = os.environ[ 'QT_API' ]
    
@@ -179,7 +129,7 @@ except ModuleNotFoundError as e:
    
    message += '\n' * 2
    
    message += 'Here is info on your available Qt Libraries:\n{}'.format( get_qt_library_str_status() )
    message += 'Here is info on your available Qt Libraries:\n{}'.format( QtInitImportTest.get_qt_library_str_status() )
    
    raise Exception( message )
    
@@ -218,7 +168,7 @@ except ModuleNotFoundError as e:
    
    message += '\n' * 2
    
    message += 'Here is info on your available Qt Libraries:\n\n{}'.format( get_qt_library_str_status() )
    message += 'Here is info on your available Qt Libraries:\n\n{}'.format( QtInitImportTest.get_qt_library_str_status() )
    
    message += '\n' * 2
    
@@ -0,0 +1,54 @@
import traceback

# this is separate to help out some linting since we are spamming imports here

def get_qt_library_str_status():
    
    infos = []
    
    try:
        
        import PyQt5
        
        infos.append( 'PyQt5 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PyQt5 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PySide2
        
        infos.append( 'PySide2 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PySide2 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PyQt6
        
        infos.append( 'PyQt6 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PyQt6 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    try:
        
        import PySide6
        
        infos.append( 'PySide6 imported ok' )
        
    except Exception as e:
        
        infos.append( 'PySide6 did not import ok:\n{}'.format( traceback.format_exc() ) )
        
    
    return '\n'.join( infos )
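The QtInit comments above note that qtpy picks its binding from the QT_API environment variable at import time, so any preference has to be set before the first qtpy import. A minimal sketch of that startup ordering; the 'pyside6' value here is illustrative, not necessarily hydrus's exact choice:

```python
import os

# qtpy reads QT_API once, at import time, so the preference must be in the
# environment before anything imports qtpy
if 'QT_API' not in os.environ:
    
    os.environ[ 'QT_API' ] = 'pyside6' # prefer PySide over qtpy's PyQt default
    

# only after this point is it safe to import qtpy, or any module that does
```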
@@ -1,6 +1,7 @@
#This file is licensed under the Do What the Fuck You Want To Public License aka WTFPL

import os
import typing

import qtpy

@@ -2185,16 +2186,18 @@ class TreeWidgetWithInheritedCheckState( QW.QTreeWidget ):
    
    

def ListsToTuples( l ): # Since lists are not hashable, we need to (recursively) convert lists to tuples in data that is to be added to BetterListCtrl
def ListsToTuples( potentially_nested_lists ):
    
    if isinstance( l, list ) or isinstance( l, tuple ):
        
        return tuple( map( ListsToTuples, l ) )
        
    if isinstance( potentially_nested_lists, HydrusData.LIST_LIKE_COLLECTION ):
        
        return tuple( map( ListsToTuples, potentially_nested_lists ) )
        
    else:
        
        return l
        
    return potentially_nested_lists
    

class WidgetEventFilter ( QC.QObject ):
    
@@ -2216,7 +2219,7 @@ class WidgetEventFilter ( QC.QObject ):
    
    def _ExecuteCallbacks( self, event_name, event ):
        
        if not event_name in self._callback_map: return
        if event_name not in self._callback_map: return
        
        event_killed = False
        
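The rewritten ListsToTuples above recursively converts list-likes into tuples so rows become hashable for BetterListCtrl. A standalone sketch, with LIST_LIKE_COLLECTION assumed here to be `( list, tuple )` in place of the hydrus HydrusData alias:

```python
# assumption: stand-in for HydrusData.LIST_LIKE_COLLECTION
LIST_LIKE_COLLECTION = ( list, tuple )

def ListsToTuples( potentially_nested_lists ):
    
    # recurse into list-likes, turning every level into a tuple
    if isinstance( potentially_nested_lists, LIST_LIKE_COLLECTION ):
        
        return tuple( map( ListsToTuples, potentially_nested_lists ) )
        
    
    return potentially_nested_lists
    

row = [ 'name', [ 1, 2, [ 3 ] ], ( 4, 5 ) ]

hashable_row = ListsToTuples( row ) # now usable as a dict key
```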
@@ -8,6 +8,7 @@ from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
from hydrus.core import HydrusLists
from hydrus.core import HydrusPaths
from hydrus.core import HydrusTags

@@ -377,7 +378,7 @@ class Canvas( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
    
    
    def _CopyBMPToClipboard( self ):
    def _CopyBMPToClipboard( self, resolution = None ):
        
        copied = False
        
@@ -385,7 +386,7 @@ class Canvas( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
        
        if self._current_media.GetMime() in HC.IMAGES:
            
            HG.client_controller.pub( 'clipboard', 'bmp', self._current_media )
            HG.client_controller.pub( 'clipboard', 'bmp', ( self._current_media, resolution ) )
            
            copied = True
            
@@ -1550,7 +1551,16 @@ class CanvasPanel( Canvas ):
        
        if self._current_media.GetMime() in HC.IMAGES:
            
            ClientGUIMenus.AppendMenuItem( copy_menu, 'image (bitmap)', 'Copy this file to your clipboard as a bmp.', self._CopyBMPToClipboard )
            ClientGUIMenus.AppendMenuItem( copy_menu, 'bitmap', 'Copy this file to your clipboard as a bitmap.', self._CopyBMPToClipboard )
            
            ( width, height ) = self._current_media.GetResolution()
            
            if width > 1024 or height > 1024:
                
                ( clip_rect, clipped_res ) = HydrusImageHandling.GetThumbnailResolutionAndClipRegion( self._current_media.GetResolution(), ( 1024, 1024 ), HydrusImageHandling.THUMBNAIL_SCALE_TO_FIT, 100 )
                
                ClientGUIMenus.AppendMenuItem( copy_menu, 'source lookup bitmap ({}x{})'.format( clipped_res[0], clipped_res[1] ), 'Copy a smaller bitmap of this file, for quicker lookup on source-finding websites.', self._CopyBMPToClipboard, clipped_res )
                
            
        
        ClientGUIMenus.AppendMenuItem( copy_menu, 'path', 'Copy this file\'s path to your clipboard.', self._CopyPathToClipboard )
        
@@ -4545,7 +4555,16 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):
        
        if self._current_media.GetMime() in HC.IMAGES:
            
            ClientGUIMenus.AppendMenuItem( copy_menu, 'image (bitmap)', 'Copy this file to your clipboard as a BMP image.', self._CopyBMPToClipboard )
            ClientGUIMenus.AppendMenuItem( copy_menu, 'bitmap', 'Copy this file to your clipboard as a bitmap.', self._CopyBMPToClipboard )
            
            ( width, height ) = self._current_media.GetResolution()
            
            if width > 1024 or height > 1024:
                
                ( clip_rect, clipped_res ) = HydrusImageHandling.GetThumbnailResolutionAndClipRegion( self._current_media.GetResolution(), ( 1024, 1024 ), HydrusImageHandling.THUMBNAIL_SCALE_TO_FIT, 100 )
                
                ClientGUIMenus.AppendMenuItem( copy_menu, 'source lookup bitmap ({}x{})'.format( clipped_res[0], clipped_res[1] ), 'Copy a smaller bitmap of this file, for quicker lookup on source-finding websites.', self._CopyBMPToClipboard, clipped_res )
                
            
        
        ClientGUIMenus.AppendMenuItem( copy_menu, 'path', 'Copy this file\'s path to your clipboard.', self._CopyPathToClipboard )
        
@@ -35,6 +35,7 @@ from hydrus.client.gui import ClientGUIShortcuts
from hydrus.client.gui import QtInit
from hydrus.client.gui import QtPorting as QP
from hydrus.client.gui.canvas import ClientGUIMPV
from hydrus.client.gui.canvas import ClientGUIMediaVolume
from hydrus.client.gui.widgets import ClientGUICommon
from hydrus.client.media import ClientMedia

@@ -2975,6 +2976,9 @@ class QtMediaPlayer( QW.QWidget ):
        
        self._my_shortcut_handler = ClientGUIShortcuts.ShortcutsHandler( self, [ shortcut_set ], catch_mouse = True )
        
        HG.client_controller.sub( self, 'UpdateAudioMute', 'new_audio_mute' )
        HG.client_controller.sub( self, 'UpdateAudioVolume', 'new_audio_volume' )
        
    
    def _MediaStatusChanged( self, status ):
        
    
@@ -3211,6 +3215,9 @@ class QtMediaPlayer( QW.QWidget ):
            self._media_player.play()
            
        
        self._my_audio_output.setVolume( ClientGUIMediaVolume.GetCorrectCurrentVolume( self._canvas_type ) )
        self._my_audio_output.setMuted( ClientGUIMediaVolume.GetCorrectCurrentMute( self._canvas_type ) )
        
    
    def TryToUnload( self ):
        
    
@@ -3221,6 +3228,16 @@ class QtMediaPlayer( QW.QWidget ):
        
    
    
    def UpdateAudioMute( self ):
        
        self._my_audio_output.setMuted( ClientGUIMediaVolume.GetCorrectCurrentMute( self._canvas_type ) )
        
    
    def UpdateAudioVolume( self ):
        
        self._my_audio_output.setVolume( ClientGUIMediaVolume.GetCorrectCurrentVolume( self._canvas_type ) )
        
    

class StaticImage( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
    
@@ -1,4 +1,3 @@
import functools
import locale
import os
import traceback

@@ -12,7 +11,6 @@ from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
from hydrus.core import HydrusPaths
from hydrus.core import HydrusTime
from hydrus.core import HydrusVideoHandling

from hydrus.client import ClientApplicationCommand as CAC

@@ -21,6 +19,7 @@ from hydrus.client.gui import ClientGUIMedia
from hydrus.client.gui import ClientGUIMediaControls
from hydrus.client.gui import ClientGUIShortcuts
from hydrus.client.gui import QtPorting as QP
from hydrus.client.gui.canvas import ClientGUIMediaVolume
from hydrus.client.media import ClientMedia

mpv_failed_reason = 'MPV seems ok!'

@@ -286,46 +285,6 @@ class MPVWidget( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
        return ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_GLOBAL ]
        
    
    def _GetCorrectCurrentMute( self ):
        
        ( global_mute_option_name, global_volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_GLOBAL ]
        
        mute_option_name = global_mute_option_name
        
        if self._canvas_type in CC.CANVAS_MEDIA_VIEWER_TYPES:
            
            ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_MEDIA_VIEWER ]
            
        elif self._canvas_type == CC.CANVAS_PREVIEW:
            
            ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_PREVIEW ]
            
        
        return HG.client_controller.new_options.GetBoolean( mute_option_name ) or HG.client_controller.new_options.GetBoolean( global_mute_option_name )
        
    
    def _GetCorrectCurrentVolume( self ):
        
        ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_GLOBAL ]
        
        if self._canvas_type in CC.CANVAS_MEDIA_VIEWER_TYPES:
            
            if HG.client_controller.new_options.GetBoolean( 'media_viewer_uses_its_own_audio_volume' ):
                
                ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_MEDIA_VIEWER ]
                
            
        elif self._canvas_type == CC.CANVAS_PREVIEW:
            
            if HG.client_controller.new_options.GetBoolean( 'preview_uses_its_own_audio_volume' ):
                
                ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_PREVIEW ]
                
            
        
        return HG.client_controller.new_options.GetInteger( volume_option_name )
        
    
    def _InitialiseMPVCallbacks( self ):
        
        player = self._player
        
@@ -774,8 +733,8 @@ class MPVWidget( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
            HydrusData.ShowException( e )
            
        
        self._player.volume = self._GetCorrectCurrentVolume()
        self._player.mute = mute_override or self._GetCorrectCurrentMute()
        self._player.volume = ClientGUIMediaVolume.GetCorrectCurrentVolume( self._canvas_type )
        self._player.mute = mute_override or ClientGUIMediaVolume.GetCorrectCurrentMute( self._canvas_type )
        self._player.pause = start_paused
        
    
@@ -787,12 +746,12 @@ class MPVWidget( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
    
    def UpdateAudioMute( self ):
        
        self._player.mute = self._GetCorrectCurrentMute()
        self._player.mute = ClientGUIMediaVolume.GetCorrectCurrentMute( self._canvas_type )
        
    
    def UpdateAudioVolume( self ):
        
        self._player.volume = self._GetCorrectCurrentVolume()
        self._player.volume = ClientGUIMediaVolume.GetCorrectCurrentVolume( self._canvas_type )
        
    
    def UpdateConf( self ):
        
@@ -0,0 +1,44 @@
+from hydrus.core import HydrusGlobals as HG
+
+from hydrus.client import ClientConstants as CC
+from hydrus.client.gui import ClientGUIMediaControls
+
+def GetCorrectCurrentMute( canvas_type: int ):
+
+    ( global_mute_option_name, global_volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_GLOBAL ]
+
+    mute_option_name = global_mute_option_name
+
+    if canvas_type in CC.CANVAS_MEDIA_VIEWER_TYPES:
+
+        ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_MEDIA_VIEWER ]
+
+    elif canvas_type == CC.CANVAS_PREVIEW:
+
+        ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_PREVIEW ]
+
+    return HG.client_controller.new_options.GetBoolean( mute_option_name ) or HG.client_controller.new_options.GetBoolean( global_mute_option_name )
+
+def GetCorrectCurrentVolume( canvas_type: int ):
+
+    ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_GLOBAL ]
+
+    if canvas_type in CC.CANVAS_MEDIA_VIEWER_TYPES:
+
+        if HG.client_controller.new_options.GetBoolean( 'media_viewer_uses_its_own_audio_volume' ):
+
+            ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_MEDIA_VIEWER ]
+
+    elif canvas_type == CC.CANVAS_PREVIEW:
+
+        if HG.client_controller.new_options.GetBoolean( 'preview_uses_its_own_audio_volume' ):
+
+            ( mute_option_name, volume_option_name ) = ClientGUIMediaControls.volume_types_to_option_names[ ClientGUIMediaControls.AUDIO_PREVIEW ]
+
+    return HG.client_controller.new_options.GetInteger( volume_option_name )
@@ -28,6 +28,8 @@ class NetworkJobControl( QW.QFrame ):

         self.setFrameStyle( QW.QFrame.Box | QW.QFrame.Raised )

+        self._should_update_freely = False
+
         self._network_job = None

         self._auto_override_bandwidth_rules = False

@@ -379,6 +381,11 @@ class NetworkJobControl( QW.QFrame ):
             self._error_button.show()

+    def SetShouldUpdateFreely( self, should: bool ):
+
+        self._should_update_freely = should
+
     def ShowError( self ):

         if self._error_text is None:

@@ -393,7 +400,7 @@ class NetworkJobControl( QW.QFrame ):

         self._OverrideBandwidthIfAppropriate()

-        if HG.client_controller.gui.IShouldRegularlyUpdate( self ):
+        if self._should_update_freely or HG.client_controller.gui.IShouldRegularlyUpdate( self ):

             self._Update()
@@ -830,11 +830,21 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):

         if name in self._variables:

-            if type( value ) == type( self._variables[ name ] ):
+            existing_value = self._variables[ name ]
+
+            if isinstance( value, type( existing_value ) ):

                 if isinstance( value, HydrusSerialisable.SerialisableBase ):

-                    if value.GetSerialisableTuple() == self._variables[ name ].GetSerialisableTuple():
+                    if value is existing_value:
+
+                        # assume that it was changed and we can't detect that
+                        self._SerialisableChangeMade()
+
+                        return
+
+                    if value.GetSerialisableTuple() == existing_value.GetSerialisableTuple():

                         return
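The `SetVariable` hunk above swaps a strict `type(...) ==` comparison for `isinstance` and adds an identity check for serialisables. A minimal sketch of why the three checks behave differently (the class names here are illustrative, not from hydrus):

```python
class Base:
    pass

class Derived( Base ):
    pass

base = Base()
derived = Derived()

# strict type equality rejects subclass instances
assert not type( derived ) == type( base )

# isinstance accepts instances of subclasses too
assert isinstance( derived, type( base ) )

# identity tells 'the same object, possibly mutated in place' apart from a fresh equal-looking object
assert derived is derived
assert derived is not Derived()
```

With `is`, the code can detect the "caller handed back the exact object it already holds" case, where a mutation may have happened that value comparison cannot see.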
@@ -1072,7 +1072,7 @@ class Page( QW.QWidget ):

         initial_media_results = []

-        for group_of_initial_hashes in HydrusLists.SplitListIntoChunks( initial_hashes, 256 ):
+        for group_of_initial_hashes in HydrusLists.SplitListIntoChunks( initial_hashes, 64 ):

             more_media_results = controller.Read( 'media_results', group_of_initial_hashes )
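The hunk above shrinks the media-result fetch batches from 256 to 64 hashes per database read, trading fewer round trips for smaller, more responsive ones. A rough sketch of what a `SplitListIntoChunks`-style helper is assumed to do (this is not the hydrus implementation):

```python
def split_list_into_chunks( xs, chunk_size ):

    # yield successive fixed-size slices; the last one may be short
    for i in range( 0, len( xs ), chunk_size ):

        yield xs[ i : i + chunk_size ]


hashes = list( range( 150 ) )

chunks = list( split_list_into_chunks( hashes, 64 ) )

assert [ len( chunk ) for chunk in chunks ] == [ 64, 64, 22 ]
```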
@@ -12,6 +12,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusTime
 from hydrus.core.networking import HydrusNetwork

@@ -164,7 +165,7 @@ class MediaPanel( CAC.ApplicationCommandProcessorMixin, ClientMedia.ListeningMed
             ClientGUIMediaActions.ClearDeleteRecord( self, media )

-    def _CopyBMPToClipboard( self ):
+    def _CopyBMPToClipboard( self, resolution = None ):

         copied = False

@@ -176,7 +177,7 @@ class MediaPanel( CAC.ApplicationCommandProcessorMixin, ClientMedia.ListeningMed

         if media.GetMime() in HC.IMAGES:

-            HG.client_controller.pub( 'clipboard', 'bmp', media )
+            HG.client_controller.pub( 'clipboard', 'bmp', ( media, resolution ) )

             copied = True

@@ -4363,7 +4364,16 @@ class MediaPanelThumbnails( MediaPanel ):

         if self._focused_media.GetMime() in HC.IMAGES:

-            ClientGUIMenus.AppendMenuItem( copy_menu, 'image (bitmap)', 'Copy the selected file\'s image data to the clipboard (as a bmp).', self._CopyBMPToClipboard )
+            ClientGUIMenus.AppendMenuItem( copy_menu, 'bitmap', 'Copy this file to your clipboard as a bitmap.', self._CopyBMPToClipboard )
+
+            ( width, height ) = self._focused_media.GetResolution()
+
+            if width > 1024 or height > 1024:
+
+                ( clip_rect, clipped_res ) = HydrusImageHandling.GetThumbnailResolutionAndClipRegion( self._focused_media.GetResolution(), ( 1024, 1024 ), HydrusImageHandling.THUMBNAIL_SCALE_TO_FIT, 100 )
+
+                ClientGUIMenus.AppendMenuItem( copy_menu, 'source lookup bitmap ({}x{})'.format( clipped_res[0], clipped_res[1] ), 'Copy a smaller bitmap of this file, for quicker lookup on source-finding websites.', self._CopyBMPToClipboard, clipped_res )

         ClientGUIMenus.AppendMenuItem( copy_menu, 'path', 'Copy the selected file\'s path to the clipboard.', self._CopyPathToClipboard )
@@ -1612,7 +1612,7 @@ class PanelPredicateSystemLimit( PanelPredicateSystemSingle ):

         PanelPredicateSystemSingle.__init__( self, parent )

-        self._limit = ClientGUICommon.BetterSpinBox( self, max=1000000, width = 60 )
+        self._limit = ClientGUICommon.BetterSpinBox( self, min = 1, max=1000000, width = 60 )

         #
@@ -2993,17 +2993,17 @@ class ReviewServiceRepositorySubPanel( QW.QWidget ):

         if self._service.GetServiceType() == HC.TAG_REPOSITORY:

-            l = HC.TAG_REPOSITORY_SERVICE_INFO_TYPES
+            service_info_types = HC.TAG_REPOSITORY_SERVICE_INFO_TYPES

         else:

-            l = HC.FILE_REPOSITORY_SERVICE_INFO_TYPES
+            service_info_types = HC.FILE_REPOSITORY_SERVICE_INFO_TYPES

         message = 'Note that num file hashes and tags here include deleted content so will likely not line up with your review services value, which is only for current content.'
         message += os.linesep * 2

-        tuples = [ ( HC.service_info_enum_str_lookup[ info_type ], HydrusData.ToHumanInt( service_info_dict[ info_type ] ) ) for info_type in l if info_type in service_info_dict ]
+        tuples = [ ( HC.service_info_enum_str_lookup[ info_type ], HydrusData.ToHumanInt( service_info_dict[ info_type ] ) ) for info_type in service_info_types if info_type in service_info_dict ]
         string_rows = [ '{}: {}'.format( info_type, info ) for ( info_type, info ) in tuples ]

         message += os.linesep.join( string_rows )
@@ -1,8 +1,9 @@
 import collections.abc
 import os
 import re
 import typing

-from qtpy import QtCore as QC, QtWidgets as QW
+from qtpy import QtCore as QC
+from qtpy import QtWidgets as QW
 from qtpy import QtGui as QG

@@ -49,7 +50,7 @@ def WrapInGrid( parent, rows, expand_text = False, add_stretch_at_end = True ):

     for row in rows:

-        if isinstance( row, typing.Collection ) and len( row ) == 2:
+        if isinstance( row, HydrusData.LIST_LIKE_COLLECTION ) and len( row ) == 2:

             ( text, control ) = row
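The `WrapInGrid` change above replaces an `isinstance( row, typing.Collection )` test with `HydrusData.LIST_LIKE_COLLECTION`. One likely motivation, sketched below, is that strings are themselves collections, so a two-character string would pass the old length-2 test; the tuple of types here is an assumed stand-in for the real constant:

```python
import collections.abc

# a two-character string is also a Collection of length 2 - the classic trap
row = 'hi'
assert isinstance( row, collections.abc.Collection ) and len( row ) == 2

# a list-like check (assumed intent of HydrusData.LIST_LIKE_COLLECTION) excludes strings
LIST_LIKE_COLLECTION = ( list, tuple )

assert not isinstance( row, LIST_LIKE_COLLECTION )
assert isinstance( [ 'label', 'control' ], LIST_LIKE_COLLECTION )
```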
@@ -18,7 +18,7 @@ def CheckImporterCanDoFileWorkBecausePaused( paused: bool, file_seed_cache: Clie

     if HG.client_controller.new_options.GetBoolean( 'pause_all_file_queues' ):

-        raise HydrusExceptions.VetoException( 'all file import queues are paused!' )
+        raise HydrusExceptions.VetoException( 'all file import queues are paused! network->pause to resume!' )

     work_pending = file_seed_cache.WorkToDo()

@@ -55,7 +55,7 @@ def CheckImporterCanDoGalleryWorkBecausePaused( paused: bool, gallery_seed_log:

     if HG.client_controller.new_options.GetBoolean( 'pause_all_gallery_searches' ):

-        raise HydrusExceptions.VetoException( 'all gallery searches are paused!' )
+        raise HydrusExceptions.VetoException( 'all gallery searches are paused! network->pause to resume!' )

     if gallery_seed_log is not None:
@@ -530,6 +530,16 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

+    def AddExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
+
+        self._external_additional_service_keys_to_tags.update( service_keys_to_tags )
+
+    def AddExternalFilterableTags( self, tags ):
+
+        self._external_filterable_tags.update( tags )
+
     def AddParseResults( self, parse_results, file_import_options: FileImportOptions.FileImportOptions ):

         for ( hash_type, hash ) in ClientParsing.GetHashesFromParseResults( parse_results ):

@@ -564,6 +574,11 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
         self._UpdateModified()

+    def AddRequestHeaders( self, request_headers: dict ):
+
+        self._request_headers.update( request_headers )
+
     def AddTags( self, tags ):

         tags = HydrusTags.CleanTags( tags )

@@ -1216,16 +1231,6 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

-    def SetExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
-
-        self._external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
-
-    def SetExternalFilterableTags( self, tags ):
-
-        self._external_filterable_tags = set( tags )
-
     def SetHash( self, hash ):

         if hash is not None:

@@ -1239,11 +1244,6 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
         self._referral_url = referral_url

-    def SetRequestHeaders( self, request_headers: dict ):
-
-        self._request_headers = dict( request_headers )
-
     def SetStatus( self, status: int, note: str = '', exception = None ):

         if exception is not None:

@@ -1458,8 +1458,8 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

         for file_seed in file_seeds:

-            file_seed.SetExternalFilterableTags( self._external_filterable_tags )
-            file_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
+            file_seed.AddExternalFilterableTags( self._external_filterable_tags )
+            file_seed.AddExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )

             file_seed.AddPrimaryURLs( set( self._primary_urls ) )

@@ -1562,7 +1562,7 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

         duplicate_file_seed.SetReferralURL( url_for_child_referral )

-        duplicate_file_seed.SetRequestHeaders( self._request_headers )
+        duplicate_file_seed.AddRequestHeaders( self._request_headers )

         if self._referral_url is not None:
@@ -19,6 +19,39 @@ from hydrus.client import ClientParsing
 from hydrus.client.importing import ClientImporting
 from hydrus.client.metadata import ClientTags

+def ConvertAllParseResultsToSubGallerySeeds( all_parse_results, can_generate_more_pages ):
+
+    sub_gallery_seeds = []
+
+    seen_urls = set()
+
+    for parse_results in all_parse_results:
+
+        parsed_request_headers = ClientParsing.GetHTTPHeadersFromParseResults( parse_results )
+
+        parsed_urls = ClientParsing.GetURLsFromParseResults( parse_results, ( HC.URL_TYPE_SUB_GALLERY, ), only_get_top_priority = True )
+
+        parsed_urls = HydrusData.DedupeList( parsed_urls )
+
+        parsed_urls = [ url for url in parsed_urls if url not in seen_urls ]
+
+        seen_urls.update( parsed_urls )
+
+        for url in parsed_urls:
+
+            gallery_seed = GallerySeed( url = url, can_generate_more_pages = can_generate_more_pages )
+
+            gallery_seed.AddRequestHeaders( parsed_request_headers )
+
+            gallery_seed.AddParseResults( parse_results )
+
+            sub_gallery_seeds.append( gallery_seed )
+
+    return sub_gallery_seeds
+
 def GenerateGallerySeedLogStatus( statuses_to_counts ):

     num_successful = statuses_to_counts[ CC.STATUS_SUCCESSFUL_AND_NEW ]

@@ -67,6 +100,7 @@ def GenerateGallerySeedLogStatus( statuses_to_counts ):

     return ( status, ( total_processed, total ) )

 class GallerySeed( HydrusSerialisable.SerialisableBase ):

     SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_GALLERY_SEED

@@ -226,6 +260,30 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):

+    def AddExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
+
+        self._external_additional_service_keys_to_tags.update( service_keys_to_tags )
+
+    def AddExternalFilterableTags( self, tags ):
+
+        self._external_filterable_tags.update( tags )
+
+    def AddParseResults( self, parse_results ):
+
+        tags = ClientParsing.GetTagsFromParseResults( parse_results )
+
+        self._external_filterable_tags.update( tags )
+
+        self._UpdateModified()
+
+    def AddRequestHeaders( self, request_headers: dict ):
+
+        self._request_headers.update( request_headers )
+
     def ForceNextPageURLGeneration( self ):

         self._force_next_page_url_generation = True

@@ -272,26 +330,11 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
         return network_job

-    def SetExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
-
-        self._external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
-
-    def SetExternalFilterableTags( self, tags ):
-
-        self._external_filterable_tags = set( tags )
-
     def SetReferralURL( self, referral_url ):

         self._referral_url = referral_url

-    def SetRequestHeaders( self, request_headers: dict ):
-
-        self._request_headers = dict( request_headers )
-
     def SetRunToken( self, run_token: bytes ):

         self._run_token = run_token

@@ -441,7 +484,7 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):

         file_seed.SetReferralURL( url_for_child_referral )

-        file_seed.SetRequestHeaders( self._request_headers )
+        file_seed.AddRequestHeaders( self._request_headers )

         file_seeds_callable( ( file_seed, ) )

@@ -477,8 +520,8 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):

         for file_seed in file_seeds:

-            file_seed.SetExternalFilterableTags( self._external_filterable_tags )
-            file_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
+            file_seed.AddExternalFilterableTags( self._external_filterable_tags )
+            file_seed.AddExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )

         num_urls_total = len( file_seeds )

@@ -515,32 +558,27 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):

         self._request_headers.update( parsed_request_headers )

-        sub_gallery_urls = ClientParsing.GetURLsFromParseResults( flattened_results, ( HC.URL_TYPE_SUB_GALLERY, ), only_get_top_priority = True )
+        sub_gallery_seeds = ConvertAllParseResultsToSubGallerySeeds( all_parse_results, can_generate_more_pages = self._can_generate_more_pages )

-        sub_gallery_urls = HydrusData.DedupeList( sub_gallery_urls )
+        new_sub_gallery_seeds = [ sub_gallery_seed for sub_gallery_seed in sub_gallery_seeds if sub_gallery_seed.url not in gallery_urls_seen_before ]

-        new_sub_gallery_urls = [ sub_gallery_url for sub_gallery_url in sub_gallery_urls if sub_gallery_url not in gallery_urls_seen_before ]
-
-        num_new_sub_gallery_urls = len( new_sub_gallery_urls )
-
-        if num_new_sub_gallery_urls > 0:
-
-            sub_gallery_seeds = [ GallerySeed( sub_gallery_url ) for sub_gallery_url in new_sub_gallery_urls ]
+        if len( new_sub_gallery_seeds ) > 0:

             for sub_gallery_seed in sub_gallery_seeds:

                 sub_gallery_seed.SetRunToken( self._run_token )
-                sub_gallery_seed.SetExternalFilterableTags( self._external_filterable_tags )
-                sub_gallery_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
                 sub_gallery_seed.SetReferralURL( url_for_child_referral )
+                sub_gallery_seed.AddExternalFilterableTags( self._external_filterable_tags )
+                sub_gallery_seed.AddExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
+
+                gallery_urls_seen_before.add( sub_gallery_seed.url )

             gallery_seed_log.AddGallerySeeds( sub_gallery_seeds, parent_gallery_seed = self )

             added_new_gallery_pages = True

-            gallery_urls_seen_before.update( sub_gallery_urls )
-
-            note += ' - {} sub-gallery urls found'.format( HydrusData.ToHumanInt( num_new_sub_gallery_urls ) )
+            note += ' - {} sub-gallery urls found'.format( HydrusData.ToHumanInt( len( new_sub_gallery_seeds ) ) )

         if self._can_generate_more_pages and can_add_more_gallery_urls:

@@ -605,8 +643,8 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):

                 next_gallery_seed.SetRunToken( self._run_token )
                 next_gallery_seed.SetReferralURL( url_for_child_referral )
-                next_gallery_seed.SetExternalFilterableTags( self._external_filterable_tags )
-                next_gallery_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
+                next_gallery_seed.AddExternalFilterableTags( self._external_filterable_tags )
+                next_gallery_seed.AddExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )

             gallery_seed_log.AddGallerySeeds( next_gallery_seeds, parent_gallery_seed = self )
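The new `ConvertAllParseResultsToSubGallerySeeds` dedupes urls within each batch of parse results and then across batches via a `seen_urls` set. A compact sketch of that pattern, with `dedupe_list` standing in for the assumed order-preserving behaviour of `HydrusData.DedupeList`:

```python
def dedupe_list( xs ):

    # order-preserving dedupe; assumed to match what HydrusData.DedupeList does
    return list( dict.fromkeys( xs ) )


seen_urls = set()

def filter_new( urls ):

    # dedupe within this batch, then drop anything any earlier batch produced
    urls = dedupe_list( urls )
    urls = [ url for url in urls if url not in seen_urls ]

    seen_urls.update( urls )

    return urls


assert filter_new( [ 'a', 'b', 'a' ] ) == [ 'a', 'b' ]
assert filter_new( [ 'b', 'c' ] ) == [ 'c' ]
```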
@@ -82,7 +82,7 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):

             if path in paths_to_additional_service_keys_to_tags:

-                file_seed.SetExternalAdditionalServiceKeysToTags( paths_to_additional_service_keys_to_tags[ path ] )
+                file_seed.AddExternalAdditionalServiceKeysToTags( paths_to_additional_service_keys_to_tags[ path ] )

             file_seeds.append( file_seed )

@@ -149,7 +149,7 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):

             if path in paths_to_additional_service_keys_to_tags:

-                file_seed.SetExternalAdditionalServiceKeysToTags( paths_to_additional_service_keys_to_tags[ path ] )
+                file_seed.AddExternalAdditionalServiceKeysToTags( paths_to_additional_service_keys_to_tags[ path ] )
@@ -1278,8 +1278,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):

         file_seed = ClientImportFileSeeds.FileSeed( ClientImportFileSeeds.FILE_SEED_TYPE_URL, url )

-        file_seed.SetExternalFilterableTags( filterable_tags )
-        file_seed.SetExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )
+        file_seed.AddExternalFilterableTags( filterable_tags )
+        file_seed.AddExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )

         file_seeds.append( file_seed )

@@ -1289,8 +1289,8 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):

         gallery_seed = ClientImportGallerySeeds.GallerySeed( url, can_generate_more_pages = can_generate_more_pages )

-        gallery_seed.SetExternalFilterableTags( filterable_tags )
-        gallery_seed.SetExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )
+        gallery_seed.AddExternalFilterableTags( filterable_tags )
+        gallery_seed.AddExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )

         gallery_seeds.append( gallery_seed )
@@ -263,12 +263,12 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

         if filterable_tags is not None:

-            watcher.SetExternalFilterableTags( filterable_tags )
+            watcher.AddExternalFilterableTags( filterable_tags )

         if additional_service_keys_to_tags is not None:

-            watcher.SetExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )
+            watcher.AddExternalAdditionalServiceKeysToTags( additional_service_keys_to_tags )

         watcher.SetCheckerOptions( self._checker_options )

@@ -799,8 +799,8 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

         gallery_seed = ClientImportGallerySeeds.GallerySeed( self._url, can_generate_more_pages = False )

-        gallery_seed.SetExternalFilterableTags( self._external_filterable_tags )
-        gallery_seed.SetExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )
+        gallery_seed.AddExternalFilterableTags( self._external_filterable_tags )
+        gallery_seed.AddExternalAdditionalServiceKeysToTags( self._external_additional_service_keys_to_tags )

         self._gallery_seed_log.AddGallerySeeds( ( gallery_seed, ) )

@@ -1194,6 +1194,36 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

+    def AddExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
+
+        with self._lock:
+
+            external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
+
+            if external_additional_service_keys_to_tags.DumpToString() != self._external_additional_service_keys_to_tags.DumpToString():
+
+                self._external_additional_service_keys_to_tags.update( service_keys_to_tags )
+
+                self._SerialisableChangeMade()
+
+    def AddExternalFilterableTags( self, tags ):
+
+        with self._lock:
+
+            tags_set = set( tags )
+
+            if tags_set != self._external_filterable_tags:
+
+                self._external_filterable_tags.update( tags )
+
+                self._SerialisableChangeMade()
+
     def GetAPIInfoDict( self, simple ):

         with self._lock:

@@ -1693,36 +1723,6 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

-    def SetExternalAdditionalServiceKeysToTags( self, service_keys_to_tags ):
-
-        with self._lock:
-
-            external_additional_service_keys_to_tags = ClientTags.ServiceKeysToTags( service_keys_to_tags )
-
-            if external_additional_service_keys_to_tags.DumpToString() != self._external_additional_service_keys_to_tags.DumpToString():
-
-                self._external_additional_service_keys_to_tags = external_additional_service_keys_to_tags
-
-                self._SerialisableChangeMade()
-
-    def SetExternalFilterableTags( self, tags ):
-
-        with self._lock:
-
-            tags_set = set( tags )
-
-            if tags_set != self._external_filterable_tags:
-
-                self._external_filterable_tags = tags_set
-
-                self._SerialisableChangeMade()
-
     def SetNoteImportOptions( self, note_import_options: NoteImportOptions.NoteImportOptions ):

         with self._lock:

@@ -1921,7 +1921,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

         if HG.client_controller.new_options.GetBoolean( 'pause_all_watcher_checkers' ):

-            raise HydrusExceptions.VetoException( 'all checkers are paused!' )
+            raise HydrusExceptions.VetoException( 'all checkers are paused! network->pause to resume!' )

         if not self._HasURL():
@@ -62,7 +62,7 @@ def ConvertAllParseResultsToFileSeeds( all_parse_results, source_url, file_impor

         file_seed.SetReferralURL( source_url )

-        file_seed.SetRequestHeaders( parsed_request_headers )
+        file_seed.AddRequestHeaders( parsed_request_headers )

         file_seed.AddParseResults( parse_results, file_import_options )
@@ -1391,7 +1391,7 @@ class MediaCollection( MediaList, Media ):
         self._RecalcArchiveInbox()

         self._size = sum( [ media.GetSize() for media in self._sorted_media ] )
-        self._size_definite = not False in ( media.IsSizeDefinite() for media in self._sorted_media )
+        self._size_definite = False not in ( media.IsSizeDefinite() for media in self._sorted_media )

         duration_sum = sum( [ media.GetDurationMS() for media in self._sorted_media if media.HasDuration() ] )

@@ -1838,14 +1838,14 @@ class MediaSingleton( Media ):

             timestamp = timestamps_manager.GetDeletedTimestamp( local_file_service.GetServiceKey() )

-            l = 'removed from {} {}'.format( local_file_service.GetName(), ClientTime.TimestampToPrettyTimeDelta( timestamp ) )
+            line = 'removed from {} {}'.format( local_file_service.GetName(), ClientTime.TimestampToPrettyTimeDelta( timestamp ) )

             if len( deleted_local_file_services ) == 1:

-                l = '{} ({})'.format( l, local_file_deletion_reason )
+                line = f'{line} ({local_file_deletion_reason})'

-            lines.append( ( True, l ) )
+            lines.append( ( True, line ) )

         if len( deleted_local_file_services ) > 1:

@@ -2235,7 +2235,7 @@ class MediaSort( HydrusSerialisable.SerialisableBase ):

         if sort_metatype == 'system':

-            if sort_data in ( CC.SORT_FILES_BY_MIME, CC.SORT_FILES_BY_RANDOM, CC.SORT_FILES_BY_HASH ):
+            if sort_data in ( CC.SORT_FILES_BY_MIME, CC.SORT_FILES_BY_RANDOM ):

                 return False

@@ -2583,7 +2583,7 @@ class MediaSort( HydrusSerialisable.SerialisableBase ):
         sort_string_lookup[ CC.SORT_FILES_BY_ARCHIVED_TIMESTAMP ] = ( 'oldest first', 'newest first', CC.SORT_DESC )
         sort_string_lookup[ CC.SORT_FILES_BY_MIME ] = ( 'filetype', 'filetype', CC.SORT_ASC )
         sort_string_lookup[ CC.SORT_FILES_BY_RANDOM ] = ( 'random', 'random', CC.SORT_ASC )
-        sort_string_lookup[ CC.SORT_FILES_BY_HASH ] = ( 'hash', 'hash', CC.SORT_ASC )
+        sort_string_lookup[ CC.SORT_FILES_BY_HASH ] = ( 'lexicographic', 'reverse lexicographic', CC.SORT_ASC )
         sort_string_lookup[ CC.SORT_FILES_BY_WIDTH ] = ( 'slimmest first', 'widest first', CC.SORT_ASC )
         sort_string_lookup[ CC.SORT_FILES_BY_HEIGHT ] = ( 'shortest first', 'tallest first', CC.SORT_ASC )
         sort_string_lookup[ CC.SORT_FILES_BY_RATIO ] = ( 'tallest first', 'widest first', CC.SORT_ASC )
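The `MediaCollection` hunk above rewrites `not False in ( ... )` as the clearer `False not in ( ... )`. Both spellings are equivalent to `all( ... )` over a sequence of booleans, as this small check illustrates:

```python
sizes_definite = [ True, True, False ]

# all three forms agree when any element is False
assert ( not False in sizes_definite ) == ( False not in sizes_definite ) == all( sizes_definite ) == False

# and when every element is True
assert ( False not in [ True, True ] ) == all( [ True, True ] ) == True
```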
@@ -11,6 +11,20 @@ from hydrus.core import HydrusTime

 from hydrus.client.media import ClientMediaResult
 from hydrus.client.metadata import ClientTags
+from hydrus.client.caches import ClientCachesBase
+
+class MediaResultCacheContainer( ClientCachesBase.CacheableObject ):
+
+    def __init__( self, media_result ):
+
+        self._media_result = media_result
+
+    def GetEstimatedMemoryFootprint( self ) -> int:
+
+        return 1
+

 class MediaResultCache( object ):

@@ -21,6 +35,12 @@ class MediaResultCache( object ):
         self._hash_ids_to_media_results = weakref.WeakValueDictionary()
         self._hashes_to_media_results = weakref.WeakValueDictionary()

+        # ok this is a bit of an experiment, it may be a failure and just add overhead for no great reason. it force-keeps the most recent fetched media results for two minutes
+        # this means that if a user refreshes a search and the existing media result handles briefly go to zero...
+        # or if the client api makes repeated requests on the same media results...
+        # then that won't be a chance for the weakvaluedict to step in. we'll keep this scratchpad of stuff
+        self._fifo_timeout_cache = ClientCachesBase.DataCache( HG.client_controller, 'media result cache', 2048, 120 )
+
         HG.client_controller.sub( self, 'ProcessContentUpdates', 'content_updates_data' )
         HG.client_controller.sub( self, 'ProcessServiceUpdates', 'service_updates_data' )
         HG.client_controller.sub( self, 'NewForceRefreshTags', 'notify_new_force_refresh_tags_data' )

@@ -39,6 +59,8 @@ class MediaResultCache( object ):
             self._hash_ids_to_media_results[ hash_id ] = media_result
             self._hashes_to_media_results[ hash ] = media_result

+            self._fifo_timeout_cache.AddData( hash_id, MediaResultCacheContainer( media_result ) )
+

@@ -60,6 +82,8 @@ class MediaResultCache( object ):
                 del self._hashes_to_media_results[ hash ]

+            self._fifo_timeout_cache.DeleteData( hash_id )
+
     def FilterFiles( self, hash_ids: typing.Collection[ int ] ):

@@ -97,6 +121,8 @@ class MediaResultCache( object ):

                 media_results.append( media_result )

+                self._fifo_timeout_cache.TouchKey( hash_id )
+
         return ( media_results, missing_hash_ids )
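The cache hunks above bolt a two-minute strong-reference scratchpad onto the existing `WeakValueDictionary` lookups, so recently fetched media results survive their handle count briefly hitting zero. A minimal sketch of the mechanism (the `MediaResult` class and the plain-dict keepalive are illustrative stand-ins; CPython's refcounting makes the final drop deterministic here):

```python
import weakref

class MediaResult:
    pass

weak_cache = weakref.WeakValueDictionary()
keepalive = {}  # stands in for the timed strong-reference cache

mr = MediaResult()
weak_cache[ 1 ] = mr
keepalive[ 1 ] = mr

del mr  # the caller drops its handle

# the strong scratchpad keeps the entry alive, so the weak dict still serves it
assert weak_cache.get( 1 ) is not None

del keepalive[ 1 ]  # the two-minute timeout expires

# with no strong references left, the weak entry disappears (in CPython, immediately)
assert weak_cache.get( 1 ) is None
```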
@@ -766,7 +766,7 @@ class SingleFileMetadataImporterTXT( SingleFileMetadataImporterSidecar, HydrusSe
             full_munge_text = ''

-        return 'from .txt sidecar'.format( full_munge_text )
+        return 'from .txt sidecar{}'.format( full_munge_text )
@@ -2026,7 +2026,7 @@ class HydrusResourceClientAPIRestrictedAddTagsAddTags( HydrusResourceClientAPIRe

             tag = tag_item

-        elif isinstance( tag_item, collections.abc.Collection ) and len( tag_item ) == 2:
+        elif isinstance( tag_item, HydrusData.LIST_LIKE_COLLECTION ) and len( tag_item ) == 2:

             ( tag, reason ) = tag_item
@@ -1090,7 +1090,7 @@ class LoginScriptDomain( HydrusSerialisable.SerialisableBaseNamed ):

self._login_steps = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_login_steps )

# convert lists to tups for listctrl data hashing
- self._example_domains_info = [ tuple( l ) for l in self._example_domains_info ]
+ self._example_domains_info = [ tuple( list_of_info ) for list_of_info in self._example_domains_info ]

def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
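The comment in this hunk is the whole story: lists are unhashable, so row data that UI list code wants to key on has to be converted to tuples first. A toy version with made-up domain rows:

```python
# rows arriving as lists cannot be hashed; tuples can
example_domains_info = [ [ 'example.com', 0, 'ok' ], [ 'example.net', 1, 'ok' ] ]

# same conversion as the hunk above, on illustrative data
example_domains_info = [ tuple( list_of_info ) for list_of_info in example_domains_info ]

first_row = example_domains_info[ 0 ]
first_row_hash = hash( first_row ) # hash( [ ... ] ) would raise TypeError
```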
@@ -7,7 +7,7 @@ from hydrus.core import HydrusTime

def ParseFFMPEGAudio( lines ):

# the ^\s*Stream is to exclude the 'title' line, which, when it exists, includes the string 'Audio: ', ha ha
- lines_audio = [ l for l in lines if re.search( r'^\s*Stream', l ) is not None and 'Audio: ' in l ]
+ lines_audio = [ line for line in lines if re.search( r'^\s*Stream', line ) is not None and 'Audio: ' in line ]

audio_found = lines_audio != []
audio_format = None
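The `^\s*Stream` anchor in this hunk earns its keep: a metadata `title` line can itself contain `'Audio: '`, so matching on the substring alone would misfire. A sketch with a made-up ffmpeg `-i` transcript:

```python
import re

# made-up ffmpeg output: the 'title' metadata line also contains 'Audio: '
lines = [
    "    title           : My Audio: Mixtape",
    "    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s",
    "    Stream #0:1: Video: png, rgb24, 500x500",
]

# same filter as the hunk: the line must start (after whitespace) with 'Stream'
lines_audio = [ line for line in lines if re.search( r'^\s*Stream', line ) is not None and 'Audio: ' in line ]
```

Only the real audio stream line survives; the title line fails the anchor and the video line fails the `'Audio: '` test.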
@@ -100,7 +100,7 @@ options = {}

# Misc

NETWORK_VERSION = 20
- SOFTWARE_VERSION = 532
+ SOFTWARE_VERSION = 533
CLIENT_API_VERSION = 47

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -14,12 +14,15 @@ from hydrus.core import HydrusPubSub

from hydrus.core import HydrusThreading
from hydrus.core import HydrusTemp
from hydrus.core import HydrusTime
+ from hydrus.core.interfaces import HydrusControllerInterface
from hydrus.core.networking import HydrusNATPunch

- class HydrusController( object ):
+ class HydrusController( HydrusControllerInterface.HydrusControllerInterface ):

def __init__( self, db_dir ):

+ HydrusControllerInterface.HydrusControllerInterface.__init__( self )

HG.controller = self

self._name = 'hydrus'
@@ -691,6 +691,15 @@ def IntelligentMassIntersect( sets_to_reduce ):

return answer

+ # ok protip, don't do isinstance( possible_list, collections.abc.Collection ) for a 'list' detection--strings pass it (and sometimes with infinite recursion) lol!
+ LIST_LIKE_COLLECTION = typing.Union[
+     typing.Tuple,
+     typing.List,
+     typing.Set,
+     typing.FrozenSet
+ ]

def IsAlreadyRunning( db_path, instance ):

path = os.path.join( db_path, instance + '_running' )
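The "protip" comment in this hunk is worth demonstrating: a `str` is itself a `Collection`, so an `isinstance` check against `collections.abc.Collection` "detects" strings as list-like. Checking against concrete types avoids the false positive; this sketch uses plain runtime types rather than the `typing` union above:

```python
import collections.abc

# a str is a Collection, so the generic check matches strings too
string_passes = isinstance( 'surprise', collections.abc.Collection )

# the same 'list-like' idea with concrete types, which strings fail
LIST_LIKE_TYPES = ( tuple, list, set, frozenset )

string_is_list_like = isinstance( 'surprise', LIST_LIKE_TYPES )
pair_is_list_like = isinstance( ( 'tag', 'reason' ), LIST_LIKE_TYPES )
```

The infinite-recursion hazard the comment mentions comes from the same trap: code that recurses into "collections" will recurse into each one-character string of a string forever.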
@@ -1,10 +1,13 @@

import collections
import threading
+ import typing

+ from hydrus.core.interfaces import HydrusControllerInterface

- controller = None
- client_controller = None
- server_controller = None
- test_controller = None
+ controller: typing.Optional[ HydrusControllerInterface.HydrusControllerInterface ] = None
+ client_controller: typing.Optional[ HydrusControllerInterface.HydrusControllerInterface ] = None
+ server_controller: typing.Optional[ HydrusControllerInterface.HydrusControllerInterface ] = None
+ test_controller: typing.Optional[ HydrusControllerInterface.HydrusControllerInterface ] = None

started_shutdown = False
view_shutdown = False
model_shutdown = False
@@ -7,13 +7,13 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTime

- def Profile( summary, code, g, l, min_duration_ms = 20, show_summary = False ):
+ def Profile( summary, code, global_vars, local_vars, min_duration_ms = 20, show_summary = False ):

profile = cProfile.Profile()

started = HydrusTime.GetNowPrecise()

- profile.runctx( code, g, l )
+ profile.runctx( code, global_vars, local_vars )

finished = HydrusTime.GetNowPrecise()
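For context on what the renamed parameters feed into: `cProfile.Profile.runctx` executes a code string under the profiler inside the given globals/locals dicts, after which the stats can be rendered with `pstats`. A self-contained sketch (the code string and variable names are made up):

```python
import cProfile
import io
import pstats

global_vars = {}
local_vars = { 'n': 2000 }

profile = cProfile.Profile()

# runctx compiles and runs the code string in the supplied globals/locals;
# assignments land in local_vars, just like exec
profile.runctx( 'total = sum( i * i for i in range( n ) )', global_vars, local_vars )

# render the collected stats to a string instead of stdout
stream = io.StringIO()
pstats.Stats( profile, stream = stream ).print_stats()
summary_text = stream.getvalue()
```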
@@ -1,5 +1,4 @@

- import bisect
import collections
import os
import queue
import random

@@ -13,6 +12,7 @@ from hydrus.core import HydrusExceptions

from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusProfiling
from hydrus.core import HydrusTime
+ from hydrus.core.interfaces import HydrusThreadingInterface

NEXT_THREAD_CLEAROUT = 0
@@ -401,7 +401,7 @@ class THREADCallToThread( DAEMON ):

try:

- ( callable, args, kwargs ) = self._queue.get( 1.0 )
+ ( callable, args, kwargs ) = self._queue.get( timeout = 1.0 )

except queue.Empty:
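The subtlety behind this one-keyword fix: `Queue.get`'s signature is `get(block=True, timeout=None)`, so `get( 1.0 )` passes `1.0` positionally as the truthy `block` flag and leaves `timeout` at `None`, blocking forever on an empty queue. The keyword form actually gives up after about a second:

```python
import queue
import time

q = queue.Queue()

started = time.monotonic()

try:

    # q.get( 1.0 ) would block with NO timeout; the keyword form times out
    q.get( timeout = 1.0 )
    timed_out = False

except queue.Empty:

    timed_out = True

elapsed = time.monotonic() - started
```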
@@ -524,7 +524,7 @@ class JobScheduler( threading.Thread ):

- def _SortWaiting( self ) -> bool:
+ def _SortWaiting( self ):

# sort the waiting jobs in ascending order of expected work time
@@ -716,12 +716,15 @@ class JobScheduler( threading.Thread ):

- class SchedulableJob( object ):
+ class SchedulableJob( HydrusThreadingInterface.SchedulableJobInterface ):

PRETTY_CLASS_NAME = 'job base'

def __init__( self, controller, scheduler: JobScheduler, initial_delay, work_callable ):

+ HydrusThreadingInterface.SchedulableJobInterface.__init__( self )

self._controller = controller
self._scheduler = scheduler
self._work_callable = work_callable
@@ -771,6 +774,13 @@ class SchedulableJob( object ):

return self._currently_working.is_set()

+ def Delay( self, delay ) -> None:
+
+     self._next_work_time = HydrusTime.GetNowFloat() + delay
+
+     self._scheduler.WorkTimesHaveChanged()

def GetDueString( self ) -> str:

due_delta = self._next_work_time - HydrusTime.GetNowFloat()
@@ -931,6 +941,7 @@ class SchedulableJob( object ):

class SingleJob( SchedulableJob ):

PRETTY_CLASS_NAME = 'single job'
@@ -954,6 +965,7 @@ class SingleJob( SchedulableJob ):

self._work_complete.set()

class RepeatingJob( SchedulableJob ):

PRETTY_CLASS_NAME = 'repeating job'
@@ -974,18 +986,6 @@ class RepeatingJob( SchedulableJob ):

self._stop_repeating.set()

- def Delay( self, delay ) -> None:
-
-     self._next_work_time = HydrusTime.GetNowFloat() + delay
-
-     self._scheduler.WorkTimesHaveChanged()

def IsRepeatingWorkFinished( self ) -> bool:

return self._stop_repeating.is_set()

def StartWork( self ) -> None:

if self._stop_repeating.is_set():
@@ -673,7 +673,7 @@ def ParseFFMPEGDuration( lines ):

try:

# had a vid with 'Duration:' in title, ha ha, so now a regex
- line = [ l for l in lines if re.search( r'^\s*Duration:', l ) is not None ][0]
+ line = [ line for line in lines if re.search( r'^\s*Duration:', line ) is not None ][0]

if 'Duration: N/A' in line:
@@ -929,7 +929,7 @@ def ParseFFMPEGMimeText( lines ):

try:

- ( input_line, ) = [ l for l in lines if l.startswith( 'Input #0' ) ]
+ ( input_line, ) = [ line for line in lines if line.startswith( 'Input #0' ) ]

# Input #0, matroska, webm, from 'm.mkv':
@@ -1014,7 +1014,7 @@ def ParseFFMPEGVideoLine( lines, png_ok = False ) -> str:

# get the output line that speaks about video
# the ^\s*Stream is to exclude the 'title' line, which, when it exists, includes the string 'Video: ', ha ha
- lines_video = [ l for l in lines if re.search( r'^\s*Stream', l ) is not None and 'Video: ' in l and True not in ( 'Video: {}'.format( bad_video_format ) in l for bad_video_format in bad_video_formats ) ] # mp3 says it has a 'png' video stream
+ lines_video = [ line for line in lines if re.search( r'^\s*Stream', line ) is not None and 'Video: ' in line and True not in ( 'Video: {}'.format( bad_video_format ) in line for bad_video_format in bad_video_formats ) ] # mp3 says it has a 'png' video stream

if len( lines_video ) == 0:
@@ -0,0 +1,255 @@

from hydrus.core.interfaces import HydrusThreadingInterface

class HydrusControllerInterface( object ):

    def pub( self, topic, *args, **kwargs ) -> None:
        raise NotImplementedError()

    def pubimmediate( self, topic, *args, **kwargs ) -> None:
        raise NotImplementedError()

    def sub( self, object, method_name, topic ) -> None:
        raise NotImplementedError()

    def AcquireThreadSlot( self, thread_type ) -> bool:
        raise NotImplementedError()

    def ThreadSlotsAreAvailable( self, thread_type ) -> bool:
        raise NotImplementedError()

    def CallLater( self, initial_delay, func, *args, **kwargs ) -> HydrusThreadingInterface.SchedulableJobInterface:
        raise NotImplementedError()

    def CallRepeating( self, initial_delay, period, func, *args, **kwargs ) -> HydrusThreadingInterface.SchedulableJobInterface:
        raise NotImplementedError()

    def CallToThread( self, callable, *args, **kwargs ) -> None:
        raise NotImplementedError()

    def CallToThreadLongRunning( self, callable, *args, **kwargs ) -> None:
        raise NotImplementedError()

    def CleanRunningFile( self ) -> None:
        raise NotImplementedError()

    def ClearCaches( self ) -> None:
        raise NotImplementedError()

    def CurrentlyIdle( self ) -> bool:
        raise NotImplementedError()

    def CurrentlyPubSubbing( self ) -> bool:
        raise NotImplementedError()

    def DBCurrentlyDoingJob( self ) -> bool:
        raise NotImplementedError()

    def DoingFastExit( self ) -> bool:
        raise NotImplementedError()

    def GetBootTime( self ) -> int:
        raise NotImplementedError()

    def GetDBDir( self ) -> str:
        raise NotImplementedError()

    def GetDBStatus( self ):
        raise NotImplementedError()

    def GetCache( self, name ):
        raise NotImplementedError()

    def GetManager( self, name ):
        raise NotImplementedError()

    def GetTimestamp( self, name: str ) -> int:
        raise NotImplementedError()

    def GoodTimeToStartBackgroundWork( self ) -> bool:
        raise NotImplementedError()

    def GoodTimeToStartForegroundWork( self ) -> bool:
        raise NotImplementedError()

    def JustWokeFromSleep( self ):
        raise NotImplementedError()

    def IsFirstStart( self ):
        raise NotImplementedError()

    def LastShutdownWasBad( self ):
        raise NotImplementedError()

    def MaintainDB( self, maintenance_mode = None, stop_time = None ):
        raise NotImplementedError()

    def MaintainMemoryFast( self ):
        raise NotImplementedError()

    def MaintainMemorySlow( self ):
        raise NotImplementedError()

    def PrintProfile( self, summary, profile_text = None ):
        raise NotImplementedError()

    def PrintQueryPlan( self, query, plan_lines ):
        raise NotImplementedError()

    def Read( self, action, *args, **kwargs ):
        raise NotImplementedError()

    def RecordRunningStart( self ):
        raise NotImplementedError()

    def ReleaseThreadSlot( self, thread_type ):
        raise NotImplementedError()

    def ReportDataUsed( self, num_bytes ):
        raise NotImplementedError()

    def ReportRequestUsed( self ):
        raise NotImplementedError()

    def ResetIdleTimer( self ) -> None:
        raise NotImplementedError()

    def SetDoingFastExit( self, value: bool ) -> None:
        raise NotImplementedError()

    def SetTimestamp( self, name: str, value: int ) -> None:
        raise NotImplementedError()

    def ShouldStopThisWork( self, maintenance_mode, stop_time = None ) -> bool:
        raise NotImplementedError()

    def SleepCheck( self ) -> None:
        raise NotImplementedError()

    def SimulateWakeFromSleepEvent( self ) -> None:
        raise NotImplementedError()

    def SystemBusy( self ) -> bool:
        raise NotImplementedError()

    def TouchTimestamp( self, name: str ) -> None:
        raise NotImplementedError()

    def WaitUntilDBEmpty( self ) -> None:
        raise NotImplementedError()

    def WaitUntilModelFree( self ) -> None:
        raise NotImplementedError()

    def WaitUntilPubSubsEmpty( self ) -> None:
        raise NotImplementedError()

    def WakeDaemon( self, name ) -> None:
        raise NotImplementedError()

    def Write( self, action, *args, **kwargs ):
        raise NotImplementedError()

    def WriteSynchronous( self, action, *args, **kwargs ):
        raise NotImplementedError()
@@ -0,0 +1,100 @@

class SchedulableJobInterface( object ):

    PRETTY_CLASS_NAME = 'job base interface'

    def Cancel( self ) -> None:
        raise NotImplementedError()

    def CurrentlyWorking( self ) -> bool:
        raise NotImplementedError()

    def Delay( self, delay ) -> None:
        raise NotImplementedError()

    def GetDueString( self ) -> str:
        raise NotImplementedError()

    def GetNextWorkTime( self ):
        raise NotImplementedError()

    def GetPrettyJob( self ):
        raise NotImplementedError()

    def GetTimeDeltaUntilDue( self ):
        raise NotImplementedError()

    def IsCancelled( self ) -> bool:
        raise NotImplementedError()

    def IsDead( self ) -> bool:
        raise NotImplementedError()

    def IsDue( self ) -> bool:
        raise NotImplementedError()

    def PubSubWake( self, *args, **kwargs ) -> None:
        raise NotImplementedError()

    def SetThreadSlotType( self, thread_type ) -> None:
        raise NotImplementedError()

    def ShouldDelayOnWakeup( self, value ) -> None:
        raise NotImplementedError()

    def SlotOK( self ) -> bool:
        raise NotImplementedError()

    def StartWork( self ) -> None:
        raise NotImplementedError()

    def Wake( self, next_work_time = None ) -> None:
        raise NotImplementedError()

    def WakeOnPubSub( self, topic ) -> None:
        raise NotImplementedError()

    def WaitingOnWorkSlot( self ):
        raise NotImplementedError()

    def Work( self ) -> None:
        raise NotImplementedError()
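Both new files follow the same plain-Python interface pattern: the base class fixes the method surface and raises `NotImplementedError`, and concrete subclasses override. A minimal sketch of that pattern with stand-in names (not the hydrus classes themselves):

```python
class JobInterface( object ):

    PRETTY_CLASS_NAME = 'job interface'

    # the base class only declares the contract
    def IsDue( self ) -> bool:

        raise NotImplementedError()

class DueNowJob( JobInterface ):

    PRETTY_CLASS_NAME = 'due-now job'

    # a concrete implementation overrides every declared method
    def IsDue( self ) -> bool:

        return True
```

Compared with `abc.ABC`, this style still lets the base class be instantiated and only fails when an unimplemented method is actually called, which keeps module import order and annotation targets simple.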
@@ -254,7 +254,7 @@ def GetUPnPMappingsParseResponse( stdout ):

while i < len( lines ):

- if not lines[ i ][0] in ( ' ', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' ): break
+ if lines[ i ][0] not in ( ' ', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' ): break

data_lines.append( lines[ i ] )
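This hunk is a pure readability change: `x not in y` and `not x in y` compile to the same membership test, but the former cannot be misread as `(not x) in y`. A quick check with made-up UPnP-style lines:

```python
# illustrative lines: a data row starting with a space/digit, then a footer
lines = [ ' 0 TCP  51000->192.168.0.2:51000', 'total mappings: 1' ]
digits_or_space = ( ' ', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' )

# both spellings produce identical results for every line
old_style = [ not line[ 0 ] in digits_or_space for line in lines ]
new_style = [ line[ 0 ] not in digits_or_space for line in lines ]
```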
@@ -1,7 +1,6 @@

import collections
import os
import threading
- import collections
import tempfile
import time
import traceback
@@ -19,10 +18,8 @@ from hydrus.core import HydrusPaths

from hydrus.core import HydrusPubSub
from hydrus.core import HydrusSessions
from hydrus.core import HydrusThreading
from hydrus.core import HydrusTime

from hydrus.client import ClientAPI
from hydrus.client import ClientCaches
from hydrus.client import ClientConstants as CC
from hydrus.client import ClientDefaults
from hydrus.client import ClientFiles
@@ -30,6 +27,7 @@ from hydrus.client import ClientOptions

from hydrus.client import ClientManagers
from hydrus.client import ClientServices
from hydrus.client import ClientThreading
+ from hydrus.client.caches import ClientCaches
from hydrus.client.gui import QtPorting as QP
from hydrus.client.gui import ClientGUISplash
from hydrus.client.gui.lists import ClientGUIListManager
@@ -23,7 +23,9 @@ Twisted>=20.3.0

opencv-python-headless==4.5.5.64
python-mpv==1.0.3
requests==2.31.0

QtPy==2.3.0
PySide6==6.4.1
requests==2.31.0

setuptools==65.5.1
@@ -23,11 +23,12 @@ Twisted>=20.3.0

opencv-python-headless==4.5.5.64
python-mpv==0.5.2
QtPy==2.3.0
requests==2.31.0
setuptools==65.5.1

QtPy==2.3.0
PySide6==6.4.1

setuptools==65.5.1

pyinstaller==5.5
mkdocs-material
@@ -23,9 +23,12 @@ Twisted>=20.3.0

opencv-python-headless==4.5.5.64
python-mpv==1.0.3
QtPy==2.2.1
requests==2.31.0
setuptools==65.5.1

zope==5.5.0

QtPy==2.3.0
PyQt6==6.4.1
PyQt6-Qt6==6.5.0

setuptools==65.5.1
@@ -23,12 +23,13 @@ Twisted>=20.3.0

opencv-python-headless==4.5.5.64
python-mpv==1.0.3
QtPy==2.3.0
requests==2.31.0
setuptools==65.5.1

QtPy==2.3.0
PySide6==6.4.1

setuptools==65.5.1

pyinstaller==5.5
mkdocs-material