Version 439

This commit is contained in:
Hydrus Network Developer 2021-05-12 15:49:20 -05:00
parent 458838c4b8
commit 4d9199bfaa
27 changed files with 874 additions and 242 deletions

View File

@ -22,7 +22,7 @@ jobs:
- name: Build Hydrus
run: |
cd $GITHUB_WORKSPACE
cp static/build_files/pyoxidizer.bzl pyoxidizer.bzl
cp static/build_files/macos_pyoxidizer.bzl pyoxidizer.bzl
basename $(rustc --print sysroot) | sed -e "s/^stable-//" > triple.txt
pyoxidizer build --release
cd build/$(head -n 1 triple.txt)/release

View File

@ -6,15 +6,15 @@ I am continually working on the software and try to put out a new release every
This github repository is currently a weekly sync with my home dev environment, where I work on hydrus by myself. **Feel free to fork and do whatever you like with my code, but please do not make pull requests.** The [issue tracker here on Github](https://github.com/hydrusnetwork/hydrus/issues) is active and run by blessed volunteer users. I am not active here on Github, and I have difficulty keeping up with social media in general, but I welcome feedback of any sort and will eventually catch up with and reply to email, the 8kun or Endchan boards, tumblr, twitter, or the discord.
The client can do quite a lot! Please check out the help inside the release or [here](http://hydrusnetwork.github.io/hydrus/help), which includes a comprehensive getting started guide.
The client can do quite a lot! Please check out the help inside the release or [here](https://hydrusnetwork.github.io/hydrus/help), which includes a comprehensive getting started guide.
* [homepage](http://hydrusnetwork.github.io/hydrus/)
* [homepage](https://hydrusnetwork.github.io/hydrus/)
* [issue tracker](https://github.com/hydrusnetwork/hydrus/issues)
* [email](mailto:hydrus.admin@gmail.com)
* [8chan.moe /t/ (Hydrus Network General)](https://8chan.moe/t/catalog.html)
* [endchan bunker](https://endchan.net/hydrus/)
* [twitter](https://twitter.com/hydrusnetwork)
* [tumblr](http://hydrus.tumblr.com/)
* [tumblr](https://hydrus.tumblr.com/)
* [discord](https://discord.gg/wPHPCUZ)
* [patreon](https://www.patreon.com/hydrus_dev)
* [user-run repository and wiki](https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts)

View File

@ -8,6 +8,36 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_439"><a href="#version_439">version 439</a></h3></li>
<ul>
<li>tiled image renderer improvements:</li>
<li>I believe I fixed the 'non c-contiguous' crash issue with the new tile renderer. I had encountered this while developing, but it was still happening in rare situations--I _think_ in an unlucky edge case where a zoomed tile had the same resolution as the full image rotated by ninety degrees! there is now an additional catch for this situation, as well, to catch any future logical holes.</li>
<li>fixed a bug in the new renderer when copying an image to clipboard</li>
<li>I greatly mitigated the tiling artifacts with two changes:</li>
<li>- zoomed in tiles are now resized with a padding area of up to 4 pixels, with the actual tile cropped afterwards, which allows bilinear and lanczos interpolation to get accurate neighbour data and have gradient math line up with neighbouring tiles more accurately</li>
<li>- on resize and zoom, media canvases now dynamically change tile size to 'neater' float/integer conversion dimensions to reduce sub-pixel panning alignment artifacts (e.g. if your zoom is 300%, the tile is now going to have a dimension that is a multiple of 3--a sketch of this idea follows at the end of this list)</li>
<li>I hacked in a 'rescue offscreen media' calculation after any zoom event. now, if the window is completely out of view after a zoom, it'll snap to the nearest borders, lining against them or overlapping into a buffer zone depending on the zoom. let me know what you think!</li>
<li>I fixed a PyQt5 specific object tracking bug, I think the new renderer now works ok for PyQt5!</li>
<li>cleaned up some ugly code in the resize section that may have been resulting in incorrect interpolation algorithm choice in some situations</li>
<li>fixed a divide by zero issue when zooming out tiny images hugely (e.g. 32x32 at 1%)</li>
<li>media windows now try to have at least 1x1 size, just to catch some other weird error situations</li>
<li>similarly, tile and native sample sizes will have a minimum size of 1x1, which should fix issues during a delayed startup (issue #872)</li>
<li>cleaned up some misc media viewer and tile renderer code</li>
<li>.</li>
<li>the rest:</li>
<li>I started the next round of database optimisation tech, mostly testing out a pipeline upgrade. autocomplete fetching and wildcard file searching for very large queries should be a little faster to cancel now, and in some situations they should be a little faster. they may be slower for very small jobs, but I expect it to be unnoticeable. if you feel autocomplete is suddenly slow and laggy, let me know!</li>
<li>I optimised the basic 'ideal sibling normalisation' database query. this is used in a lot of places, so the little saving here should improve a bunch of work</li>
<li>I greatly optimised autocomplete sibling population, particularly for searches with a lot of tag results</li>
<li>I brushed up the tag import options UI: changed the 'use defaults' checkbox to a dropdown with clear labels for both modes; renamed the 'fetch tags even if' tag import options to 'force page fetch', which is a better description, and added tooltips to describe their ideal use; added tooltips to blacklist and whitelist; and hid the 'load from defaults' button if not set to view specific options</li>
<li>added an 'imgur single media file url' File URL Class, which points to direct file links without a referral header, which should fix some situations where these urls were pointed to by other site parsers</li>
<li>collections now store the _most recent_ import timestamp of their contents as the aggregate for time imported. previously they had no value, so would sort randomly with each other. collections therefore now sort by time imported reliably with each other, even if there is no 'correct' answer here</li>
<li>these new timestamps, service presence generally, and aggregated archive/inbox status (all of which can update thumbnail display) are now recalculated when files are removed from the collection. so, hitting _right-click->remove->inbox_ will now update collections with a mix of archived and inboxed to remove the inbox icon immediately</li>
<li>as the "Retry has no attribute..." network errors have appeared in new forms, I gave the core of the problem another look. we could never really figure this out, but it seemed to be a network version thread safety issue. I think I have ruled this out, and I now believe these may have been occuring during faulty pickling during network session save/load. I fixed the problem here, so with luck this issue will not reappear--if you have had this a lot, let me know how you get on!</li>
<li>I broke the requirements.txt into several variants based on platform. we are going to try to pin down good fixed versions of python-mpv and requests/urllib3 for each platform</li>
<li>I also updated the 'running from source' help significantly, moving everything to the requirements.txt and making sections for things like FFMPEG and libmpv</li>
<li>also updated the source and contact help around my work style and contact preferences</li>
<li>the test.py file now only does the final input() confirmation if there is an interactive stdin that can respond</li>
</ul>
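<li>as a minimal sketch of the 'neater' tile dimension idea above (assuming a simple rational zoom--the helper name here is illustrative, not actual hydrus code):</li>
<blockquote>from fractions import Fraction
def neater_tile_dimension( ideal_dimension, zoom ):
    # snap the canvas-space tile dimension to a multiple of the zoom's numerator, so the matching source region has integer size (e.g. 300% zoom -> multiples of 3)
    step = Fraction( zoom ).limit_denominator( 100 ).numerator
    return max( step, ( ideal_dimension // step ) * step )</blockquote>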
<li><h3 id="version_438"><a href="#version_438">version 438</a></h3></li>
<ul>
<li>media viewer:</li>

View File

@ -9,16 +9,18 @@
<h3 id="contact"><a href="#contact">contact and links</a></h3>
<p>I welcome all your bug reports, questions, ideas, and comments. It is always interesting to see how other people are using my software and what they generally think of it. Most of the changes every week are suggested by users.</p>
<p>You can contact me by email, twitter, tumblr, discord, or the 8chan.moe /t/ thread or Endchan board--I do not mind which. Please know that I have difficulty with social media, and while I try to reply to all messages, it sometimes takes me a while to catch up.</p>
<p>The <a href="https://github.com/hydrusnetwork/hydrus/issues">Github Issue Tracker</a> was turned off for some time, as it did not fit my workflow and I could not keep up, but it is now running again, managed by a team of volunteer users. Please feel free to submit feature requests there if you are comfortable with Github. I am not socially active on Github, and it is mostly just a mirror of my home dev environment, where I work alone.</p>
<p>I am on the discord on Saturday afternoon, USA time, if you would like to talk live, and briefly on Wednesday after I put the release out. If that is not a good time for you, feel free to leave me a DM and I will get to you when I can. There are also plenty of other hydrus users who idle who would be happy to help with any sort of support question.</p>
<p>The <a href="https://github.com/hydrusnetwork/hydrus/issues">Github Issue Tracker</a> was turned off for some time, as it did not fit my workflow and I could not keep up, but it is now running again, managed by a team of volunteer users. Please feel free to submit feature requests there if you are comfortable with Github. I am not socially active on Github, so please do not ping me there.</p>
<p>I am on the discord on Saturday afternoon, USA time, if you would like to talk live, and briefly on Wednesday after I put the release out. If that is not a good time for you, please leave me a DM and I will get to you when I can. There are also plenty of other hydrus users who idle who can help with support questions.</p>
<p>I delete all tweets and resolved email conversations after three months. So, if you think you are waiting for a reply, or I said I was going to work on something you care about and seem to have forgotten, please do nudge me.</p>
<p>Anyway:</p>
<p>I am always overwhelmed by work and behind on my messages. This is not to say that I do not enjoy just hanging out or talking about possible new features, but forgive me if some work takes longer than expected or if I cannot get to a particular idea quickly. In the same way, if you encounter actual traceback-raising errors or crashes, there is only one guy to fix it, so I prefer to know ASAP so I can prioritise.</p>
<p>I work by myself because I have acute difficulty working with others. Please do not spontaneously write long design documents or prepare other work for me--I find it more stressful than helpful, every time, and I won't give it the attention it deserves. If you would like to contribute time to hydrus, the user projects like the downloader repository and wiki help guides always have things to do.</p>
<p>That said:</p>
<ul>
<li><a href="https://hydrusnetwork.github.io/hydrus/">homepage</a></li>
<li><a href="https://github.com/hydrusnetwork/hydrus">github</a> (<a href="https://github.com/hydrusnetwork/hydrus/releases/latest">latest build</a>)</li>
<li><a href="https://github.com/hydrusnetwork/hydrus/issues">issue tracker</a></li>
<li><a href="https://8chan.moe/t/catalog.html">8chan.moe /t/ (Hydrus Network General)</a> (<a href="https://endchan.net/hydrus/">endchan bunker</a> <a href="https://endchan.org/hydrus/">(.org)</a>)</li>
<li><a href="http://hydrus.tumblr.com">tumblr</a> (<a href="http://hydrus.tumblr.com/rss">rss</a>)</li>
<li><a href="https://hydrus.tumblr.com">tumblr</a> (<a href="http://hydrus.tumblr.com/rss">rss</a>)</li>
<li><a href="https://github.com/hydrusnetwork/hydrus/releases">new downloads</a></li>
<li><a href="https://www.mediafire.com/hydrus">old downloads</a></li>
<li><a href="https://twitter.com/hydrusnetwork">twitter</a></li>

View File

@ -14,12 +14,12 @@
<li><i>ImportError: /home/user/hydrus/libX11.so.6: undefined symbol: xcb_poll_for_reply64</i></li>
</ul></p>
<p>But that by simply deleting the <i>libX11.so.6</i> file in the hydrus install directory, he was able to boot. I presume this meant my hydrus build was then relying on his local libX11.so, which happened to have better API compatibility. If you receive a similar error, you might like to try the same sort of thing. Let me know if you discover anything!</p>
<h3 id="windows_build"><a href="#windows_build">building on windows</a></h3>
<h3 id="windows_build"><a href="#windows_build">building packages on windows</a></h3>
<p>Installing some packages on windows with pip may need Visual Studio's C++ Build Tools for your version of python. Although these tools are free, it can be a pain to get them through the official (and often huge) downloader installer from Microsoft. Instead, install Chocolatey and use this one simple line:</p>
<blockquote>choco install -y vcbuildtools visualstudio2017buildtools</blockquote>
<p>Trust me, this will save a ton of headaches!</p>
<p>Trust me, just do this--it will save a ton of headaches!</p>
<h3 id="what_you_need"><a href="#what_you_need">what you will need</a></h3>
<p>You will need basic python experience, python 3.x and a number of python modules. Most of it you can get through pip.</p>
<p>You will need basic python experience, python 3.x and a number of python modules, all through pip.</p>
<p>If you are on Linux or macOS, or if you are on Windows and have an existing python you do not want to stomp all over with new modules, I recommend you create a virtual environment:</p>
<p><i>Note, if you are on Linux, it may be easier to use your package manager instead of messing around with venv. A user has written a great summary with all needed packages <a href="running_from_source_linux_packages.txt">here</a>.</i></p>
<p>If you do want to create a new venv environment:</p>
@ -33,45 +33,36 @@
</ul>
<p>That '. venv/bin/activate' line turns your venv on, and will be needed every time you run the client.pyw/server.py files. You can easily tuck it into a launch script.</p>
<p>On Windows, the path is venv&#92;Scripts&#92;activate, and the whole deal is done much easier in cmd than Powershell. If you get Powershell by default, just type 'cmd' to get an old fashioned command line. In cmd, the launch command is just 'venv&#92;scripts&#92;activate', no leading period.</p>
<p>After that, you can go nuts with pip. I think this will do for most systems:</p>
<p>After that, you can use pip to install everything you need from the appropriate requirements.txt in the base install directory. For instance, for Windows, you would go:</p>
<ul>
<li>pip3 install beautifulsoup4 chardet html5lib lxml nose numpy opencv-python-headless six Pillow psutil PyYAML requests Send2Trash service_identity twisted</li>
<li>pip3 install -r requirements_windows.txt</li>
</ul>
<p>You may want to do all that in smaller batches.</p>
<p>You will also need Qt5. Either PySide2 (default) or PyQt5 are supported, through qtpy. You can install, again, with pip:</p>
<ul>
<li>pip3 install qtpy PySide2</li>
</ul>
<p>-or-</p>
<p>If you prefer to do things manually, inspect the document and install the modules yourself.</p>
<h3 id="pyqt5"><a href="#pyqt5">PyQt5 support</a></h3>
<p>For Qt, either PySide2 (default) or PyQt5 is supported, through qtpy. For PyQt5, go:</p>
<ul>
<li>pip3 install qtpy PyQtChart PyQt5</li>
</ul>
<p>Qt 5.15 currently seems to be working well, but 5.14 caused some trouble.</p>
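<p>As a side note, qtpy resolves to whichever binding it finds at import time, and you can force a choice with the QT_API environment variable, so the same code runs under either binding. A minimal sketch:</p>
<blockquote>import os
os.environ.setdefault( 'QT_API', 'pyqt5' ) # or 'pyside2'
from qtpy import QtWidgets # now resolves to the selected binding</blockquote>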
<p>And optionally, you can add these packages:</p>
<ul>
<li>
<p><b>python-mpv - to get nice video and audio support!</b></p>
<blockquote>If you are on Linux/macOS, you will likely need the mpv library installed to your system, <i>not just mpv</i>, which is often called 'libmpv1'. You can usually get it with <i>apt</i>.</blockquote>
</li>
<li>lz4 - for some memory compression in the client</li>
<li>pylzma - for importing rare ZWS swf files</li>
<li>cloudscraper - for attempting to solve CloudFlare check pages</li>
<li>pysocks - for socks4/socks5 proxy support (although you may want to try "requests[socks]" instead)</li>
<li>PyOpenSSL - to generate a certificate if you want to run the server or the client api</li>
<li>mock httmock pyinstaller - if you want to run test.py and make a build yourself</li>
<li>PyWin32 pypiwin32 pywin32-ctypes - helpful to have if you want to make a build on Windows</li>
</ul>
<p>Here is a masterline with everything for general use:</p>
<ul>
<li>pip3 install beautifulsoup4 chardet html5lib lxml nose numpy opencv-python-headless six Pillow psutil PyOpenSSL PyYAML requests Send2Trash service_identity twisted qtpy PySide2 python-mpv lz4 pylzma cloudscraper pysocks</li>
</ul>
<p>For Windows, depending on which compiler you are using, pip can have problems building some modules like lz4 and lxml. <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/">This page</a> has a lot of prebuilt binaries--I have found it very helpful many times. You may want to update python's sqlite3.dll as well--you can get it <a href="https://www.sqlite.org/download.html">here</a>, and just drop it in C:\Python37\DLLs or wherever you have python installed. I have a fair bit of experience with Windows python, so send me a mail if you need help.</p>
<p>If you don't have ffmpeg in your PATH and you want to import videos, you will need to put a static <a href="https://ffmpeg.org/">FFMPEG</a> executable in the install_dir/bin directory. Have a look at how I do it in the extractable compiled releases if you can't figure it out. On Windows, you can copy the exe from one of those releases, or just download the latest static build right from the FFMPEG site.</p>
<h3 id="ffmpeg"><a href="#ffmpeg">FFMPEG</a></h3>
<p>If you don't have FFMPEG in your PATH and you want to import anything more fun than jpegs, you will need to put a static <a href="https://ffmpeg.org/">FFMPEG</a> executable in your PATH or the install_dir/bin directory. If you can't find a static exe on Windows, you can copy the exe from one of my extractable releases.</p>
<h3 id="mpv"><a href="#mpv">mpv support</a></h3>
<p>MPV is optional and complicated, but it is great, so it is worth the time to figure out!</p>
<p>As well as the python wrapper ('python-mpv' in the requirements.txt), you also need the underlying library. This is <i>not</i> mpv the program, but 'libmpv', often called 'libmpv1'.</p>
<p>For Windows, the dll builds are <a href="https://sourceforge.net/projects/mpv-player-windows/files/libmpv/">here</a>, although getting the right version for the current wrapper can be difficult (you will get errors when you try to load video if it is not correct). Just put it in your hydrus base install directory. You can also just grab the 'mpv-1.dll' I bundle in my release. In my experience, <a href="https://sourceforge.net/projects/mpv-player-windows/files/libmpv/mpv-dev-x86_64-20210228-git-d1be8bb.7z/download">this</a> works with python-mpv 0.5.2.</p>
<p>If you are on Linux/macOS, you can usually get 'libmpv1' with <i>apt</i>. You might have to adjust your python-mpv version (e.g. "pip3 install python-mpv==0.4.5") to get it to work.</p>
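<p>A quick way to test your libmpv/python-mpv pairing is in a python console--if the two do not match, the import or the player construction will usually fail outright rather than doing anything subtle:</p>
<blockquote>import mpv # python-mpv; this loads libmpv (e.g. mpv-1.dll) from your path
player = mpv.MPV() # an error here usually means a wrapper/library version mismatch
player.terminate()</blockquote>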
<h3 id="sqlite"><a href="#sqlite">SQLite</a></h3>
<p>If you can, update python's SQLite--it'll improve performance.</p>
<p>On Windows, get the 64-bit sqlite3.dll <a href="https://www.sqlite.org/download.html">here</a>, and just drop it in C:\Python37\DLLs or wherever you have python installed. You'll be overwriting the old file, so make a backup if you want to (I have never had trouble updating like this, however).</p>
<p>I don't know how to do it for Linux or macOS, so if you do, please let me know!</p>
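<p>You can check which SQLite your python is actually loading, before and after the swap:</p>
<blockquote>import sqlite3
print( sqlite3.sqlite_version ) # the version of the loaded library, not of the python module</blockquote>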
<h3 id="additional_windows"><a href="#additional_windows">additional windows info</a></h3>
<p>This may not matter any more, but in the old days, Windows pip could have problems building modules like lz4 and lxml. <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/">This page</a> has a lot of prebuilt binaries--I have found it very helpful many times.</p>
<p>I have a fair bit of experience with Windows python, so send me a mail if you need help.</p>
<h3 id="running_it"><a href="#running_it">running it</a></h3>
<p>Once you have everything set up, client.pyw and server.py should look for and run off client.db and server.db just like the executables. They will look in the 'db' directory by default, or anywhere you point them with the "-d" parameter, again just like the executables.</p>
<p>I develop hydrus on and am most experienced with Windows, so the program is more stable and reasonable on that. I do not have as much experience with Linux or macOS, so I would particularly appreciate your Linux/macOS bug reports and any informed suggestions.</p>
<p>I develop hydrus on and am most experienced with Windows, so the program is more stable and reasonable on that. I do not have as much experience with Linux or macOS, but I still appreciate and will work on your Linux/macOS bug reports.</p>
<h3 id="my_code"><a href="#my_code">my code</a></h3>
<p>Unlike most software people, I am more INFJ than INTP/J. My coding style is unusual and unprofessional, and everything is pretty much hacked together. Please look through the source if you are interested in how things work and ask me if you don't understand something. I'm constantly throwing new code together and then cleaning and overhauling it down the line.</p>
<p>I work strictly alone, so while I am very interested in detailed bug reports or suggestions for good libraries to use, I am not looking for pull requests. Everything I do is <a href="https://github.com/sirkris/WTFPL/blob/master/WTFPL.md">WTFPL</a>, so feel free to fork and play around with things on your end as much as you like.</p>
<p>My coding style is unusual and unprofessional. Everything is pretty much hacked together. If you are interested in how things work, please do look through the source and ask me if you don't understand something.</p>
<p>I'm constantly throwing new code together and then cleaning and overhauling it down the line. I work strictly alone, however, so while I am very interested in detailed bug reports or suggestions for good libraries to use, I am not looking for pull requests or suggestions on style. I know a lot of things are a mess. Everything I do is <a href="https://github.com/sirkris/WTFPL/blob/master/WTFPL.md">WTFPL</a>, so feel free to fork and play around with things on your end as much as you like.</p>
</div>
</body>
</html>

View File

@ -539,11 +539,19 @@ class ImageTileCache( object ):
self._data_cache.Clear()
def GetTile( self, image_renderer, media, clip_rect, target_resolution ):
def GetTile( self, image_renderer: ClientRendering.ImageRenderer, media, clip_rect, target_resolution ):
hash = media.GetHash()
key = ( hash, clip_rect, target_resolution )
key = (
hash,
clip_rect.left(),
clip_rect.top(),
clip_rect.right(),
clip_rect.bottom(),
target_resolution.width(),
target_resolution.height()
)
result = self._data_cache.GetIfHasData( key )
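# an illustrative aside on the key change above (not hydrus code): depending on the Qt binding,
# value types like QRect may be unhashable or may hash by identity, so equal-looking keys can
# miss the cache. flattening to plain ints gives stable value-based dict keys:
def make_tile_key( media_hash, clip_rect, target_resolution ):
    
    # plain ints and bytes hash by value under PySide2 and PyQt5 alike
    return ( media_hash, clip_rect.left(), clip_rect.top(), clip_rect.right(), clip_rect.bottom(), target_resolution.width(), target_resolution.height() )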

View File

@ -199,7 +199,7 @@ def ResizeNumPyImageForMediaViewer( mime, numpy_image, target_resolution ):
( scale_up_quality, scale_down_quality ) = new_options.GetMediaZoomQuality( mime )
( image_width, image_height, depth ) = numpy_image.shape
( image_height, image_width, depth ) = numpy_image.shape
if ( target_width, target_height ) == ( image_height, image_width ):
@ -207,7 +207,7 @@ def ResizeNumPyImageForMediaViewer( mime, numpy_image, target_resolution ):
else:
if target_width > image_height or target_height > image_width:
if target_width > image_width or target_height > image_height:
interpolation = cv_interpolation_enum_lookup[ scale_up_quality ]
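# a quick aside on the fix above (illustrative, not hydrus code): numpy arrays are indexed
# row-major, so an image's .shape is ( height, width, channels ), the reverse of the usual
# ( width, height ) resolution convention:
import numpy
example_image = numpy.zeros( ( 1080, 1920, 3 ), dtype = numpy.uint8 ) # a 1920x1080 RGB image
( example_height, example_width, example_depth ) = example_image.shape # 1080, 1920, 3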

View File

@ -100,35 +100,87 @@ class ImageRenderer( object ):
def _GetNumPyImage( self, clip_rect: QC.QRect, target_resolution: QC.QSize ):
clip_topleft = clip_rect.topLeft()
clip_size = clip_rect.size()
( my_width, my_height ) = self.GetResolution()
( my_width, my_height ) = self._resolution
my_full_rect = QC.QRect( 0, 0, my_width, my_height )
ZERO_MARGIN = QC.QMargins( 0, 0, 0, 0 )
clip_padding = ZERO_MARGIN
target_padding = ZERO_MARGIN
if clip_rect == my_full_rect:
# full image
source = self._numpy_image
else:
( x, y ) = ( clip_topleft.x(), clip_topleft.y() )
( clip_width, clip_height ) = ( clip_size.width(), clip_size.height() )
if target_resolution.width() > clip_size.width():
# this is a tile that is being scaled!
# to reduce tiling artifacts, we want to oversample the clip for our tile so lanczos and friends can get good neighbour data and then crop it
# therefore, we'll figure out some padding for the clip, and then calculate what that means in the target end, and do a crop at the end
# we want to pad. that means getting a larger resolution and keeping a record of the padding
# can't pad if we are at 0 for x or y, or up against width/height max
# but if we can pad, we will get a larger clip size and then _clip_ a better target endpoint. this is tricky.
PADDING_AMOUNT = 4
left_padding = min( PADDING_AMOUNT, clip_rect.x() )
top_padding = min( PADDING_AMOUNT, clip_rect.y() )
right_padding = min( PADDING_AMOUNT, my_width - clip_rect.bottomRight().x() )
bottom_padding = min( PADDING_AMOUNT, my_height - clip_rect.bottomRight().y() )
clip_padding = QC.QMargins( left_padding, top_padding, right_padding, bottom_padding )
# this is ugly and super inaccurate
target_padding = clip_padding * ( target_resolution.width() / clip_size.width() )
clip_rect_with_padding = clip_rect + clip_padding
( x, y, clip_width, clip_height ) = ( clip_rect_with_padding.x(), clip_rect_with_padding.y(), clip_rect_with_padding.width(), clip_rect_with_padding.height() )
source = self._numpy_image[ y : y + clip_height, x : x + clip_width ]
if target_resolution == clip_size:
return source.copy()
# 100% zoom
result = source
else:
numpy_image = ClientImageHandling.ResizeNumPyImageForMediaViewer( self._mime, source, ( target_resolution.width(), target_resolution.height() ) )
if clip_padding == ZERO_MARGIN:
result = ClientImageHandling.ResizeNumPyImageForMediaViewer( self._mime, source, ( target_resolution.width(), target_resolution.height() ) )
else:
target_width_with_padding = target_resolution.width() + target_padding.left() + target_padding.right()
target_height_with_padding = target_resolution.height() + target_padding.top() + target_padding.bottom()
result = ClientImageHandling.ResizeNumPyImageForMediaViewer( self._mime, source, ( target_width_with_padding, target_height_with_padding ) )
y = target_padding.top()
x = target_padding.left()
result = result[ y : y + target_resolution.height(), x : x + target_resolution.width() ]
return numpy_image
if not result.data.c_contiguous:
result = result.copy()
return result
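# a minimal standalone sketch of the pad-resize-crop idea above (illustrative names and
# simplified rounding, not hydrus's actual API): oversample the clip so the interpolator sees
# real neighbour pixels, then crop the scaled padding back off
import cv2
def resize_tile_with_padding( image, x, y, w, h, scale, padding = 4 ):
    
    # clamp the padding so we never read outside the source image
    left = min( padding, x )
    top = min( padding, y )
    right = min( padding, image.shape[1] - ( x + w ) )
    bottom = min( padding, image.shape[0] - ( y + h ) )
    
    clip = image[ y - top : y + h + bottom, x - left : x + w + right ]
    
    scaled = cv2.resize( clip, None, fx = scale, fy = scale, interpolation = cv2.INTER_LANCZOS4 )
    
    # crop the scaled padding back off, leaving just the tile
    ( crop_x, crop_y ) = ( round( left * scale ), round( top * scale ) )
    return scaled[ crop_y : crop_y + round( h * scale ), crop_x : crop_x + round( w * scale ) ]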
def _Initialise( self ):
@ -145,7 +197,7 @@ class ImageRenderer( object ):
if self._numpy_image is None:
( width, height ) = self.GetResolution()
( width, height ) = self._resolution
return width * height * 3
@ -165,7 +217,9 @@ class ImageRenderer( object ):
if clip_rect is None:
clip_rect = QC.QRect( QC.QPoint( 0, 0 ), QC.QSize( self._resolution ) )
( width, height ) = self._resolution
clip_rect = QC.QRect( QC.QPoint( 0, 0 ), QC.QSize( width, height ) )
if target_resolution is None:
@ -186,7 +240,9 @@ class ImageRenderer( object ):
if clip_rect is None:
clip_rect = QC.QRect( QC.QPoint( 0, 0 ), QC.QSize( self._resolution ) )
( width, height ) = self._resolution
clip_rect = QC.QRect( QC.QPoint( 0, 0 ), QC.QSize( width, height ) )
if target_resolution is None:

View File

@ -1268,6 +1268,10 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
self._matchable_search_texts = set()
#
self._RecalcPythonHash()
def __eq__( self, other ):
@ -1286,12 +1290,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
def __hash__( self ):
if self._predicate_type == PREDICATE_TYPE_PARENT:
return self._parent_key.__hash__()
return ( self._predicate_type, self._value, self._inclusive ).__hash__()
return self._python_hash
def __repr__( self ):
@ -1299,6 +1298,18 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
return 'Predicate: ' + str( ( self._predicate_type, self._value, self._inclusive, self.GetCount() ) )
def _RecalcPythonHash( self ):
if self._predicate_type == PREDICATE_TYPE_PARENT:
self._python_hash = self._parent_key.__hash__()
else:
self._python_hash = ( self._predicate_type, self._value, self._inclusive ).__hash__()
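# an aside on the pattern above (illustrative, not hydrus code): when objects serve as dict/set
# keys, recomputing __hash__ on every lookup adds up, so the hash is computed once and refreshed
# only when the hashed fields change--every mutator must call the recalc
class CachedHashExample( object ):
    
    def __init__( self, value ):
        
        self._value = value
        self._RecalcPythonHash()
        
    
    def _RecalcPythonHash( self ):
        
        self._python_hash = ( type( self ), self._value ).__hash__()
        
    
    def __hash__( self ):
        
        return self._python_hash
        
    
    def SetValue( self, value ):
        
        self._value = value
        self._RecalcPythonHash() # forgetting this in any mutator silently corrupts lookups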
def _GetSerialisableInfo( self ):
if self._predicate_type in ( PREDICATE_TYPE_SYSTEM_RATING, PREDICATE_TYPE_SYSTEM_FILE_SERVICE ):
@ -1413,6 +1424,8 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
self._value = tuple( self._value )
self._RecalcPythonHash()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
@ -1574,7 +1587,12 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
self._inclusive = operator == '+'
else: self._inclusive = True
else:
self._inclusive = True
self._RecalcPythonHash()
return self._inclusive
@ -1785,6 +1803,8 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
self._inclusive = inclusive
self._RecalcPythonHash()
def SetKnownParents( self, parents: typing.Set[ str ] ):

View File

@ -2751,32 +2751,30 @@ class DB( HydrusDB.HydrusDB ):
return set( self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id ) ).union( self._CacheTagParentsGetApplicableServiceIds( tag_service_id ) )
def _CacheTagDisplayGetChainMembers( self, display_type, tag_service_id, tag_id ):
# all parent definitions are sibling collapsed, so are terminus of their sibling chains
# so get all of the parent chain, then get all chains that point to those
ideal_tag_id = self._CacheTagSiblingsGetIdeal( display_type, tag_service_id, tag_id )
parent_chain_members = self._CacheTagParentsGetChainsMembers( display_type, tag_service_id, ( ideal_tag_id, ) )
complete_chain_members = self._CacheTagSiblingsGetChainsMembersFromIdeals( display_type, tag_service_id, parent_chain_members )
return complete_chain_members
def _CacheTagDisplayGetChainsMembers( self, display_type, tag_service_id, tag_ids ):
# all parent definitions are sibling collapsed, so are terminus of their sibling chains
# so get all of the parent chain, then get all chains that point to those
ideal_tag_ids = self._CacheTagSiblingsGetIdeals( display_type, tag_service_id, tag_ids )
parent_chain_members = self._CacheTagParentsGetChainsMembers( display_type, tag_service_id, ideal_tag_ids )
complete_chain_members = self._CacheTagSiblingsGetChainsMembersFromIdeals( display_type, tag_service_id, parent_chain_members )
return complete_chain_members
with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
with HydrusDB.TemporaryIntegerTable( self._c, [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name:
self._CacheTagSiblingsGetIdealsIntoTable( display_type, tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name )
with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_parent_chain_members_table_name:
self._CacheTagParentsGetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name )
with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_chain_members_table_name:
self._CacheTagSiblingsGetChainsMembersFromIdealsTables( display_type, tag_service_id, temp_parent_chain_members_table_name, temp_chain_members_table_name )
return self._STS( self._c.execute( 'SELECT tag_id FROM {};'.format( temp_chain_members_table_name ) ) )
def _CacheTagDisplayGetImpliedBy( self, display_type, tag_service_id, tag_id ):
@ -3628,20 +3626,6 @@ class DB( HydrusDB.HydrusDB ):
return self._service_ids_to_parent_applicable_service_ids[ tag_service_id ]
def _CacheTagParentsGetChainMembers( self, display_type: int, tag_service_id: int, ideal_tag_id: int ):
cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
chain_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) )
if len( chain_ids ) == 0:
chain_ids = { ideal_tag_id }
return chain_ids
def _CacheTagParentsGetChainsMembers( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
if len( ideal_tag_ids ) == 0:
@ -3681,6 +3665,49 @@ class DB( HydrusDB.HydrusDB ):
return chain_tag_ids
def _CacheTagParentsGetChainsMembersTables( self, display_type: int, tag_service_id: int, ideal_tag_ids_table_name: str, results_table_name: str ):
# if it isn't crazy, I should write this whole lad to be one or two recursive queries
cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
first_ideal_tag_ids = self._STS( self._c.execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) )
chain_tag_ids = set( first_ideal_tag_ids )
we_have_looked_up = set()
next_search_tag_ids = set( first_ideal_tag_ids )
while len( next_search_tag_ids ) > 0:
if len( next_search_tag_ids ) == 1:
( ideal_tag_id, ) = next_search_tag_ids
round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) )
else:
with HydrusDB.TemporaryIntegerTable( self._c, next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name:
round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) )
new_tag_ids = round_of_tag_ids.difference( chain_tag_ids )
if len( new_tag_ids ) > 0:
self._c.executemany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? );'.format( results_table_name ), ( ( tag_id, ) for tag_id in new_tag_ids ) )
chain_tag_ids.update( new_tag_ids )
we_have_looked_up.update( next_search_tag_ids )
next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up )
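# a sketch of the 'one or two recursive queries' idea the comment above mentions (untested and
# illustrative--the table names are placeholders): sqlite's WITH RECURSIVE can walk the whole
# undirected chain in a single statement, and sqlite permits a WITH prefix on INSERT
recursive_chain_sketch = '''
    WITH RECURSIVE
    edges ( tag_id, other_tag_id ) AS (
        SELECT child_tag_id, ancestor_tag_id FROM parents_lookup UNION SELECT ancestor_tag_id, child_tag_id FROM parents_lookup
    ),
    chain ( tag_id ) AS (
        SELECT ideal_tag_id FROM ideal_tag_ids_table UNION SELECT other_tag_id FROM edges CROSS JOIN chain USING ( tag_id )
    )
    INSERT OR IGNORE INTO results_table ( tag_id ) SELECT tag_id FROM chain;
'''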
def _CacheTagParentsGetDescendants( self, display_type: int, tag_service_id: int, ideal_tag_id: int ):
cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
@ -4397,6 +4424,32 @@ class DB( HydrusDB.HydrusDB ):
return chain_tag_ids
def _CacheTagSiblingsFilterChainedIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
# get the ideal_tag_ids of any sibling chains the given tag_ids are part of
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
# keep these separate--older sqlite can't do cross join to an OR ON
# temp tags to lookup
self._c.execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
self._STI( self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
def _CacheTagSiblingsFilterChainedIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
# get the tag_ids that are part of a sibling chain
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
# keep these separate--older sqlite can't do cross join to an OR ON
# temp tags to lookup
self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
self._STI( self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
def _CacheTagSiblingsGenerate( self, tag_service_id ):
( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
@ -4469,6 +4522,16 @@ class DB( HydrusDB.HydrusDB ):
return sibling_tag_ids
def _CacheTagSiblingsGetChainsMembersFromIdealsTables( self, display_type, tag_service_id, ideal_tag_ids_table_name, results_table_name ) -> typing.Set[ int ]:
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) )
# tags to lookup
self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
def _CacheTagSiblingsGetApplicableServiceIds( self, tag_service_id ):
if self._service_ids_to_sibling_applicable_service_ids is None:
@ -4517,6 +4580,22 @@ class DB( HydrusDB.HydrusDB ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
cursor = self._c.execute(
'SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
magic_case,
temp_tag_ids_table_name,
cache_tag_siblings_lookup_table_name
)
)
return self._STS( cursor )
'''
no_ideal_found_tag_ids = set( tag_ids )
ideal_tag_ids = set()
@ -4533,6 +4612,25 @@ class DB( HydrusDB.HydrusDB ):
return ideal_tag_ids
'''
def _CacheTagSiblingsGetIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
cursor = self._c.execute(
'INSERT OR IGNORE INTO {} ( ideal_tag_id ) SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
results_table_name,
magic_case,
tag_ids_table_name,
cache_tag_siblings_lookup_table_name
)
)
return self._STS( cursor )
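# an aside on the CASE above: after the LEFT OUTER JOIN, ideal_tag_id is NULL for any tag with
# no sibling row, so the CASE falls back to the tag's own id--equivalent to
# COALESCE( ideal_tag_id, tag_id ), i.e. a tag's ideal is itself unless the lookup says otherwise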
def _CacheTagSiblingsGetIdealsToChains( self, display_type, tag_service_id, ideal_tag_ids ):
@ -8283,15 +8381,21 @@ class DB( HydrusDB.HydrusDB ):
else:
subtag_ids = self._GetSubtagIdsFromWildcard( file_service_id, tag_service_id, half_complete_searchable_subtag, job_key = job_key )
if namespace == '':
with HydrusDB.TemporaryIntegerTable( self._c, [], 'subtag_id' ) as temp_subtag_ids_table_name:
tag_ids = self._GetTagIdsFromSubtagIds( file_service_id, tag_service_id, subtag_ids, job_key = job_key )
self._GetSubtagIdsFromWildcardIntoTable( file_service_id, tag_service_id, half_complete_searchable_subtag, temp_subtag_ids_table_name, job_key = job_key )
else:
tag_ids = self._GetTagIdsFromNamespaceIdsSubtagIds( file_service_id, tag_service_id, namespace_ids, subtag_ids, job_key = job_key )
if namespace == '':
tag_ids = self._GetTagIdsFromSubtagIdsTable( file_service_id, tag_service_id, temp_subtag_ids_table_name, job_key = job_key )
else:
with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
tag_ids = self._GetTagIdsFromNamespaceIdsSubtagIdsTables( file_service_id, tag_service_id, temp_namespace_ids_table_name, temp_subtag_ids_table_name, job_key = job_key )
@ -8313,11 +8417,39 @@ class DB( HydrusDB.HydrusDB ):
tag_ids_without_siblings = list( tag_ids )
for sibling_tag_service_id in sibling_tag_service_ids:
seen_ideal_tag_ids = collections.defaultdict( set )
for batch_of_tag_ids in HydrusData.SplitListIntoChunks( tag_ids_without_siblings, 10240 ):
seen_ideal_tag_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, batch_of_tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
for sibling_tag_service_id in sibling_tag_service_ids:
if job_key is not None and job_key.IsCancelled():
return set()
with HydrusDB.TemporaryIntegerTable( self._c, [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name:
self._CacheTagSiblingsFilterChainedIdealsIntoTable( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name )
with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_chained_tag_ids_table_name:
self._CacheTagSiblingsGetChainsMembersFromIdealsTables( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_ideal_tag_ids_table_name, temp_chained_tag_ids_table_name )
tag_ids.update( self._STI( self._c.execute( 'SELECT tag_id FROM {};'.format( temp_chained_tag_ids_table_name ) ) ) )
for batch_of_tag_ids in HydrusData.SplitListIntoChunks( tag_ids_without_siblings, 10240 ):
'''
for batch_of_tag_ids in HydrusData.SplitListIntoChunks( tag_ids_without_siblings, 10240 ):
for sibling_tag_service_id in sibling_tag_service_ids:
if job_key is not None and job_key.IsCancelled():
@ -8326,12 +8458,13 @@ class DB( HydrusDB.HydrusDB ):
ideal_tag_ids = self._CacheTagSiblingsGetIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, batch_of_tag_ids )
ideal_tag_ids.difference_update( seen_ideal_tag_ids )
seen_ideal_tag_ids.update( ideal_tag_ids )
ideal_tag_ids.difference_update( seen_ideal_tag_ids[ sibling_tag_service_id ] )
seen_ideal_tag_ids[ sibling_tag_service_id ].update( ideal_tag_ids )
tag_ids.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, ideal_tag_ids ) )
'''
return tag_ids
@ -8992,6 +9125,16 @@ class DB( HydrusDB.HydrusDB ):
return self._GetHashIdsFromTagIds( tag_display_type, file_service_key, tag_search_context, tag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
def _GetHashIdsFromNamespaceIdsSubtagIdsTables( self, tag_display_type: int, file_service_key, tag_search_context: ClientSearch.TagSearchContext, namespace_ids_table_name, subtag_ids_table_name, hash_ids = None, hash_ids_table_name = None, job_key = None ):
file_service_id = self.modules_services.GetServiceId( file_service_key )
tag_service_id = self.modules_services.GetServiceId( tag_search_context.service_key )
tag_ids = self._GetTagIdsFromNamespaceIdsSubtagIdsTables( file_service_id, tag_service_id, namespace_ids_table_name, subtag_ids_table_name, job_key = job_key )
return self._GetHashIdsFromTagIds( tag_display_type, file_service_key, tag_search_context, tag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
def _GetHashIdsFromNoteName( self, name: str, hash_ids_table_name: str ):
label_id = self.modules_texts.GetLabelId( name )
@ -10193,6 +10336,16 @@ class DB( HydrusDB.HydrusDB ):
return self._GetHashIdsFromTagIds( tag_display_type, file_service_key, tag_search_context, tag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
def _GetHashIdsFromSubtagIdsTable( self, tag_display_type: int, file_service_key, tag_search_context: ClientSearch.TagSearchContext, subtag_ids_table_name, hash_ids = None, hash_ids_table_name = None, job_key = None ):
file_service_id = self.modules_services.GetServiceId( file_service_key )
tag_service_id = self.modules_services.GetServiceId( tag_search_context.service_key )
tag_ids = self._GetTagIdsFromSubtagIdsTable( file_service_id, tag_service_id, subtag_ids_table_name, job_key = job_key )
return self._GetHashIdsFromTagIds( tag_display_type, file_service_key, tag_search_context, tag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
def _GetHashIdsFromTag( self, tag_display_type: int, file_service_key, tag_search_context: ClientSearch.TagSearchContext, tag, hash_ids = None, hash_ids_table_name = None, allow_unnamespaced_to_fetch_namespaced = True, job_key = None ):
( namespace, subtag ) = HydrusTags.SplitTag( tag )
@ -10445,17 +10598,23 @@ class DB( HydrusDB.HydrusDB ):
file_service_id = self.modules_services.GetServiceId( file_service_key )
tag_service_id = self.modules_services.GetServiceId( tag_search_context.service_key )
possible_subtag_ids = self._GetSubtagIdsFromWildcard( file_service_id, tag_service_id, subtag_wildcard, job_key = job_key )
if namespace_wildcard != '':
with HydrusDB.TemporaryIntegerTable( self._c, [], 'subtag_id' ) as temp_subtag_ids_table_name:
possible_namespace_ids = self._GetNamespaceIdsFromWildcard( namespace_wildcard )
self._GetSubtagIdsFromWildcardIntoTable( file_service_id, tag_service_id, subtag_wildcard, temp_subtag_ids_table_name, job_key = job_key )
return self._GetHashIdsFromNamespaceIdsSubtagIds( tag_display_type, file_service_key, tag_search_context, possible_namespace_ids, possible_subtag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
else:
return self._GetHashIdsFromSubtagIds( tag_display_type, file_service_key, tag_search_context, possible_subtag_ids, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
if namespace_wildcard != '':
possible_namespace_ids = self._GetNamespaceIdsFromWildcard( namespace_wildcard )
with HydrusDB.TemporaryIntegerTable( self._c, possible_namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
return self._GetHashIdsFromNamespaceIdsSubtagIdsTables( tag_display_type, file_service_key, tag_search_context, temp_namespace_ids_table_name, temp_subtag_ids_table_name, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
else:
return self._GetHashIdsFromSubtagIdsTable( tag_display_type, file_service_key, tag_search_context, temp_subtag_ids_table_name, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
@ -12133,6 +12292,117 @@ class DB( HydrusDB.HydrusDB ):
return result_subtag_ids
def _GetSubtagIdsFromWildcardIntoTable( self, file_service_id: int, tag_service_id: int, subtag_wildcard, subtag_id_table_name, job_key = None ):
if tag_service_id == self.modules_services.combined_tag_service_id:
search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = ( tag_service_id, )
for search_tag_service_id in search_tag_service_ids:
if '*' in subtag_wildcard:
subtags_fts4_table_name = self._CacheTagsGetSubtagsFTS4TableName( file_service_id, search_tag_service_id )
wildcard_has_fts4_searchable_characters = WildcardHasFTS4SearchableCharacters( subtag_wildcard )
if subtag_wildcard == '*':
# hellmode, but shouldn't be called normally
cursor = self._c.execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) )
elif ClientSearch.IsComplexWildcard( subtag_wildcard ) or not wildcard_has_fts4_searchable_characters:
# FTS4 does not support complex wildcards, so instead we'll search our raw subtags
# however, since we want to search 'searchable' text, we use the 'searchable subtags map' to cross between real and searchable
like_param = ConvertWildcardToSQLiteLikeParameter( subtag_wildcard )
if subtag_wildcard.startswith( '*' ) or not wildcard_has_fts4_searchable_characters:
# this is a SCAN, but there we go
# a potential optimisation here, in future, is to store fts4 of subtags reversed, then for '*amus', we can just search that reverse cache for 'suma*'
# and this would only double the size of the fts4 cache, the largest cache in the whole db! a steal!
# it also would not fix '*amu*', but with some cleverness could speed up '*amus ar*'
query = 'SELECT docid FROM {} WHERE subtag LIKE ?;'.format( subtags_fts4_table_name )
cursor = self._c.execute( query, ( like_param, ) )
else:
# we have an optimisation here--rather than searching all subtags for bl*ah, let's search all the bl* subtags for bl*ah!
prefix_fts4_wildcard = subtag_wildcard.split( '*' )[0]
prefix_fts4_wildcard_param = '"{}*"'.format( prefix_fts4_wildcard )
query = 'SELECT docid FROM {} WHERE subtag MATCH ? AND subtag LIKE ?;'.format( subtags_fts4_table_name )
cursor = self._c.execute( query, ( prefix_fts4_wildcard_param, like_param ) )
else:
# we want the " " wrapping our search text to keep whitespace words connected and in order
# "samus ar*" should not match "around samus"
# simple 'sam*' style subtag, so we can search fts4 no prob
subtags_fts4_param = '"{}"'.format( subtag_wildcard )
cursor = self._c.execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) )
cancelled_hook = None
if job_key is not None:
cancelled_hook = job_key.IsCancelled
loop_of_subtag_id_tuples = HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook )
self._c.executemany( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), loop_of_subtag_id_tuples )
else:
# old notes from before we had searchable subtag map. I deleted that map once, albeit in an older and less efficient form. *don't delete it again, it has use*
#
# NOTE: doing a subtag = 'blah' lookup on subtags_fts4 tables is ultra slow, lmao!
# attempts to match '/a/' to 'a' with clever FTS4 MATCHing (i.e. a MATCH on a*\b, then an '= a') proved not super successful
# in testing, it was still a bit slow. my guess is it is still iterating through all the nodes for ^a*, the \b just makes it a bit more efficient sometimes
# in tests '^a\b' was about twice as fast as 'a*', so the \b might not even be helping at all
# so, I decided to move back to a lean and upgraded searchable subtag map, and here we are
subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, search_tag_service_id )
searchable_subtag = subtag_wildcard
if self.modules_tags.SubtagExists( searchable_subtag ):
searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag )
self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), ( searchable_subtag_id, ) )
self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) SELECT subtag_id FROM {} WHERE searchable_subtag_id = ?;'.format( subtag_id_table_name, subtags_searchable_map_table_name ), ( searchable_subtag_id, ) )
if job_key is not None and job_key.IsCancelled():
self._c.execute( 'DELETE FROM {};'.format( subtag_id_table_name ) )
return
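# an illustrative sketch of the wildcard-to-LIKE conversion used above (not necessarily the
# real helper): '*' maps to LIKE's '%', so 'sam*us' becomes 'sam%us'
def convert_wildcard_to_like_param_sketch( wildcard ):
    
    # note a literal '%' or '_' in a subtag would also act as a LIKE wildcard unless escaped
    return wildcard.replace( '*', '%' )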
def _GetTagIdsFromNamespaceIds( self, file_service_id: int, tag_service_id: int, namespace_ids: typing.Collection[ int ], job_key = None ):
if len( namespace_ids ) == 0:
@ -12200,49 +12470,54 @@ class DB( HydrusDB.HydrusDB ):
return set()
final_result_tag_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name:
with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
if tag_service_id == self.modules_services.combined_tag_service_id:
return self._GetTagIdsFromNamespaceIdsSubtagIdsTables( file_service_id, tag_service_id, temp_namespace_ids_table_name, temp_subtag_ids_table_name, job_key = job_key )
def _GetTagIdsFromNamespaceIdsSubtagIdsTables( self, file_service_id: int, tag_service_id: int, namespace_ids_table_name: str, subtag_ids_table_name: str, job_key = None ):
final_result_tag_ids = set()
if tag_service_id == self.modules_services.combined_tag_service_id:
search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = ( tag_service_id, )
for search_tag_service_id in search_tag_service_ids:
tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id )
# temp subtags to tags to temp namespaces
cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( subtag_ids_table_name, tags_table_name, namespace_ids_table_name ) )
cancelled_hook = None
if job_key is not None:
cancelled_hook = job_key.IsCancelled
result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
if job_key is not None:
if job_key.IsCancelled():
search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = ( tag_service_id, )
for search_tag_service_id in search_tag_service_ids:
tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id )
# temp subtags to tags to temp namespaces
cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( temp_subtag_ids_table_name, tags_table_name, temp_namespace_ids_table_name ) )
cancelled_hook = None
if job_key is not None:
cancelled_hook = job_key.IsCancelled
result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
if job_key is not None:
if job_key.IsCancelled():
return set()
final_result_tag_ids.update( result_tag_ids )
return set()
final_result_tag_ids.update( result_tag_ids )
return final_result_tag_ids
@ -12254,45 +12529,50 @@ class DB( HydrusDB.HydrusDB ):
return set()
final_result_tag_ids = set()
with HydrusDB.TemporaryIntegerTable( self._c, subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name:
if tag_service_id == self.modules_services.combined_tag_service_id:
return self._GetTagIdsFromSubtagIdsTable( file_service_id, tag_service_id, temp_subtag_ids_table_name, job_key = job_key )
def _GetTagIdsFromSubtagIdsTable( self, file_service_id: int, tag_service_id: int, subtag_ids_table_name: str, job_key = None ):
final_result_tag_ids = set()
if tag_service_id == self.modules_services.combined_tag_service_id:
search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = ( tag_service_id, )
for search_tag_service_id in search_tag_service_ids:
tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id )
# temp subtags to tags
cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( subtag_ids_table_name, tags_table_name ) )
cancelled_hook = None
if job_key is not None:
search_tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
else:
search_tag_service_ids = ( tag_service_id, )
cancelled_hook = job_key.IsCancelled
for search_tag_service_id in search_tag_service_ids:
result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
if job_key is not None:
tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id )
# temp subtags to tags
cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( temp_subtag_ids_table_name, tags_table_name ) )
cancelled_hook = None
if job_key is not None:
if job_key.IsCancelled():
cancelled_hook = job_key.IsCancelled
return set()
result_tag_ids = self._STS( HydrusDB.ReadFromCancellableCursor( cursor, 128, cancelled_hook = cancelled_hook ) )
if job_key is not None:
if job_key.IsCancelled():
return set()
final_result_tag_ids.update( result_tag_ids )
final_result_tag_ids.update( result_tag_ids )
return final_result_tag_ids
@ -19222,6 +19502,32 @@ class DB( HydrusDB.HydrusDB ):
if version == 438:
try:
domain_manager = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
domain_manager.Initialise()
#
domain_manager.OverwriteDefaultURLClasses( ( 'imgur single media file url', ) )
#
self.modules_serialisable.SetJSONDump( domain_manager )
except Exception as e:
HydrusData.PrintException( e )
message = 'Trying to update some url classes failed! Please let hydrus dev know!'
self.pub_initial_message( message )
self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )

View File

@ -1620,7 +1620,7 @@ class EditMediaViewOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
if action == CC.MEDIA_VIEWER_ACTION_SHOW_WITH_MPV and self._mime in ( HC.IMAGE_GIF, HC.GENERAL_ANIMATION ):
s += ' (will show image gifs with native viewer)'
s += ' (will show unanimated gifs with native viewer)'
self._media_show_action.addItem( s, action )
@ -2023,17 +2023,22 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
default_panel = ClientGUICommon.StaticBox( self, 'default options' )
self._is_default = QW.QCheckBox( default_panel )
self._use_default_dropdown = ClientGUICommon.BetterChoice( default_panel )
tt = 'If this is checked, the client will refer to the defaults (as set under "network->downloaders->manage default tag import options") for the appropriate tag import options at the time of import.'
self._use_default_dropdown.addItem( 'use the default tag import options at the time of import', True )
self._use_default_dropdown.addItem( 'set custom tag import options just for this downloader', False )
tt = 'Normally, the client will refer to the defaults (as set under "network->downloaders->manage default tag import options") for the appropriate tag import options at the time of import.'
tt += os.linesep * 2
tt += 'It is easier to manage tag import options by relying on the defaults, since any change in the single default location will update all the eventual import queues that refer to those defaults, whereas having specific options for every subscription or downloader means making an update to the blacklist or tag filter needs to be repeated dozens or hundreds of times.'
tt += 'It is easier to work this way, since you can change a single default setting and update all current and future downloaders that refer to those defaults, whereas having specific options for every subscription or downloader means you have to update every single one just to make a little change somewhere.'
tt += os.linesep * 2
tt += 'But if you are doing a one-time import that has some unusual tag rules, uncheck this and set those specific rules here.'
tt += 'But if you are doing a one-time import that has some unusual tag rules, set some specific rules here.'
self._is_default.setToolTip( tt )
self._use_default_dropdown.setToolTip( tt )
self._load_default_options = ClientGUICommon.BetterButton( default_panel, 'load one of the default options', self._LoadDefaultOptions )
#
self._load_default_options = ClientGUICommon.BetterButton( self, 'load one of the default options', self._LoadDefaultOptions )
#
@@ -2046,6 +2051,18 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
self._fetch_tags_even_if_url_recognised_and_file_already_in_db = QW.QCheckBox( downloader_options_panel )
self._fetch_tags_even_if_hash_recognised_and_file_already_in_db = QW.QCheckBox( downloader_options_panel )
tt = 'I strongly recommend you uncheck this for normal use. When it is on, downloaders are inefficient!'
tt += os.linesep * 2
tt += 'This will force the client to download the metadata for a file even if it thinks it has visited its page before. Normally, hydrus will skip a URL in this case. It is useful to turn this on if you want to force a recheck of the tags in that page.'
self._fetch_tags_even_if_url_recognised_and_file_already_in_db.setToolTip( tt )
tt = 'I strongly recommend you uncheck this for normal use. When it is on, downloaders could be inefficient!'
tt += os.linesep * 2
tt += 'This will force the client to download the metadata for a file even if the gallery step has given a hash that the client thinks it recognises. Normally, hydrus will skip a URL in this case (although the hash-from-gallery case is rare, so this option rarely matters). This is mostly a debug complement to the url check option.'
self._fetch_tags_even_if_hash_recognised_and_file_already_in_db.setToolTip( tt )
tag_blacklist = tag_import_options.GetTagBlacklist()
message = 'If a file about to be downloaded has a tag on the site that this blacklist blocks, the file will not be downloaded and imported. If you want to stop \'scat\' or \'gore\', just type them into the list.'
@@ -2058,17 +2075,21 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
self._tag_blacklist_button = ClientGUITags.TagFilterButton( downloader_options_panel, message, tag_blacklist, only_show_blacklist = True )
self._tag_blacklist_button.setToolTip( 'A blacklist will ignore files if they have any of a certain list of tags.' )
self._tag_whitelist = list( tag_import_options.GetTagWhitelist() )
self._tag_whitelist_button = ClientGUICommon.BetterButton( downloader_options_panel, 'whitelist', self._EditWhitelist )
self._tag_whitelist_button.setToolTip( 'A whitelist will ignore files if they do not have any of a certain list of tags.' )
self._UpdateTagWhitelistLabel()
self._services_vbox = QP.VBoxLayout()
#
self._is_default.setChecked( tag_import_options.IsDefault() )
self._use_default_dropdown.SetValue( tag_import_options.IsDefault() )
self._fetch_tags_even_if_url_recognised_and_file_already_in_db.setChecked( tag_import_options.ShouldFetchTagsEvenIfURLKnownAndFileAlreadyInDB() )
self._fetch_tags_even_if_hash_recognised_and_file_already_in_db.setChecked( tag_import_options.ShouldFetchTagsEvenIfHashKnownAndFileAlreadyInDB() )
@@ -2079,12 +2100,6 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
#
rows = []
rows.append( ( 'rely on the appropriate default tag import options at the time of import: ', self._is_default ) )
gridbox = ClientGUICommon.WrapInGrid( default_panel, rows )
if not HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
st = ClientGUICommon.BetterStaticText( default_panel, label = 'Most of the time, you want to rely on the default tag import options!' )
@@ -2094,8 +2109,7 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
default_panel.Add( st, CC.FLAGS_EXPAND_PERPENDICULAR )
default_panel.Add( gridbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
default_panel.Add( self._load_default_options, CC.FLAGS_EXPAND_PERPENDICULAR )
default_panel.Add( self._use_default_dropdown, CC.FLAGS_EXPAND_PERPENDICULAR )
if not allow_default_selection:
@@ -2106,8 +2120,8 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
rows = []
rows.append( ( 'fetch tags even if url recognised and file already in db: ', self._fetch_tags_even_if_url_recognised_and_file_already_in_db ) )
rows.append( ( 'fetch tags even if hash recognised and file already in db: ', self._fetch_tags_even_if_hash_recognised_and_file_already_in_db ) )
rows.append( ( 'force page fetch even if url recognised and file already in db: ', self._fetch_tags_even_if_url_recognised_and_file_already_in_db ) )
rows.append( ( 'force page fetch even if hash recognised and file already in db: ', self._fetch_tags_even_if_hash_recognised_and_file_already_in_db ) )
rows.append( ( 'set file blacklist: ', self._tag_blacklist_button ) )
rows.append( ( 'set file whitelist: ', self._tag_whitelist_button ) )
@@ -2135,6 +2149,7 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
QP.AddToLayout( vbox, help_button, CC.FLAGS_ON_RIGHT )
QP.AddToLayout( vbox, default_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._load_default_options, CC.FLAGS_EXPAND_PERPENDICULAR )
QP.AddToLayout( vbox, self._specific_options_panel, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
vbox.addStretch( 1 )
@@ -2142,7 +2157,7 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
#
self._is_default.clicked.connect( self._UpdateIsDefault )
self._use_default_dropdown.currentIndexChanged.connect( self._UpdateIsDefault )
self._UpdateIsDefault()
@@ -2225,7 +2240,7 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
def _SetValue( self, tag_import_options: ClientImportOptions.TagImportOptions ):
self._is_default.setChecked( tag_import_options.IsDefault() )
self._use_default_dropdown.SetValue( tag_import_options.IsDefault() )
self._tag_blacklist_button.SetValue( tag_import_options.GetTagBlacklist() )
@@ -2267,10 +2282,12 @@ Please note that once you know what tags you like, you can (and should) set up t
def _UpdateIsDefault( self ):
is_default = self._is_default.isChecked()
is_default = self._use_default_dropdown.GetValue()
show_specific_options = not is_default
self._load_default_options.setVisible( show_specific_options )
self._specific_options_panel.setVisible( show_specific_options )
if not show_specific_options:
@@ -2295,7 +2312,7 @@ Please note that once you know what tags you like, you can (and should) set up t
def GetValue( self ) -> ClientImportOptions.TagImportOptions:
is_default = self._is_default.isChecked()
is_default = self._use_default_dropdown.GetValue()
if is_default:

View File

@@ -335,6 +335,9 @@ def CalculateMediaSize( media, zoom ):
media_width = int( round( zoom * original_width ) )
media_height = int( round( zoom * original_height ) )
media_width = max( 1, media_width )
media_height = max( 1, media_height )
return ( media_width, media_height )
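A quick sanity check of why the clamp matters, with hypothetical numbers: at an extreme zoom-out the rounded dimension hits zero, which would produce a degenerate media rect.

zoom = 0.001
original_width = 300

media_width = int( round( zoom * original_width ) )  # int( round( 0.3 ) ) == 0
media_width = max( 1, media_width )  # clamped back to a drawable 1px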
class Canvas( QW.QWidget ):
@@ -940,6 +943,45 @@ class Canvas( QW.QWidget ):
HG.client_controller.pub( 'canvas_new_zoom', self._canvas_key, self._current_zoom )
def _RescueOffScreenMediaWindow( self ):
size = self._GetMediaContainerSize()
my_rect = self.rect()
media_rect = QC.QRect( self._media_window_pos, size )
if not my_rect.intersects( media_rect ):
# up/down
height_buffer = min( media_rect.height(), self.height() // 5 )
if media_rect.bottom() < my_rect.top():
media_rect.moveBottom( my_rect.top() + height_buffer )
elif media_rect.top() > my_rect.bottom():
media_rect.moveTop( my_rect.bottom() - height_buffer )
# left/right
width_buffer = min( media_rect.width(), self.width() // 5 )
if media_rect.right() < my_rect.left():
media_rect.moveRight( my_rect.left() + width_buffer )
elif media_rect.left() > my_rect.right():
media_rect.moveLeft( my_rect.right() - width_buffer )
self._media_window_pos = media_rect.topLeft()
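To make the rescue arithmetic concrete, here is a toy run on one axis with plain integers (hypothetical numbers): an 800px-tall canvas and a 400px-tall media window that has drifted entirely above the viewport.

canvas_top = 0
canvas_height = 800
media_height = 400

# at most a fifth of the canvas height is allowed as overlap buffer
height_buffer = min( media_height, canvas_height // 5 )  # 160

# the media was above the canvas, so its bottom snaps just inside the top edge
new_media_bottom = canvas_top + height_buffer  # 160
new_media_top = new_media_bottom - media_height  # -240, leaving 160px of media visible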
def _ResetMediaWindowCenterPosition( self ):
if self._current_media is None:
@@ -1113,6 +1155,8 @@ class Canvas( QW.QWidget ):
self._current_zoom = new_zoom
self._RescueOffScreenMediaWindow()
HG.client_controller.pub( 'canvas_new_zoom', self._canvas_key, self._current_zoom )
'''
# rescue hack no longer needed as media center zoom is non-default

View File

@@ -1643,6 +1643,22 @@ class StaticImage( QW.QWidget ):
self._zoom = self.width() / self._media.GetResolution()[ 0 ]
# it is most convenient to have tiles that line up with the current zoom ratio
# 768 is a convenient size for meaty GPU blitting, but as a number it doesn't make for nice multiplication
# a 'nice' size is one that divides nicely by our zoom, so that integer translations between canvas and native res aren't losing too much in the float remainder
if self.width() == 0 or self.height() == 0:
tile_dimension = 0
else:
tile_dimension = round( ( 768 // self._zoom ) * self._zoom )
self._canvas_tile_size = QC.QSize( tile_dimension, tile_dimension )
self._canvas_tiles = {}
self._is_rendered = False
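The tile_dimension line rewards a worked example. At 300% zoom, 768 // 3.0 gives 256.0 native pixels, and multiplying back by the zoom yields a 768px canvas tile that is an exact multiple of 3, so canvas-to-native translation lands on integer boundaries. A minimal sketch of the same arithmetic (the helper name is hypothetical):

def nice_tile_dimension( zoom: float, target: int = 768 ) -> int:
    # pick a canvas tile size near `target` that is (nearly) an integer multiple of the zoom
    if zoom == 0:
        return 0
    return round( ( target // zoom ) * zoom )

print( nice_tile_dimension( 3.0 ) )   # 768 == 256 native px * 3.0
print( nice_tile_dimension( 0.37 ) )  # 768, from 2075 native px * 0.37 = 767.75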
@@ -1714,7 +1730,8 @@ class StaticImage( QW.QWidget ):
canvas_height = my_height % normal_canvas_height
native_width = canvas_width * self._zoom
canvas_width = max( 1, canvas_width )
canvas_height = max( 1, canvas_height )
# if we are the last row/column our size is not this!
@@ -1724,6 +1741,16 @@ class StaticImage( QW.QWidget ):
native_clip_rect = QC.QRect( canvas_topLeft / self._zoom, canvas_size / self._zoom )
if native_clip_rect.width() == 0:
native_clip_rect.setWidth( 1 )
if native_clip_rect.height() == 0:
native_clip_rect.setHeight( 1 )
return ( native_clip_rect, canvas_clip_rect )
@@ -1737,6 +1764,11 @@ class StaticImage( QW.QWidget ):
def _GetTileCoordinatesInView( self, rect: QC.QRect ):
if self.width() == 0 or self.height() == 0:
return []
topLeft_tile_coordinate = self._GetTileCoordinateFromPoint( rect.topLeft() )
bottomRight_tile_coordinate = self._GetTileCoordinateFromPoint( rect.bottomRight() )

View File

@@ -977,6 +977,11 @@ class MediaList( object ):
pass
def _RecalcAfterMediaRemove( self ):
self._RecalcHashes()
def _RecalcHashes( self ):
self._hashes = set()
@@ -1042,7 +1047,7 @@ class MediaList( object ):
self._sorted_media.remove_items( singleton_media.union( collected_media ) )
self._RecalcHashes()
self._RecalcAfterMediaRemove()
def AddMedia( self, new_media ):
@@ -2000,6 +2005,13 @@ class MediaCollection( MediaList, Media ):
def _RecalcAfterMediaRemove( self ):
MediaList._RecalcAfterMediaRemove( self )
self._RecalcArchiveInbox()
def _RecalcArchiveInbox( self ):
self._archive = True in ( media.HasArchive() for media in self._sorted_media )
@@ -2026,6 +2038,38 @@ class MediaCollection( MediaList, Media ):
self._file_viewing_stats_manager = ClientMediaManagers.FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime )
def _RecalcHashes( self ):
MediaList._RecalcHashes( self )
all_locations_managers = [ media.GetLocationsManager() for media in self._sorted_media ]
current_to_timestamps = {}
deleted_to_timestamps = {}
for service_key in HG.client_controller.services_manager.GetServiceKeys( HC.FILE_SERVICES ):
current_timestamps = [ timestamp for timestamp in ( locations_manager.GetCurrentTimestamp( service_key ) for locations_manager in all_locations_managers ) if timestamp is not None ]
if len( current_timestamps ) > 0:
current_to_timestamps[ service_key ] = max( current_timestamps )
deleted_timestamps = [ timestamps for timestamps in ( locations_manager.GetDeletedTimestamps( service_key ) for locations_manager in all_locations_managers ) if timestamps is not None and timestamps[0] is not None ]
if len( deleted_timestamps ) > 0:
deleted_to_timestamps[ service_key ] = max( deleted_timestamps, key = lambda ts: ts[0] )
pending = HydrusData.MassUnion( [ locations_manager.GetPending() for locations_manager in all_locations_managers ] )
petitioned = HydrusData.MassUnion( [ locations_manager.GetPetitioned() for locations_manager in all_locations_managers ] )
self._locations_manager = ClientMediaManagers.LocationsManager( current_to_timestamps, deleted_to_timestamps, pending, petitioned )
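On the current-file side, the merge above is simply 'latest timestamp wins' per service across the collection's members. A toy reduction with flat dicts standing in for the locations managers (all names and numbers hypothetical):

per_media_current = [ { 'service_a' : 100 }, { 'service_a' : 150, 'service_b' : 50 } ]

current_to_timestamps = {}

for service_key in ( 'service_a', 'service_b' ):
    timestamps = [ d[ service_key ] for d in per_media_current if service_key in d ]
    if len( timestamps ) > 0:
        current_to_timestamps[ service_key ] = max( timestamps )

print( current_to_timestamps )  # {'service_a': 150, 'service_b': 50}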
def _RecalcInternals( self ):
self._RecalcHashes()
@@ -2046,15 +2090,6 @@ class MediaCollection( MediaList, Media ):
self._has_notes = True in ( media.HasNotes() for media in self._sorted_media )
all_locations_managers = [ media.GetLocationsManager() for media in self._sorted_media ]
current_to_timestamps = { service_key : None for service_key in HydrusData.MassUnion( [ locations_manager.GetCurrent() for locations_manager in all_locations_managers ] ) }
deleted_to_timestamps = { service_key : ( None, None ) for service_key in HydrusData.MassUnion( [ locations_manager.GetDeleted() for locations_manager in all_locations_managers ] ) }
pending = HydrusData.MassUnion( [ locations_manager.GetPending() for locations_manager in all_locations_managers ] )
petitioned = HydrusData.MassUnion( [ locations_manager.GetPetitioned() for locations_manager in all_locations_managers ] )
self._locations_manager = ClientMediaManagers.LocationsManager( current_to_timestamps, deleted_to_timestamps, pending, petitioned )
self._RecalcRatings()
self._RecalcFileViewingStats()
@@ -2093,6 +2128,16 @@ class MediaCollection( MediaList, Media ):
self._RecalcInternals()
def GetCurrentTimestamp( self, service_key: bytes ) -> typing.Optional[ int ]:
return self._locations_manager.GetCurrentTimestamp( service_key )
def GetDeletedTimestamps( self, service_key: bytes ) -> typing.Tuple[ typing.Optional[ int ], typing.Optional[ int ] ]:
return self._locations_manager.GetDeletedTimestamps( service_key )
def GetDisplayMedia( self ):
first = self._GetFirst()
@@ -2209,16 +2254,6 @@ class MediaCollection( MediaList, Media ):
return self._tags_manager
def GetCurrentTimestamp( self, service_key: bytes ) -> typing.Optional[ int ]:
return None
def GetDeletedTimestamps( self, service_key: bytes ) -> typing.Tuple[ typing.Optional[ int ], typing.Optional[ int ] ]:
return ( None, None )
def HasArchive( self ):
return self._archive

View File

@@ -26,7 +26,7 @@ class NetworkSessionManagerSessionContainer( HydrusSerialisable.SerialisableBase
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_SESSION_MANAGER_SESSION_CONTAINER
SERIALISABLE_NAME = 'Session Manager Session Container'
SERIALISABLE_VERSION = 1
SERIALISABLE_VERSION = 2
def __init__( self, name, network_context = None, session = None ):
@@ -55,31 +55,58 @@ class NetworkSessionManagerSessionContainer( HydrusSerialisable.SerialisableBase
serialisable_network_context = self.network_context.GetSerialisableTuple()
pickled_session_hex = pickle.dumps( self.session ).hex()
self.session.cookies.clear_session_cookies()
return ( serialisable_network_context, pickled_session_hex )
pickled_cookies_hex = pickle.dumps( self.session.cookies ).hex()
return ( serialisable_network_context, pickled_cookies_hex )
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
( serialisable_network_context, pickled_session_hex ) = serialisable_info
( serialisable_network_context, pickled_cookies_hex ) = serialisable_info
self.network_context = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_network_context )
self._InitialiseEmptySession()
try:
self.session = pickle.loads( bytes.fromhex( pickled_session_hex ) )
cookies = pickle.loads( bytes.fromhex( pickled_cookies_hex ) )
self.session.cookies = cookies
except:
# a new version of requests messed this up lad, so reset
self._InitialiseEmptySession()
HydrusData.Print( "Could not load and set cookies for session {}".format( self.network_context ) )
self.session.cookies.clear_session_cookies()
def _UpdateSerialisableInfo( self, version, old_serialisable_info ):
if version == 1:
( serialisable_network_context, pickled_session_hex ) = old_serialisable_info
try:
session = pickle.loads( bytes.fromhex( pickled_session_hex ) )
except:
session = requests.Session()
pickled_cookies_hex = pickle.dumps( session.cookies ).hex()
new_serialisable_info = ( serialisable_network_context, pickled_cookies_hex )
return ( 2, new_serialisable_info )
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_SESSION_MANAGER_SESSION_CONTAINER ] = NetworkSessionManagerSessionContainer
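The practical upshot of the v2 container: only the cookie jar is pickled, so a future requests release that changes Session internals can no longer invalidate saved sessions. A standalone round-trip sketch, assuming nothing beyond requests cookie jars staying picklable:

import pickle

import requests

session = requests.Session()

# persist just the cookies, as the v2 serialisable info does
pickled_cookies_hex = pickle.dumps( session.cookies ).hex()

# later: restore them onto a freshly constructed session
new_session = requests.Session()
new_session.cookies = pickle.loads( bytes.fromhex( pickled_cookies_hex ) )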
class NetworkSessionManager( HydrusSerialisable.SerialisableBase ):

View File

@@ -81,7 +81,7 @@ options = {}
# Misc
NETWORK_VERSION = 20
SOFTWARE_VERSION = 438
SOFTWARE_VERSION = 439
CLIENT_API_VERSION = 16
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@@ -394,7 +394,7 @@ def GenerateThumbnailBytesNumPy( numpy_image, mime ):
return GenerateThumbnailBytesPIL( pil_image, mime )
( im_y, im_x, depth ) = numpy_image.shape
( im_height, im_width, depth ) = numpy_image.shape
if depth == 4:

View File

@@ -88,6 +88,9 @@ def boot():
print( 'This was version ' + str( HC.SOFTWARE_VERSION ) )
input()
if sys.stdin.isatty():
input( 'Press any key to exit.' )

View File

@@ -13,9 +13,10 @@ pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide2>=5.15.0
PySocks>=1.7.0
python-mpv>=0.4.5
python-mpv==0.4.5
PyYAML>=5.0.0
QtPy>=1.9.0
urllib3==1.25.11
requests==2.23.0
Send2Trash>=1.5.0
service-identity>=18.1.0

View File

@@ -0,0 +1,3 @@
pyoxidizer
mock>=4.0.0
httmock>=1.4.0

requirements_ubuntu.txt Normal file
View File

@@ -0,0 +1,24 @@
beautifulsoup4>=4.0.0
chardet>=3.0.4
cloudscraper>=1.2.33
html5lib>=1.0.1
lxml>=4.5.0
lz4>=3.0.0
nose>=1.3.0
numpy>=1.16.0
opencv-python-headless>=4.0.0
Pillow>=6.0.0
psutil>=5.0.0
pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide2>=5.15.0
PySocks>=1.7.0
python-mpv==0.4.5
PyYAML>=5.0.0
QtPy>=1.9.0
urllib3==1.25.11
requests==2.23.0
Send2Trash>=1.5.0
service-identity>=18.1.0
six>=1.14.0
Twisted>=20.3.0

View File

@@ -0,0 +1,3 @@
PyInstaller==3.5
mock>=4.0.0
httmock>=1.4.0

requirements_windows.txt Normal file
View File

@@ -0,0 +1,24 @@
beautifulsoup4>=4.0.0
chardet>=3.0.4
cloudscraper>=1.2.33
html5lib>=1.0.1
lxml>=4.5.0
lz4>=3.0.0
nose>=1.3.0
numpy>=1.16.0
opencv-python-headless>=4.0.0
Pillow>=6.0.0
psutil>=5.0.0
pylzma>=0.5.0
pyOpenSSL>=19.1.0
PySide2>=5.15.0
PySocks>=1.7.0
python-mpv==0.5.2
PyYAML>=5.0.0
QtPy>=1.9.0
urllib3==1.25.11
requests==2.23.0
Send2Trash>=1.5.0
service-identity>=18.1.0
six>=1.14.0
Twisted>=20.3.0

View File

@@ -0,0 +1,6 @@
PyInstaller==4.2
PyWin32
pypiwin32
pywin32-ctypes
mock>=4.0.0
httmock>=1.4.0

View File

@@ -22,7 +22,7 @@ jobs:
- name: Build Hydrus
run: |
cd $GITHUB_WORKSPACE
cp static/build_files/pyoxidizer.bzl pyoxidizer.bzl
cp static/build_files/macos_pyoxidizer.bzl pyoxidizer.bzl
basename $(rustc --print sysroot) | sed -e "s/^stable-//" > triple.txt
pyoxidizer build --release
cd build/$(head -n 1 triple.txt)/release

View File

@@ -23,7 +23,7 @@ def make_client(dist, policy):
config=python_config,
)
client.add_python_resources(client.pip_install(["--prefer-binary", "-r", "requirements.txt"]))
client.add_python_resources(client.pip_install(["--prefer-binary", "-r", "requirements_macos.txt"]))
return client

Binary file not shown. (new image, 1.7 KiB)