Version 358

commit 916b8f3c94 (parent aa249c8a2e)

@@ -8,56 +8,86 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 358</h3></li>
<ul>
<li>duplicates:</li>
<li>moved better/worse/same quality duplicates relationships to the new 'king' group-based model. rather than tracking every relationship, duplicates are now stored in groups with a single 'best' file</li>
<li>as a result, duplicate relationships are now transitive! saying that one king is duplicate to another will merge groups. the 'better' king is the new king, and 'same quality' kings choose one of the kings pseudorandomly. advanced exceptions: saying that a king is better than a basic member of another group or saying that two members are same quality is still valid but will simply 'poach' the non-king member from the other group in order to ensure the wrong king doesn't end up on top in the eventual merge. saying KingA is same quality as MemberB will merge the groups with KingB as the new king (since it is presumably same/better quality to all members of A)</li>
<li>the thumbnail right-click 'duplicates' entry is now renamed to 'file relationships' and is no longer advanced mode only. the 'find similar files' entry is folded into this</li>
<li>the thumbnail 'file relationships' menu now shows a simple 'duplicates' count rather than the old messy better/worse/equal. it will show all the members of a duplicates group when clicked. the menu also notes if the focused file is the best file of its group. if it is not, you will get the option to show the best file or make the focused file the best</li>
<li>as a result, it is now much simpler to view a group of duplicates and overrule a 'best quality' member as needed</li>
<li>added the 'media' shortcut 'duplicate_media_set_focused_king' to shortcut setting a 'best quality' file</li>
<li>the system:num duplicate relationships now has the simpler 'duplicates' entry, to search on the size of the entire group. searching for kings/not kings will come soon</li>
<li>due to the new duplicate transitivity rules, potential pairs are now eliminated at a much faster rate!</li>
<li>setting duplicate relationships will overrule false positive or alternate relationships already in place</li>
<li>manually setting alternate relationships to more than two thumbnails at once will now set each file as alternate to every other file in the selection, completely eliminating potential pairs within the group. if you try to do this to large groups of files you will get a longer yes/no confirmation message just to make sure you aren't overwriting some potential dupes by accident</li>
<li>all existing better/worse/same relationships will be converted to the new group storage in this update, with appropriate kings determined. potential pair queue counts will be reduced accordingly, and the temporary alternate/duplicate confusion from the alternates update will be auto-resolved by merging truly duplicate 'alternates' together</li>
<li>fleshed out the duplicate test code significantly to handle the new dupe groups and their interactions with the recent false positive and alternates changes</li>
<li>refactored some db test code into separate client/server/duplicates files and cleaned up dupe tests readability</li>
<li>potential pairs are now the only component of the new system still on the old pairs system. the duplicate filter will still serve up some inefficient (i.e. non-king) comparisons</li>
<li>the final large data storage overhaul of the duplicates big job is done--potential duplicate information is now stored more sensibly and efficiently. potential pair information is now stored between duplicate file groups, rather than files themselves. when duplicate file groups are merged, or alternate or false positive relationships set, potentials are merged and culled appropriately</li>
<li>your existing potential data will be updated. the current potential pairs queue size will shrink as duplicate potential relationships are merged</li>
<li>the duplicate filter now presents file kings as comparison files when possible, increasing pair difference and decision value</li>
<li>potential pair information is now stored with the 'distance' between the two files as found by the similar-files search system. the duplicate filter will serve files with closer distance first, which increases decision value by front-loading likely duplicates instead of alts. distance values for existing potential pair info are estimated on update, so if you have done search distance 2 or greater and would like to fill in this data accurately to get closer potentials first, you might like to reset your potential duplicates under the cog icon (bear in mind this reset will schedule a decent whack of CPU for your idle maintenance time)</li>
<li>setting alternate relationship on a pair is now fixed more concretely, ensuring that in various search expansions or resets that the same pair will not come up again. this solves some related problems users have had trying to 'fix' larger alternate groups in place--you may see your alternates compared one last time, but that should be the final go. these fixed relationships are merged as intra-alternate group members merge due to duplicate-setting events</li>
<li>a variety of potential duplicates code has been streamlined based on the new duplicate group relationship</li>
<li>improved how a second-best king representative of a group is selected in various file relationship fetching jobs when the true king is not permitted by search domain</li>
<li>one critical part of the new potential duplicates system is more complicated. if you experience much slower searches or count retrievals IRL, please let me know your details</li>
<li>expanded duplicates unit tests to test potential counts for all tested situations</li>
<li>fixed a bug where alternate group merging would not cull now-invalid false-positive potential pairs</li>
<li>the rest:</li>
<li>updated the default pixiv parser to work with their new format--thank you to a user for providing this fix</li>
<li>fixed the issue where mouse scroll events were not being processed by the main viewer canvas when it did not have focus</li>
<li>file page parsers that produce multiple urls through subsidiary page parsers now correctly pass down associated urls and tags to their child file import items</li>
<li>updated to wx 4.0.6 on all built platforms--looks like a bunch of bug fixes, so fingers-crossed this improves some stability and jank</li>
<li>updated the recent server access-key-arg-parsing routine to check access from the header before parsing args, which fixes an issue with testing decompression bomb permission on file POST requests on the file repository. generally improved code here to deal more gracefully with failures</li>
<li>the repositories now max out at 1000 count when fetching pending petition counts (speeding up access when there are large queues)</li>
<li>the repositories now fetch petitions much faster when there are large queues</li>
<li>frames and dialogs will be slightly more aggressive about ensuring their parents get focus back when they are closed (rather than the top level main gui, which sometimes happens due to window manager weirdness)</li>
<li>rewrote a bad old legacy method of refocusing the manage tags panel that kicks in when the 'open manage tags' action is processed by the media viewer canvas but the panel is already open</li>
<li>hitting 'refresh account' on a paused service now gives a better immediate message rather than failing after delay on a confusing 'bad login' error</li>
<li>improved login errors' text to specify the exact problem raised by the login manager</li>
<li>fixed a problem in the duplicates page when a status update is called before the initial db status fetch is complete</li>
<li>the manage tag siblings panel now detects if the pair you wish to add connects to a loop already in the database (which is a rare but possible case). previously it would hang indefinitely! it now cancels the add, communicates the tags in the loop, and recommends you break it manually</li>
<li>added a link to https://github.com/cravxx/hydrus.js , a node.js module that plugs into the client api, to the help</li>
<li>a variety of user-started network jobs such as refreshing account and testing a server connection under manage services now only attempt connection once (to fail faster as the user waits)</li>
<li>the 'test address' job under manage services is now asynchronous and will not hang the ui while it waits for a response</li>
<li>fixed some unstable thread-to-wx code under the 'test access key' job under manage services</li>
<li>improved some file handling to ensure open files are closed more promptly in certain circumstances</li>
<li>fixed some unstable thread-to-wx communication in the ipfs review services panel</li>
<li>improved the accuracy of the network engine's 'incomplete download' test and bandwidth reporting to work with exact byte counts when available, regardless of content encoding. downloads that provide too few bytes in ways that were previously not caught will be reattempted according to the normal connection reattempt rules. these network fixes may solve some broken jpegs and json some users have seen from unreliable servers</li>
<li>fixed watcher entries in the watcher page list not reporting their file and check download status as they work (as the gallery downloader does)</li>
<li>the client api will now deliver cleaner 400 errors when a given url argument is empty or otherwise fails to normalise (previously it was giving 500s)</li>
<li>misc cleanup</li>
</ul>
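The 'king' group-based model described above can be sketched in a few lines. This is a minimal illustration, not hydrus's actual implementation: it only shows the core idea that duplicates live in groups with one best file, and that setting 'better than' between two files merges their groups transitively (the real system also handles 'same quality', member poaching, and alternate/false-positive overrules).

```python
class DuplicateGroups:
    """Toy sketch of group-based duplicates: each file belongs to exactly one
    group, and each group has a single 'king' (best quality) file."""

    def __init__(self):
        self._file_to_group = {}

    def _group(self, f):
        # a file starts as the king of its own one-member group
        return self._file_to_group.setdefault(f, {'king': f, 'members': {f}})

    def set_better(self, better, worse):
        """Declare 'better' beats 'worse': merge the two groups, with the
        better file's king ruling the merged group."""
        a = self._group(better)
        b = self._group(worse)
        if a is b:
            return
        a['members'] |= b['members']
        for f in b['members']:
            self._file_to_group[f] = a

    def king_of(self, f):
        return self._group(f)['king']
```

Because relationships are stored per group rather than per pair, declaring one new relationship between kings resolves every implied pair at once, which is why the changelog notes potential pairs are now eliminated much faster.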
<li><h3>version 357</h3></li>
<ul>
<li>client api:</li>
<li>the client api can now receive the access key through a GET or POST parameter rather than the header</li>
<li>the client api now supports GET /session_key, which provides a temporary key that gives the same access as its permanent access key with the Hydrus-Client-API-Session-Key name through header or GET/POST param. it expires after 24 hours of inactivity or if the client is restarted</li>
<li>the GET /manage_pages/get_pages call now returns the unique 'page_key' identifier that will be useful in future page management when multiple pages share a name</li>
<li>the POST /add_urls/add_url command now takes 'destination_page_key' to exactly specify which page you would like a URL to end up on. if the page is not found, or it is the incorrect type, the standard page selection/creation rules will apply</li>
<li>cleaned up some serverside request processing code</li>
<li>cleaned up some misc client api permission checking code</li>
<li>updated client unit tests to check the new changes</li>
<li>updated client api help to reflect the new changes</li>
<li>cleaned up some GET and POST parameter parsing</li>
<li>client api version is now 8</li>
<li>.</li>
<li>shortcut and hover window fixes:</li>
<li>moved the canvas shortcut processing code more towards the new shortcut system</li>
<li>the OS X shortcut-in-media-viewer issue, which was being boshed in a similar way to the main gui last week, should now be fixed</li>
<li>when the hover windows have focus, they now pass shortcuts up to the canvas parent more reliably</li>
<li>removed a legacy menu highlight-tracking system that was malfunctioning and generally throwing a slow-memory-leaking wrench in several places, particularly some non-Windows situations</li>
<li>the 'menubar is open' test code is now only active for Windows. the other platforms have mixed reliability with menubar open/close events</li>
<li>some related OS X hover-window flickering and hiding-under-the-main-page problems (caused by the client wrongly thinking menus were open) are also fixed</li>
<li>some hover window flicker on certain focus changes due to clicking focus windows should be fixed</li>
<li>hover windows now try to size themselves a little better on init, which reduces some initial flicker or false-positive single-frame display on some systems</li>
<li>extended the hover report mode to report some 'ideal' pos/size info as well</li>
<li>under file->shortcuts, custom shortcuts are now hidden for non-advanced-mode users</li>
<li>.</li>
<li>the rest:</li>
<li>fixed the issue where many clipboard-watcher-caught URLs that did not match were producing false-positive 'could not generate new page for that URL' error popups</li>
<li>the clipboard text-fetcher now tests against incompatible clipboard types (like a screenshot) better, and all instances of text fetching now report errors more gracefully and with more information</li>
<li>fixed the unusual OS X issue where many shortcuts were not being processed after client boot until the top menubar was opened and closed. a variety of other blocking-while-menubar-is-open issues that were false-positive misfiring are now fixed as well, please let me know if you still have trouble here</li>
<li>the file menu now has an 'exit and force shutdown maintenance' option to force-run outstanding maintenance jobs</li>
<li>when shutdown maintenance work is going on, the shutdown splash screen now has a 'stop shutdown maintenance' button!</li>
<li>cleaned up some file maintenance manager maintenance locking and shutdown cancel logic</li>
<li>moved all the idle-mode maintenance checks to a new system that explicitly defines idle/shutdown/forced maintenance work and tests those states in a unified manner, checking idle mode and the new splash cancel button status and so on more reliably. a lot of maintenance should cancel out quicker when appropriate</li>
<li>misc shutdown logic cleanup</li>
<li>added a 'file maintenance' option to the database->maintenance menu that forces the new file maintenance manager to run its queue. it'll make a little popup as it works, or a note that no work is due</li>
<li>the 'regenerate' thumbnail menu is also available to all users</li>
<li>jpeg quality estimates are now available for all users in the duplicate filter. they only display when the two jpegs' quality labels differ</li>
<li>the jpeg quality estimator now handles some unusual jpegs that load with empty quantization table arrays</li>
<li>the duplicate filter now handles bad jpeg quality estimations gracefully</li>
<li>cleaned up some ffmpeg communication code</li>
<li>the ffmpeg debug text that spawns on a help->about call that fails to discover ffmpeg version information now prints stderr output as well. if you have been hit by this, please give it another go and let me know what you get</li>
<li>the same ffmpeg 'no response' error on file parse now pops up and prints some debug info and returns a better error</li>
<li>dialogs and windows on the new panel system now support a new pre-close tidying system</li>
<li>the manage tags dialog and window will now cancel any pending large tag autocomplete queries on close</li>
<li>regular gui pages now support a new pre-close tidying system</li>
<li>search pages will now cancel any pending search results loading or tag autocomplete queries on close</li>
<li>improved reliability of the popup message manager chasing the main gui when it is sent to another screen by a keyboard shortcut (such as shift+win+arrow on Windows). it should work now if the mouse cursor is in either window. please let me know if this causes trouble for virtual display navigation</li>
<li>the network engine now waits significantly longer--60s--on connection errors before trying again, and with every failed attempt will wait n times longer again. when in this waiting state, a manual user cancel command cancels it out faster</li>
<li>I believe I have fixed/improved a situation where media viewer hover windows would sometimes disappear immediately after appearing on some Linux window managers</li>
<li>improved hover window report mode to state more focus info in case the above is insufficient</li>
<li>to better link the two requests and consume bandwidth under strict rules more precisely, the override bandwidth rule that kicks in when a file page has a single file is now 3 seconds instead of 30</li>
<li>updated options->connection page to specify that 'socks4a'/'socks5h' is needed to force remote dns resolution</li>
<li>sped up tag parents initialisation</li>
<li>repositories now group tag sibling and parent petitions by the parent/better tag's namespace</li>
<li>removed some old network 'death time' code that is no longer useful and was interfering with heavy petition processing</li>
<li>the log now flushes itself to disk every 60s rather than 300s</li>
<li>misc fixes and cleanup</li>
<li>the popup message toaster now always shows its 'dismiss all' summary bar whenever any messages are being displayed. the summary bar now also has a ▼/▲ button to collapse/expand its messages!</li>
<li>added duplicate comparison score options (under options->duplicates) for the new jpeg quality estimator</li>
<li>fixed the default duplicate comparison score values, which appeared to be reversed for higher vs much higher--they will be reset to these new defaults on update, so recheck them if you prefer different</li>
<li>in manage tag siblings and parents, the filename tagging dialog, and some misc options panels, tag autocomplete input controls now have 'paste' buttons to make entering many results much easier</li>
<li>to reduce update flicker, the downloader and watcher pages do not list seconds in their 'added' column ('12 minutes 24 seconds ago' is now '12 minutes ago')</li>
<li>improved clipboard access cleanup on in-clipboard errors, which was sometimes leading to error popups or clipboard lockup</li>
<li>rather than the simple 'yes', the review bandwidth usage dialog now puts the waiting estimate (like '12 minutes 50 seconds') in the 'blocked?' column</li>
<li>improved external program launch code for non-Windows to remove hydrus-specific LD_LIBRARY_PATH completely when no OS default exists to restore it to. this should fix ffmpeg connection for certain installs</li>
<li>fixed a rare bug when initial media results of a page failed to load due to a subset of unexpectedly unfetchable file records</li>
<li>gave the rare 'ui freezup on dialog close' event yet another pass. closing via escape key should now be immune to this</li>
<li>'remote' files that were once in the client but since deleted now have the 'trashed' icon and will state so on their right-click info summary lines</li>
<li>fixed various instances where selection-from-list dialogs were failing to sort their list based on underlying data object incomparability. an example of this was when selecting which queries to pull from a separating subscription</li>
<li>on the edit parser panels, fetching test data successfully via the quick button or the manual URL entry will now set that URL in the example parsing context</li>
<li>on the edit parser panels, the subsidiary page parser's separation formula now launches with the correct example data (the original data from the parent dialog, rather than the post-separated data) on which to test separation. this should nest correctly for multiple subsidiary page parsers</li>
<li>to reduce server load spikes, clientside petition processing now approves very large mapping and file petitions (such as a petition to delete one tag from 50k files) as a sequence of smaller chunks</li>
</ul>
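The session key flow described in the version 357 client api items can be illustrated with a plain stdlib request. This is a hedged sketch: the default host/port and the all-zeroes access key are placeholders, and it only builds the request object rather than contacting a running client.

```python
import urllib.request

def build_session_key_request(host='127.0.0.1', port=45869, access_key='0' * 64):
    """Build a GET /session_key request for the client api.

    Per version 357, the access key may be sent via the
    Hydrus-Client-API-Access-Key header (as here) or, alternatively,
    as a GET/POST parameter.
    """
    req = urllib.request.Request('http://{}:{}/session_key'.format(host, port))
    req.add_header('Hydrus-Client-API-Access-Key', access_key)
    return req
```

A successful response returns JSON containing the temporary session key, which can then be supplied as Hydrus-Client-API-Session-Key on later calls until it expires (24 hours of inactivity or client restart).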
<li><h3>version 356</h3></li>
<ul>
@@ -18,6 +18,7 @@
<li><a href="https://gitgud.io/prkc/hydrus-companion">https://gitgud.io/prkc/hydrus-companion</a> - Hydrus Companion, a browser extension for hydrus.</li>
<li><a href="https://gitgud.io/prkc/dolphin-hydrus-actions">https://gitgud.io/prkc/dolphin-hydrus-actions</a> - Adds Hydrus right-click context menu actions to Dolphin file manager.</li>
<li><a href="https://gitlab.com/cryzed/hydrus-api">https://gitlab.com/cryzed/hydrus-api</a> - A python module that talks to the API.</li>
+<li><a href="https://github.com/cravxx/hydrus.js">https://github.com/cravxx/hydrus.js</a> - A node.js module that talks to the API.</li>
</ul>
<h3>API</h3>

<p>If the API returns anything but actual files on 200, it should always return JSON. On 4XX and 5XX, assume it will return plain text, sometimes a raw traceback. You'll typically get 400 for a missing parameter, 401/403/419 for missing/insufficient/expired access, and 500 for a real deal serverside error.</p>
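The status-code convention above maps directly to a small response handler. A minimal sketch (the helper name and error type are illustrative, not part of the API): parse JSON on a non-file 200, and treat any other status as plain text.

```python
import json

def interpret_api_response(status, body):
    """Apply the API's documented convention: 200 bodies (other than actual
    files) are JSON; 4XX/5XX bodies are plain text, possibly a traceback."""
    if status == 200:
        return json.loads(body)
    reasons = {
        400: 'missing parameter',
        401: 'missing access',
        403: 'insufficient access',
        419: 'expired access',
        500: 'serverside error',
    }
    raise RuntimeError('{} ({}): {}'.format(status, reasons.get(status, 'error'), body))
```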
@@ -792,4 +793,4 @@
</div>
</div>
</body>
</html>
@@ -191,6 +191,10 @@ def CollapseTagSiblingPairs( groups_of_pairs ):
    
    return siblings
    
+def DeLoopTagSiblingPairs( groups_of_pairs ):
+    
+    pass
+    
def LoopInSimpleChildrenToParents( simple_children_to_parents, child, parent ):
    
    potential_loop_paths = { parent }
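DeLoopTagSiblingPairs is still a stub in the hunk above, but the version 358 changelog describes the behaviour the sibling-loop check needs: detect when a new old_tag -> new_tag pair would connect into a cycle, instead of hanging. A minimal sketch of that check, assuming each tag maps to at most one sibling target (the function name is illustrative, not hydrus's):

```python
def would_create_loop(existing_pairs, old_tag, new_tag):
    """Return True if adding the sibling pair (old_tag -> new_tag) would
    create or connect to a loop in the existing sibling mapping."""
    mapping = dict(existing_pairs)  # tag -> its single sibling target
    seen = {old_tag}
    current = new_tag
    # walk the sibling chain from new_tag; bounded because we stop on repeats
    while current in mapping:
        if current in seen:
            return True  # the chain itself already contains a loop
        seen.add(current)
        current = mapping[current]
    return current in seen  # chain ends back at a tag we have visited
```

The `if current in seen` guard inside the walk is what handles the "loop already in the database" case the changelog mentions: a pre-existing cycle no longer causes an infinite walk.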
include/ClientDB.py (1013 changes): file diff suppressed because it is too large.
@@ -1192,8 +1192,6 @@ class Canvas( wx.Window ):
        self._dirty = True
        self._closing = False
        
-        self._manage_tags_panel = None
-        
        self._service_keys_to_services = {}
        
        self._current_media = None

@@ -1506,9 +1504,9 @@ class Canvas( wx.Window ):
        return self._GetShowAction( self._current_media ) not in ( CC.MEDIA_VIEWER_ACTION_SHOW_OPEN_EXTERNALLY_BUTTON, CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW_ON_ACTIVATION_OPEN_EXTERNALLY, CC.MEDIA_VIEWER_ACTION_DO_NOT_SHOW )
        
    
-    def _IShouldCatchShortcutEvent( self ):
+    def _IShouldCatchShortcutEvent( self, event = None ):
        
-        return ClientGUIShortcuts.IShouldCatchShortcutEvent( self, child_tlp_classes_who_can_pass_up = ( ClientGUIHoverFrames.FullscreenHoverFrame, ) )
+        return ClientGUIShortcuts.IShouldCatchShortcutEvent( self, event = event, child_tlp_classes_who_can_pass_up = ( ClientGUIHoverFrames.FullscreenHoverFrame, ) )
        
    
    def _MaintainZoom( self, previous_media ):

@@ -1638,27 +1636,33 @@ class Canvas( wx.Window ):
            return
            
        
-        if self._manage_tags_panel:
-            
-            self._manage_tags_panel.SetFocus()
-            
-        else:
-            
-            # take any focus away from hover window, which will mess up window order when it hides due to the new frame
-            self.SetFocus()
-            
-            title = 'manage tags'
-            frame_key = 'manage_tags_frame'
-            
-            manage_tags = ClientGUITopLevelWindows.FrameThatTakesScrollablePanel( self, title, frame_key )
-            
-            panel = ClientGUITags.ManageTagsPanel( manage_tags, self._file_service_key, ( self._current_media, ), immediate_commit = True, canvas_key = self._canvas_key )
-            
-            manage_tags.SetPanel( panel )
-            
-            self._manage_tags_panel = panel
+        for child in self.GetChildren():
+            
+            if isinstance( child, ClientGUITopLevelWindows.FrameThatTakesScrollablePanel ):
+                
+                panel = child.GetPanel()
+                
+                if isinstance( panel, ClientGUITags.ManageTagsPanel ):
+                    
+                    panel.SetFocus()
+                    
+                    return
+                    
+                
+            
+        
+        # take any focus away from hover window, which will mess up window order when it hides due to the new frame
+        self.SetFocus()
+        
+        title = 'manage tags'
+        frame_key = 'manage_tags_frame'
+        
+        manage_tags = ClientGUITopLevelWindows.FrameThatTakesScrollablePanel( self, title, frame_key )
+        
+        panel = ClientGUITags.ManageTagsPanel( manage_tags, self._file_service_key, ( self._current_media, ), immediate_commit = True, canvas_key = self._canvas_key )
+        
+        manage_tags.SetPanel( panel )
        
    
    def _ManageURLs( self ):

@@ -2100,7 +2104,7 @@ class Canvas( wx.Window ):
    def EventCharHook( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            shortcut = ClientGUIShortcuts.ConvertKeyEventToShortcut( event )

@@ -3722,7 +3726,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
    def EventCharHook( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            ( modifier, key ) = ClientGUIShortcuts.ConvertKeyEventToSimpleTuple( event )

@@ -3761,7 +3765,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
    def EventMouse( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            if event.ShiftDown():

@@ -4398,7 +4402,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):
    def EventCharHook( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            ( modifier, key ) = ClientGUIShortcuts.ConvertKeyEventToSimpleTuple( event )

@@ -4416,7 +4420,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):
    def EventDelete( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            self._Delete()

@@ -4428,7 +4432,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):
    def EventMouse( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            if event.ShiftDown():

@@ -4480,7 +4484,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):
    def EventUndelete( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            self._Undelete()

@@ -4836,7 +4840,7 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):
    def EventCharHook( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            ( modifier, key ) = ClientGUIShortcuts.ConvertKeyEventToSimpleTuple( event )

@@ -4857,12 +4861,18 @@ class CanvasMediaListBrowser( CanvasMediaListNavigable ):
    def EventMouseWheel( self, event ):
        
-        if self._IShouldCatchShortcutEvent():
+        if self._IShouldCatchShortcutEvent( event = event ):
            
            if event.CmdDown():
                
-                if event.GetWheelRotation() > 0: self._ZoomIn()
-                else: self._ZoomOut()
+                if event.GetWheelRotation() > 0:
+                    
+                    self._ZoomIn()
+                    
+                else:
+                    
+                    self._ZoomOut()
+                    
+                
            else:
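The final hunk above only expands two one-line conditionals, but the underlying dispatch is worth stating plainly: with ctrl/cmd held, the wheel zooms; otherwise it falls through to navigation. A tiny sketch of that decision, with illustrative names rather than hydrus's real wx handlers:

```python
def on_mouse_wheel(cmd_down, wheel_rotation):
    """Decide what a mouse-wheel event should do in the media browser.

    cmd_down mirrors wx's event.CmdDown() (ctrl on Windows/Linux, cmd on
    macOS); wheel_rotation mirrors event.GetWheelRotation(), positive for
    scrolling up.
    """
    if cmd_down:
        return 'zoom_in' if wheel_rotation > 0 else 'zoom_out'
    return 'navigate'
```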
@@ -986,6 +986,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
        self._job_key = None
        self._in_break = False
        
+        self._similar_files_maintenance_status = None
+        
        new_options = self._controller.new_options
        
        self._currently_refreshing_maintenance_numbers = False

@@ -1458,6 +1460,11 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
        work_can_be_done = False
        
+        if self._similar_files_maintenance_status is None:
+            
+            return
+            
+        
        ( num_phashes_to_regen, num_branches_to_regen, searched_distances_to_count ) = self._similar_files_maintenance_status
        
        self._cog_button.Enable()

@@ -1516,7 +1523,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
        self._search_distance_button.SetLabelText( button_label )
        
-        num_searched = sum( ( count for ( value, count ) in list(searched_distances_to_count.items()) if value is not None and value >= search_distance ) )
+        num_searched = sum( ( count for ( value, count ) in searched_distances_to_count.items() if value is not None and value >= search_distance ) )
        
        if num_searched == total_num_files:
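The `num_searched` line in the last hunk drops a redundant `list()` around `dict.items()`; the counting logic itself is unchanged. Extracted as a standalone sketch (the helper name is illustrative): it sums the files already searched at or beyond the chosen distance, where a key of `None` marks files not yet searched at all.

```python
def count_searched(searched_distances_to_count, search_distance):
    """Sum counts of files whose recorded search distance meets or exceeds
    the requested distance; None-keyed entries are unsearched files."""
    return sum(
        count
        for value, count in searched_distances_to_count.items()
        if value is not None and value >= search_distance
    )
```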
@@ -4217,9 +4217,12 @@ class MediaPanelThumbnails( MediaPanel ):
        count = file_duplicate_types_to_counts[ duplicate_type ]
        
-        label = HydrusData.ToHumanInt( count ) + ' ' + HC.duplicate_type_string_lookup[ duplicate_type ]
-        
-        ClientGUIMenus.AppendMenuItem( self, duplicates_view_menu, label, 'Show these duplicates in a new page.', self._ShowDuplicatesInNewPage, focussed_hash, duplicate_type )
+        if count > 0:
+            
+            label = HydrusData.ToHumanInt( count ) + ' ' + HC.duplicate_type_string_lookup[ duplicate_type ]
+            
+            ClientGUIMenus.AppendMenuItem( self, duplicates_view_menu, label, 'Show these duplicates in a new page.', self._ShowDuplicatesInNewPage, focussed_hash, duplicate_type )
+            
@@ -864,6 +864,13 @@ class ReviewServicePanel( wx.Panel ):
            return
            
        
+        if self._service.GetServiceType() in HC.REPOSITORIES and self._service.IsPaused():
+            
+            wx.MessageBox( 'The service is paused! Please unpause it to refresh its account.' )
+            
+            return
+            
+        
        self._refresh_account_button.Disable()
        self._refresh_account_button.SetLabelText( 'fetching\u2026' )

@@ -1222,11 +1229,13 @@ class ReviewServicePanel( wx.Panel ):
        def wx_clean_up():
            
-            if self:
+            if not self:
                
-                self._check_running_button.Enable()
+                return
                
            
+            self._check_running_button.Enable()
+            
        
        def do_it():

@@ -1238,9 +1247,9 @@ class ReviewServicePanel( wx.Panel ):
            wx.CallAfter( wx.MessageBox, message )
            
-        except:
+        except Exception as e:
            
-            message = 'There was a problem! Check your popup messages for the error.'
+            message = 'There was a problem: {}'.format( str( e ) )
            
            wx.CallAfter( wx.MessageBox, message )

@@ -1334,6 +1343,11 @@ class ReviewServicePanel( wx.Panel ):
        def wx_done():
            
+            if not self:
+                
+                return
+                
+            
            self._ipfs_shares_panel.Enable()

@@ -1367,6 +1381,11 @@ class ReviewServicePanel( wx.Panel ):
        def wx_done():
            
+            if not self:
+                
+                return
+                
+            
            self._ipfs_shares_panel.Enable()
            
            self._my_updater.Update()
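Several hunks above make the same refactor: wx callbacks switch from nesting work inside `if self:` to an early `if not self: return` guard (a destroyed wx widget evaluates falsy, so the guard skips touching dead ui from a late thread callback). A generic sketch of the pattern, with illustrative names:

```python
def make_guarded_callback(widget, do_work):
    """Wrap ui work so it silently no-ops if the widget has been destroyed
    by the time a background thread calls back (destroyed wx widgets are
    falsy, which this toy version imitates with plain truthiness)."""
    def wx_done():
        if not widget:
            return  # widget already destroyed: nothing safe to do
        do_work()
    return wx_done
```

The guard-clause form also flattens the function body, which is why the diffs above read as a net simplification despite adding lines.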
@ -696,6 +696,46 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

def _TestAddress( self ):

def wx_done( message ):

if not self:

return

wx.MessageBox( message )

self._test_address_button.Enable()
self._test_address_button.SetLabel( 'test address' )

def do_it():

( host, port ) = credentials.GetAddress()

url = scheme + host + ':' + str( port ) + '/' + request

network_job = ClientNetworkingJobs.NetworkJobHydrus( CC.TEST_SERVICE_KEY, 'GET', url )

network_job.OnlyTryConnectionOnce()
network_job.OverrideBandwidth()

network_job.SetForLogin( True )

HG.client_controller.network_engine.AddJob( network_job )

try:

network_job.WaitUntilDone()

wx.CallAfter( wx_done, 'Looks good!' )

except HydrusExceptions.NetworkException as e:

wx.CallAfter( wx_done, 'Problem with that address: ' + str( e ) )

try:

credentials = self.GetCredentials()

@ -712,8 +752,6 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

return

( host, port ) = credentials.GetAddress()

if self._service_type == HC.IPFS:

scheme = 'http://'

@ -725,26 +763,10 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

request = ''

url = scheme + host + ':' + str( port ) + '/' + request

self._test_address_button.Disable()
self._test_address_button.SetLabel( 'testing\u2026' )

network_job = ClientNetworkingJobs.NetworkJobHydrus( CC.TEST_SERVICE_KEY, 'GET', url )

network_job.OverrideBandwidth()

network_job.SetForLogin( True )

HG.client_controller.network_engine.AddJob( network_job )

try:

network_job.WaitUntilDone()

wx.MessageBox( 'Got an ok response!' )

except HydrusExceptions.NetworkException as e:

wx.MessageBox( 'Problem with that address: ' + str( e ) )

HG.client_controller.CallToThread( do_it )

def GetCredentials( self ):

@ -832,6 +854,8 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

self._access_key.SetValue( access_key_encoded )

wx.MessageBox( 'Looks good!' )

def do_it( credentials, registration_key ):

@ -843,6 +867,7 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

network_job = ClientNetworkingJobs.NetworkJobHydrus( CC.TEST_SERVICE_KEY, 'GET', url )

network_job.OnlyTryConnectionOnce()
network_job.OverrideBandwidth()

network_job.SetForLogin( True )

@ -861,8 +886,6 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

wx.CallAfter( wx_setkey, access_key_encoded )

wx.CallAfter( wx.MessageBox, 'Looks good!' )

except Exception as e:

HydrusData.PrintException( e )

@ -935,44 +958,54 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

def _TestCredentials( self ):

def do_it( credentials ):
def wx_done( message ):

service = ClientServices.GenerateService( CC.TEST_SERVICE_KEY, self._service_type, 'test service' )
if not self:

return

wx.MessageBox( message )

self._test_credentials_button.Enable()
self._test_credentials_button.SetLabel( 'test access key' )

def do_it( credentials, service_type ):

service = ClientServices.GenerateService( CC.TEST_SERVICE_KEY, service_type, 'test service' )

service.SetCredentials( credentials )

try:

if self._service_type in HC.RESTRICTED_SERVICES:
response = service.Request( HC.GET, 'access_key_verification' )

if not response[ 'verified' ]:

response = service.Request( HC.GET, 'access_key_verification' )
message = 'That access key was not recognised!'

if not response[ 'verified' ]:

wx.CallAfter( wx.MessageBox, 'That access key was not recognised!' )

else:

wx.CallAfter( wx.MessageBox, 'Everything looks ok!' )

else:

message = 'Everything looks ok!'

except HydrusExceptions.WrongServiceTypeException:

wx.CallAfter( wx.MessageBox, 'Connection was made, but the service was not a ' + HC.service_string_lookup[ self._service_type ] + '.' )

return
message = 'Connection was made, but the service was not a {}.'.format( HC.service_string_lookup[ service_type ] )

except HydrusExceptions.NetworkException as e:

wx.CallAfter( wx.MessageBox, 'Network problem: ' + str( e ) )
message = 'Network problem: {}'.format( e )

return
except Exception as e:

message = 'Unexpected error: {}'.format( e )

finally:

self._test_credentials_button.Enable()
self._test_credentials_button.SetLabel( 'test access key' )
wx.CallAfter( wx_done, message )

@ -995,7 +1028,7 @@ class ManageClientServicesPanel( ClientGUIScrolledPanels.ManagePanel ):

self._test_credentials_button.Disable()

self._test_credentials_button.SetLabel( 'fetching\u2026' )

HG.client_controller.CallToThread( do_it, credentials )
HG.client_controller.CallToThread( do_it, credentials, self._service_type )

def GetCredentials( self ):

@ -126,7 +126,7 @@ def ConvertMouseEventToShortcut( event ):

return None

def IShouldCatchShortcutEvent( evt_handler, child_tlp_classes_who_can_pass_up = None ):
def IShouldCatchShortcutEvent( evt_handler, event = None, child_tlp_classes_who_can_pass_up = None ):

if HC.PLATFORM_WINDOWS and FLASHWIN_OK:

@ -138,21 +138,34 @@ def IShouldCatchShortcutEvent( evt_handler, child_tlp_classes_who_can_pass_up =

if not ClientGUIFunctions.WindowOrSameTLPChildHasFocus( evt_handler ):
do_focus_test = True

if event is not None and isinstance( event, wx.MouseEvent ):

if child_tlp_classes_who_can_pass_up is not None:
if event.GetEventType() == wx.wxEVT_MOUSEWHEEL:

child_tlp_has_focus = ClientGUIFunctions.WindowOrAnyTLPChildHasFocus( evt_handler ) and isinstance( ClientGUIFunctions.GetFocusTLP(), child_tlp_classes_who_can_pass_up )
do_focus_test = False

if not child_tlp_has_focus:

if do_focus_test:

if not ClientGUIFunctions.WindowOrSameTLPChildHasFocus( evt_handler ):

if child_tlp_classes_who_can_pass_up is not None:

child_tlp_has_focus = ClientGUIFunctions.WindowOrAnyTLPChildHasFocus( evt_handler ) and isinstance( ClientGUIFunctions.GetFocusTLP(), child_tlp_classes_who_can_pass_up )

if not child_tlp_has_focus:

return False

else:

return False

else:

return False

return True

@ -638,7 +651,7 @@ class ShortcutsHandler( object ):

message = 'Key shortcut "' + shortcut.ToString() + '" passing through ' + repr( self._parent ) + '.'

if IShouldCatchShortcutEvent( self._parent ):
if IShouldCatchShortcutEvent( self._parent, event = event ):

message += ' I am in a state to catch it.'

@ -650,7 +663,7 @@ class ShortcutsHandler( object ):

HydrusData.ShowText( message )

if IShouldCatchShortcutEvent( self._parent ):
if IShouldCatchShortcutEvent( self._parent, event = event ):

shortcut_processed = self._ProcessShortcut( shortcut )

@ -3579,6 +3579,8 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):

if potential_new in current_olds:

seen_tags = set()

d = dict( current_pairs )

next_new = potential_new

@ -3594,6 +3596,19 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):

return False

if next_new in seen_tags:

message = 'The pair you mean to add seems to connect to a sibling loop already in your database! Please undo this loop first. The tags involved in the loop are:'
message += os.linesep * 2
message += ', '.join( seen_tags )

wx.MessageBox( message )

return False

seen_tags.add( next_new )

return True

@ -3794,7 +3809,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):

show_all = self._show_all.GetValue()

for ( status, pairs ) in list(self._current_statuses_to_pairs.items()):
for ( status, pairs ) in self._current_statuses_to_pairs.items():

if status == HC.CONTENT_STATUS_DELETED:

@ -4002,7 +4017,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):

current_statuses_to_pairs = collections.defaultdict( set )

current_statuses_to_pairs.update( { key : set( value ) for ( key, value ) in list(original_statuses_to_pairs.items()) } )
current_statuses_to_pairs.update( { key : set( value ) for ( key, value ) in original_statuses_to_pairs.items() } )

wx.CallAfter( wx_code, original_statuses_to_pairs, current_statuses_to_pairs )

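The sibling-loop check added above walks the old-to-new sibling mapping from the proposed tag, bailing out when it revisits a tag it has already seen. A minimal standalone sketch of that cycle detection (the helper name `find_sibling_loop` is hypothetical, not hydrus's API):

```python
def find_sibling_loop( pairs, potential_new ):
    
    # pairs: iterable of ( old_tag, new_tag ) sibling pairs
    # follow the chain from potential_new; revisiting a tag means a loop,
    # in which case return the sorted tags stepped through, else None
    
    d = dict( pairs )
    
    seen_tags = set()
    
    next_new = potential_new
    
    while next_new in d:
        
        if next_new in seen_tags:
            
            return sorted( seen_tags )
            
        
        seen_tags.add( next_new )
        
        next_new = d[ next_new ]
        
    
    return None
```

The seen-set bounds the walk, so even a long sibling chain terminates in one pass.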
@ -398,7 +398,12 @@ class NewDialog( wx.Dialog ):

def CleanBeforeDestroy( self ):

pass
parent = self.GetParent()

if parent is not None and not ClientGUIFunctions.GetTLP( parent ) == HG.client_controller.gui:

wx.CallAfter( parent.SetFocus )

def DoOK( self ):

@ -741,7 +746,12 @@ class Frame( wx.Frame ):

def CleanBeforeDestroy( self ):

pass
parent = self.GetParent()

if parent is not None and not ClientGUIFunctions.GetTLP( parent ) == HG.client_controller.gui:

wx.CallAfter( parent.SetFocus )

def EventAboutToClose( self, event ):

@ -838,6 +848,11 @@ class FrameThatTakesScrollablePanel( FrameThatResizes ):

def GetPanel( self ):

return self._panel

def SetPanel( self, panel ):

self._panel = panel

@ -1152,6 +1152,9 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

file_seed._fixed_service_keys_to_tags = self._fixed_service_keys_to_tags.Duplicate()

file_seed._urls.update( self._urls )
file_seed._tags.update( self._tags )

try:

@ -1086,6 +1086,14 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

return self._no_work_until_reason + ' - ' + 'next check ' + HydrusData.TimestampToPrettyTimeDelta( self._next_check_time )

elif self._watcher_status != '':

return self._watcher_status

elif self._file_status != '':

return self._file_status

else:

return ''

@ -1,7 +1,6 @@

import collections
from . import ClientAPI
from . import ClientConstants as CC
from . import ClientFiles
from . import ClientImportFileSeeds
from . import ClientSearch
from . import ClientTags

@ -518,61 +517,6 @@ class HydrusResourceClientAPI( HydrusServerResources.HydrusResource ):

return request

def _ParseClientAPIKey( self, request, name_of_key ):

if request.requestHeaders.hasHeader( name_of_key ):

key_texts = request.requestHeaders.getRawHeaders( name_of_key )

key_text = key_texts[0]

try:

key = bytes.fromhex( key_text )

except:

raise Exception( 'Problem parsing {}!'.format( name_of_key ) )

elif name_of_key in request.parsed_request_args:

key = request.parsed_request_args[ name_of_key ]

else:

return None

return key

def _ParseClientAPIAccessKey( self, request ):

access_key = self._ParseClientAPIKey( request, 'Hydrus-Client-API-Access-Key' )

if access_key is None:

session_key = self._ParseClientAPIKey( request, 'Hydrus-Client-API-Session-Key' )

if session_key is None:

raise HydrusExceptions.MissingCredentialsException( 'No access key or session key provided!' )

try:

access_key = HG.client_controller.client_api_manager.GetAccessKey( session_key )

except HydrusExceptions.DataMissing as e:

raise HydrusExceptions.SessionException( str( e ) )

return access_key

def _reportDataUsed( self, request, num_bytes ):

self._service.ReportDataUsed( num_bytes )

@ -645,27 +589,53 @@ class HydrusResourceClientAPIRestricted( HydrusResourceClientAPI ):

HydrusResourceClientAPI._callbackCheckAccountRestrictions( self, request )

self._EstablishAPIPermissions( request )

self._CheckAPIPermissions( request )

return request

def _callbackEstablishAccountFromHeader( self, request ):

access_key = self._ParseClientAPIAccessKey( request, 'header' )

if access_key is not None:

self._EstablishAPIPermissions( request, access_key )

return request

def _callbackEstablishAccountFromArgs( self, request ):

if request.client_api_permissions is None:

access_key = self._ParseClientAPIAccessKey( request, 'args' )

if access_key is not None:

self._EstablishAPIPermissions( request, access_key )

if request.client_api_permissions is None:

raise HydrusExceptions.MissingCredentialsException( 'No access key or session key provided!' )

return request

def _CheckAPIPermissions( self, request ):

raise NotImplementedError()

def _EstablishAPIPermissions( self, request ):

client_api_manager = HG.client_controller.client_api_manager

access_key = self._ParseClientAPIAccessKey( request )
def _EstablishAPIPermissions( self, request, access_key ):

try:

api_permissions = client_api_manager.GetPermissions( access_key )
api_permissions = HG.client_controller.client_api_manager.GetPermissions( access_key )

except HydrusExceptions.DataMissing as e:

@ -675,6 +645,65 @@ class HydrusResourceClientAPIRestricted( HydrusResourceClientAPI ):

request.client_api_permissions = api_permissions

def _ParseClientAPIKey( self, request, source, name_of_key ):

key = None

if source == 'header':

if request.requestHeaders.hasHeader( name_of_key ):

key_texts = request.requestHeaders.getRawHeaders( name_of_key )

key_text = key_texts[0]

try:

key = bytes.fromhex( key_text )

except:

raise Exception( 'Problem parsing {}!'.format( name_of_key ) )

elif source == 'args':

if name_of_key in request.parsed_request_args:

key = request.parsed_request_args[ name_of_key ]

return key

def _ParseClientAPIAccessKey( self, request, source ):

access_key = self._ParseClientAPIKey( request, source, 'Hydrus-Client-API-Access-Key' )

if access_key is None:

session_key = self._ParseClientAPIKey( request, source, 'Hydrus-Client-API-Session-Key' )

if session_key is None:

return None

try:

access_key = HG.client_controller.client_api_manager.GetAccessKey( session_key )

except HydrusExceptions.DataMissing as e:

raise HydrusExceptions.SessionException( str( e ) )

return access_key

class HydrusResourceClientAPIRestrictedAccount( HydrusResourceClientAPIRestricted ):

def _CheckAPIPermissions( self, request ):

@ -1116,7 +1145,19 @@ class HydrusResourceClientAPIRestrictedAddURLsGetURLFiles( HydrusResourceClientA

url = request.parsed_request_args[ 'url' ]

normalised_url = HG.client_controller.network_engine.domain_manager.NormaliseURL( url )
if url == '':

raise HydrusExceptions.BadRequestException( 'Given URL was empty!' )

try:

normalised_url = HG.client_controller.network_engine.domain_manager.NormaliseURL( url )

except HydrusExceptions.URLClassException as e:

raise HydrusExceptions.BadRequestException( e )

url_statuses = HG.client_controller.Read( 'url_statuses', normalised_url )

@ -1148,7 +1189,19 @@ class HydrusResourceClientAPIRestrictedAddURLsGetURLInfo( HydrusResourceClientAP

url = request.parsed_request_args[ 'url' ]

normalised_url = HG.client_controller.network_engine.domain_manager.NormaliseURL( url )
if url == '':

raise HydrusExceptions.BadRequestException( 'Given URL was empty!' )

try:

normalised_url = HG.client_controller.network_engine.domain_manager.NormaliseURL( url )

except HydrusExceptions.URLClassException as e:

raise HydrusExceptions.BadRequestException( e )

( url_type, match_name, can_parse ) = HG.client_controller.network_engine.domain_manager.GetURLParseCapability( normalised_url )

@ -1167,6 +1220,11 @@ class HydrusResourceClientAPIRestrictedAddURLsImportURL( HydrusResourceClientAPI

url = request.parsed_request_args[ 'url' ]

if url == '':

raise HydrusExceptions.BadRequestException( 'Given URL was empty!' )

service_keys_to_tags = None

if 'service_names_to_tags' in request.parsed_request_args:

@ -1240,7 +1298,14 @@ class HydrusResourceClientAPIRestrictedAddURLsImportURL( HydrusResourceClientAPI

return HG.client_controller.gui.ImportURLFromAPI( url, service_keys_to_tags, destination_page_name, destination_page_key, show_destination_page )

( normalised_url, result_text ) = HG.client_controller.CallBlockingToWX( HG.client_controller.gui, do_it )
try:

( normalised_url, result_text ) = HG.client_controller.CallBlockingToWX( HG.client_controller.gui, do_it )

except HydrusExceptions.URLClassException as e:

raise HydrusExceptions.BadRequestException( e )

time.sleep( 0.05 ) # yield and give the ui time to catch up with new URL pubsubs in case this is being spammed

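The key-parsing refactor above accepts a client API key from either a raw request header (hex text that must be decoded) or the already-parsed args, depending on the `source` passed in. A rough standalone sketch of that two-source lookup, with plain dicts standing in for the request object (the helper name and signature are illustrative, not hydrus's API):

```python
def parse_client_api_key( headers, parsed_args, source, name_of_key ):
    
    # headers: dict mapping header name -> list of raw string values
    # parsed_args: dict of already-decoded request args
    # returns the key as bytes, or None if absent from the requested source
    
    if source == 'header':
        
        if name_of_key in headers:
            
            key_text = headers[ name_of_key ][0]
            
            try:
                
                # header values arrive as hex text and must be decoded
                return bytes.fromhex( key_text )
                
            except ValueError:
                
                raise Exception( 'Problem parsing {}!'.format( name_of_key ) )
                
            
        
    elif source == 'args':
        
        if name_of_key in parsed_args:
            
            return parsed_args[ name_of_key ]
            
        
    
    return None
```

Returning None (rather than raising) for a missing key is what lets the header pass run first and the args pass fill in only when nothing was found, mirroring the two `_callbackEstablishAccount...` stages above.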
@ -282,11 +282,11 @@ class NetworkEngine( object ):

if job.IsHydrusJob():

message = 'This hydrus service (' + job.GetLoginNetworkContext().ToString() + ') recently failed to log in. Please hit its \'refresh account\' under \'review services\' and try again.'
message = 'This hydrus service (' + job.GetLoginNetworkContext().ToString() + ') could not do work because: {}'.format( str( e ) )

else:

message = 'This job\'s network context (' + job.GetLoginNetworkContext().ToString() + ') seems to have an invalid login, and it is not willing to wait! Please review its login details and then try again.'
message = 'This job\'s network context (' + job.GetLoginNetworkContext().ToString() + ') seems to have an invalid login. The error was: {}'.format( str( e ) )

job.Cancel( message )

@ -110,6 +110,8 @@ class NetworkJob( object ):

self._method = method
self._url = url

self._max_connection_attempts_allowed = 5

self._domain = ClientNetworkingDomain.ConvertURLIntoDomain( self._url )
self._second_level_domain = ClientNetworkingDomain.ConvertURLIntoSecondLevelDomain( self._url )

@ -167,9 +169,7 @@ class NetworkJob( object ):

def _CanReattemptConnection( self ):

max_attempts_allowed = 3

return self._current_connection_attempt_number <= max_attempts_allowed
return self._current_connection_attempt_number <= self._max_connection_attempts_allowed

def _CanReattemptRequest( self ):

@ -347,7 +347,7 @@ class NetworkJob( object ):

if 'content-length' in response.headers:

self._num_bytes_to_read = int( response.headers[ 'content-length' ] )

if max_allowed is not None and self._num_bytes_to_read > max_allowed:

@ -368,6 +368,8 @@ class NetworkJob( object ):

num_bytes_read_is_accurate = True

for chunk in response.iter_content( chunk_size = 65536 ):

if self._IsCancelled():

@ -377,11 +379,33 @@ class NetworkJob( object ):

stream_dest.write( chunk )

chunk_length = len( chunk )
total_bytes_read = response.raw.tell()

if total_bytes_read == 0:

# this seems to occur when the response is chunked transfer encoding (note, no Content-Length)
# there's no great way to track raw bytes read in this case. the iter_content chunk can be unzipped from that
# nonetheless, requests does raise ChunkedEncodingError if it stops early, so not a huge deal to miss here, just slightly off bandwidth tracking

num_bytes_read_is_accurate = False

chunk_num_bytes = len( chunk )

self._num_bytes_read += chunk_num_bytes

else:

chunk_num_bytes = total_bytes_read - self._num_bytes_read

self._num_bytes_read = total_bytes_read

with self._lock:

self._num_bytes_read += chunk_length
if self._num_bytes_to_read is not None and num_bytes_read_is_accurate and self._num_bytes_read > self._num_bytes_to_read:

raise HydrusExceptions.NetworkException( 'Too much data: Was expecting {} but server continued responding!'.format( HydrusData.ToHumanBytes( self._num_bytes_to_read ) ) )

if max_allowed is not None and self._num_bytes_read > max_allowed:

@ -396,7 +420,7 @@ class NetworkJob( object ):

self._ReportDataUsed( chunk_length )
self._ReportDataUsed( chunk_num_bytes )

self._WaitOnOngoingBandwidth()

if HG.view_shutdown:

@ -405,9 +429,9 @@ class NetworkJob( object ):

if self._num_bytes_to_read is not None and self._num_bytes_read < self._num_bytes_to_read * 0.8:
if self._num_bytes_to_read is not None and num_bytes_read_is_accurate and self._num_bytes_read < self._num_bytes_to_read:

raise HydrusExceptions.ShouldReattemptNetworkException( 'Was expecting ' + HydrusData.ToHumanBytes( self._num_bytes_to_read ) + ' but only got ' + HydrusData.ToHumanBytes( self._num_bytes_read ) + '.' )
raise HydrusExceptions.ShouldReattemptNetworkException( 'Incomplete response: Was expecting {} but actually got {} !'.format( HydrusData.ToHumanBytes( self._num_bytes_to_read ), HydrusData.ToHumanBytes( self._num_bytes_read ) ) )

@ -809,6 +833,11 @@ class NetworkJob( object ):

return self._ObeysBandwidth()

def OnlyTryConnectionOnce( self ):

self._max_connection_attempts_allowed = 1

def OverrideBandwidth( self, delay = None ):

with self._lock:

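The download-accounting change above prefers the raw on-the-wire byte count (what `response.raw.tell()` reports for a `requests` response) and falls back to the decompressed `len( chunk )` when that count is unavailable, as happens with chunked transfer encoding. A simplified, side-effect-free sketch of that per-chunk arithmetic (the function name and tuple return are illustrative, not hydrus's API):

```python
def count_chunk_bytes( num_bytes_read, chunk, total_raw_bytes_read ):
    
    # num_bytes_read: running total before this chunk
    # total_raw_bytes_read: compressed bytes pulled off the wire so far,
    #   or 0 when no raw count is available (chunked transfer encoding)
    # returns ( bytes to report for this chunk, new running total, count_is_accurate )
    
    if total_raw_bytes_read == 0:
        
        # fall back to the decompressed chunk length, which can overcount
        # for compressed responses, so flag the running total as inaccurate
        
        chunk_num_bytes = len( chunk )
        
        return ( chunk_num_bytes, num_bytes_read + chunk_num_bytes, False )
        
    else:
        
        # report only the new wire bytes since the previous chunk
        
        chunk_num_bytes = total_raw_bytes_read - num_bytes_read
        
        return ( chunk_num_bytes, total_raw_bytes_read, True )
```

The accuracy flag is what lets the too-much-data and incomplete-response checks above skip their Content-Length comparisons when the count is only approximate.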
@ -2611,6 +2611,7 @@ class ParseRootFileLookup( HydrusSerialisable.SerialisableBaseNamed ):

additional_headers = {}
files = None
f = None

if self._file_identifier_type == FILE_IDENTIFIER_TYPE_FILE:

@ -2633,7 +2634,9 @@ class ParseRootFileLookup( HydrusSerialisable.SerialisableBaseNamed ):

else:

files = { self._file_identifier_arg_name : open( path, 'rb' ) }
f = open( path, 'rb' )

files = { self._file_identifier_arg_name : f }

else:

@ -2650,7 +2653,7 @@ class ParseRootFileLookup( HydrusSerialisable.SerialisableBaseNamed ):

network_job.SetFiles( files )

for ( key, value ) in list(additional_headers.items()):
for ( key, value ) in additional_headers.items():

network_job.AddAdditionalHeader( key, value )

@ -2680,6 +2683,13 @@ class ParseRootFileLookup( HydrusSerialisable.SerialisableBaseNamed ):

raise

finally:

if f is not None:

f.close()

if job_key.IsCancelled():

@ -934,6 +934,11 @@ class ServiceRestricted( ServiceRemote ):

network_job = ClientNetworkingJobs.NetworkJobHydrus( self._service_key, method, url, body = body, temp_path = temp_path )

if command in ( '', 'account', 'access_key_verification' ):

network_job.OnlyTryConnectionOnce()

if command in ( '', 'access_key', 'access_key_verification' ):

# don't try to establish a session key for these requests

@ -1037,10 +1042,9 @@ class ServiceRestricted( ServiceRemote ):

do_it = True

if force:

self._no_requests_until = 0

self._no_requests_until = 0

self._account = HydrusNetwork.Account.GenerateUnknownAccount()

else:

@ -1790,6 +1794,7 @@ class ServiceIPFS( ServiceRemote ):

network_job = ClientNetworkingJobs.NetworkJob( 'GET', url )

network_job.OnlyTryConnectionOnce()
network_job.OverrideBandwidth()

HG.client_controller.network_engine.AddJob( network_job )

@ -2015,17 +2020,20 @@ class ServiceIPFS( ServiceRemote ):

path = client_files_manager.GetFilePath( hash, mime )

files = { 'path' : ( hash.hex(), open( path, 'rb' ), mime_string ) }

network_job = ClientNetworkingJobs.NetworkJob( 'GET', url )

network_job.SetFiles( files )

network_job.OverrideBandwidth()

HG.client_controller.network_engine.AddJob( network_job )

network_job.WaitUntilDone()
with open( path, 'rb' ) as f:

files = { 'path' : ( hash.hex(), f, mime_string ) }

network_job = ClientNetworkingJobs.NetworkJob( 'GET', url )

network_job.SetFiles( files )

network_job.OverrideBandwidth()

HG.client_controller.network_engine.AddJob( network_job )

network_job.WaitUntilDone()

parsing_text = network_job.GetContentText()

@ -67,7 +67,7 @@ options = {}

# Misc

NETWORK_VERSION = 18
SOFTWARE_VERSION = 357
SOFTWARE_VERSION = 358
CLIENT_API_VERSION = 8

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@ -205,6 +205,7 @@ DUPLICATE_LARGER_BETTER = 6

DUPLICATE_WORSE = 7
DUPLICATE_MEMBER = 8
DUPLICATE_KING = 9
DUPLICATE_CONFIRMED_ALTERNATE = 10

duplicate_type_string_lookup = {}

@ -218,6 +219,7 @@ duplicate_type_string_lookup[ DUPLICATE_LARGER_BETTER ] = 'larger hash_id is bet

duplicate_type_string_lookup[ DUPLICATE_WORSE ] = 'this is worse'
duplicate_type_string_lookup[ DUPLICATE_MEMBER ] = 'duplicates'
duplicate_type_string_lookup[ DUPLICATE_KING ] = 'the best quality duplicate'
duplicate_type_string_lookup[ DUPLICATE_CONFIRMED_ALTERNATE ] = 'confirmed alternates'

ENCODING_RAW = 0
ENCODING_HEX = 1

@ -220,18 +220,12 @@ def GenerateNumPyImageFromPILImage( pil_image ):

def GeneratePILImage( path ):

fp = open( path, 'rb' )

try:

pil_image = PILImage.open( fp )
pil_image = PILImage.open( path )

except Exception as e:

# pil doesn't clean up its open file on exception, jej

fp.close()

raise HydrusExceptions.MimeException( 'Could not load the image--it was likely malformed!' )

@ -381,7 +381,7 @@ class Account( object ):

if not self._account_type.BandwidthOK( self._bandwidth_tracker ):

raise HydrusExceptions.InsufficientCredentialsException( 'account has exceeded bandwidth' )
raise HydrusExceptions.BandwidthException( 'account has exceeded bandwidth' )

@ -19,6 +19,7 @@ class HydrusRequest( Request ):

self.parsed_request_args = None
self.hydrus_response_context = None
self.hydrus_account = None
self.client_api_permissions = None

class HydrusRequestLogging( HydrusRequest ):

@ -2,7 +2,6 @@ from . import HydrusConstants as HC

from . import HydrusExceptions
from . import HydrusFileHandling
from . import HydrusImageHandling
from . import HydrusNetwork
from . import HydrusNetworking
from . import HydrusPaths
from . import HydrusSerialisable

@ -322,6 +321,16 @@ class HydrusResource( Resource ):

return request

def _callbackEstablishAccountFromHeader( self, request ):

return request

def _callbackEstablishAccountFromArgs( self, request ):

return request

def _callbackParseGETArgs( self, request ):

return request

@ -670,12 +679,12 @@ class HydrusResource( Resource ):

allowed_methods = []

if self._threadDoGETJob.__func__ != HydrusResource._threadDoGETJob:
if self._threadDoGETJob.__func__ is not HydrusResource._threadDoGETJob:

allowed_methods.append( 'GET' )

if self._threadDoPOSTJob.__func__ != HydrusResource._threadDoPOSTJob:
if self._threadDoPOSTJob.__func__ is not HydrusResource._threadDoPOSTJob:

allowed_methods.append( 'POST' )

@ -736,8 +745,12 @@ class HydrusResource( Resource ):

d.addCallback( self._callbackCheckServiceRestrictions )

d.addCallback( self._callbackEstablishAccountFromHeader )

d.addCallback( self._callbackParseGETArgs )

d.addCallback( self._callbackEstablishAccountFromArgs )

d.addCallback( self._callbackCheckAccountRestrictions )

d.addCallback( self._callbackDoGETJob )

@ -786,8 +799,12 @@ class HydrusResource( Resource ):

d.addCallback( self._callbackCheckServiceRestrictions )

d.addCallback( self._callbackEstablishAccountFromHeader )

d.addCallback( self._callbackParsePOSTArgs )

d.addCallback( self._callbackEstablishAccountFromArgs )

d.addCallback( self._callbackCheckAccountRestrictions )

d.addCallback( self._callbackDoPOSTJob )

@ -692,7 +692,10 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
result = self._c.execute( 'SELECT account_id FROM accounts WHERE account_key = ?;', ( sqlite3.Binary( account_key ), ) ).fetchone()
|
||||
|
||||
if result is None: raise HydrusExceptions.InsufficientCredentialsException( 'The service could not find that account key in its database.' )
|
||||
if result is None:
|
||||
|
||||
raise HydrusExceptions.InsufficientCredentialsException( 'The service could not find that account key in its database.' )
|
||||
|
||||
|
||||
( account_id, ) = result
|
||||
|
||||
|
@@ -1932,13 +1935,15 @@ class DB( HydrusDB.HydrusDB ):

( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + petitioned_files_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PETITION
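This hunk (and its siblings for mappings, tag parents, and tag siblings below) replaces `ORDER BY RANDOM() LIMIT 1`, which sorts the whole table on every call, with a bounded `DISTINCT` batch that is sampled client-side. A sketch of the same trade under assumed table and column names:

```python
import random
import sqlite3

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE petitions ( account_id INTEGER, reason_id INTEGER );' )
db.executemany( 'INSERT INTO petitions ( account_id, reason_id ) VALUES ( ?, ? );', [ ( i % 5, i % 3 ) for i in range( 1000 ) ] )

def random_petition( c ):
    # fetch a bounded batch of distinct rows and choose in Python;
    # ORDER BY RANDOM() LIMIT 1 would scan and sort the whole table per call
    results = c.execute( 'SELECT DISTINCT account_id, reason_id FROM petitions LIMIT 100;' ).fetchall()
    if len( results ) == 0:
        raise LookupError( 'No petitions!' )
    return random.choice( results )
```

The batch is not uniformly random over the whole table, but for the "give me some petition to work on" use case, a cheap pick from the first hundred distinct rows is good enough.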
@@ -1990,7 +1995,7 @@ class DB( HydrusDB.HydrusDB ):

if account.HasPermission( HC.CONTENT_TYPE_FILES, HC.PERMISSION_ACTION_OVERRULE ):

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' );' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 1000 );' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_FILES, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )
@@ -1999,7 +2004,7 @@ class DB( HydrusDB.HydrusDB ):

if account.HasPermission( HC.CONTENT_TYPE_MAPPINGS, HC.PERMISSION_ACTION_OVERRULE ):

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, account_id, reason_id FROM ' + petitioned_mappings_table_name + ' );' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 1000 );' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )
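The count queries in these hunks gain a `LIMIT` inside the subquery, so the count saturates instead of walking every distinct row for an exact total. A runnable sketch of the capped-count shape, with an assumed table name:

```python
import sqlite3

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE petitioned_mappings ( service_tag_id INTEGER, account_id INTEGER, reason_id INTEGER );' )
db.executemany( 'INSERT INTO petitioned_mappings VALUES ( ?, ?, ? );', [ ( i, i, i ) for i in range( 5000 ) ] )

# putting the LIMIT inside the subquery caps the work: the scan stops after
# 1000 distinct rows, so a huge backlog costs the same as a modest one
( num_petitions, ) = db.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, account_id, reason_id FROM petitioned_mappings LIMIT 1000 );' ).fetchone()
```

Note the placement matters: a `LIMIT` on the outer `SELECT COUNT( * )` would only limit the (single-row) result and not cap the scan at all.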
@@ -2008,11 +2013,11 @@ class DB( HydrusDB.HydrusDB ):

if account.HasPermission( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.PERMISSION_ACTION_OVERRULE ):

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + pending_tag_parents_table_name + ';' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + pending_tag_parents_table_name + ' LIMIT 1000;' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_STATUS_PENDING, num_petitions ) )

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + petitioned_tag_parents_table_name + ';' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + petitioned_tag_parents_table_name + ' LIMIT 1000;' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )
@@ -2021,11 +2026,11 @@ class DB( HydrusDB.HydrusDB ):

if account.HasPermission( HC.CONTENT_TYPE_TAG_PARENTS, HC.PERMISSION_ACTION_OVERRULE ):

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + pending_tag_siblings_table_name + ';' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + pending_tag_siblings_table_name + ' LIMIT 1000;' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_STATUS_PENDING, num_petitions ) )

-    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + petitioned_tag_siblings_table_name + ';' ).fetchone()
+    ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 1000;' ).fetchone()

    petition_count_info.append( ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )
@@ -2037,13 +2042,15 @@ class DB( HydrusDB.HydrusDB ):

( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + petitioned_mappings_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PETITION
@@ -2306,13 +2313,15 @@ class DB( HydrusDB.HydrusDB ):

( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PEND
@@ -2357,13 +2366,15 @@ class DB( HydrusDB.HydrusDB ):

( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PETITION
@@ -2411,13 +2422,15 @@ class DB( HydrusDB.HydrusDB ):

( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PEND
@@ -2462,13 +2475,15 @@ class DB( HydrusDB.HydrusDB ):

( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-result = self._c.execute( 'SELECT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' ORDER BY RANDOM() LIMIT 1;' ).fetchone()
+result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()

-if result is None:
+if len( result ) == 0:

    raise HydrusExceptions.NotFoundException( 'No petitions!' )

+result = random.choice( result )
+
( petitioner_account_id, reason_id ) = result

action = HC.CONTENT_UPDATE_PETITION
@@ -174,38 +174,16 @@ class HydrusResourceRestricted( HydrusResourceHydrusNetwork ):

HydrusResourceHydrusNetwork._callbackCheckAccountRestrictions( self, request )

-self._checkSession( request )
-
-self._checkAccount( request )
-
return request

-def _checkAccount( self, request ):
-
-    request.hydrus_account.CheckFunctional()
-
-    return request
-
-def _checkBandwidth( self, request ):
-
-    if not self._service.BandwidthOK():
-
-        raise HydrusExceptions.BandwidthException( 'This service has run out of bandwidth. Please try again later.' )
-
-    if not HG.server_controller.ServerBandwidthOK():
-
-        raise HydrusExceptions.BandwidthException( 'This server has run out of bandwidth. Please try again later.' )
-
-def _checkSession( self, request ):
+def _callbackEstablishAccountFromHeader( self, request ):

    if not request.requestHeaders.hasHeader( 'Cookie' ):

-        raise HydrusExceptions.MissingCredentialsException( 'No cookies found!' )
+        raise HydrusExceptions.MissingCredentialsException( 'No Session Cookie found!' )

    cookie_texts = request.requestHeaders.getRawHeaders( 'Cookie' )
@@ -232,7 +210,7 @@ class HydrusResourceRestricted( HydrusResourceHydrusNetwork ):

except:

-    raise Exception( 'Problem parsing cookies!' )
+    raise Exception( 'Problem parsing cookies for Session Cookie!' )

account = HG.server_controller.server_session_manager.GetAccount( self._service_key, session_key )
@@ -242,6 +220,26 @@ class HydrusResourceRestricted( HydrusResourceHydrusNetwork ):

return request

+def _checkAccount( self, request ):
+
+    request.hydrus_account.CheckFunctional()
+
+    return request
+
+def _checkBandwidth( self, request ):
+
+    if not self._service.BandwidthOK():
+
+        raise HydrusExceptions.BandwidthException( 'This service has run out of bandwidth. Please try again later.' )
+
+    if not HG.server_controller.ServerBandwidthOK():
+
+        raise HydrusExceptions.BandwidthException( 'This server has run out of bandwidth. Please try again later.' )
+
def _reportDataUsed( self, request, num_bytes ):

    HydrusResourceHydrusNetwork._reportDataUsed( self, request, num_bytes )
@@ -448,7 +446,14 @@ class HydrusResourceRestrictedRepositoryFile( HydrusResourceRestricted ):

def _DecompressionBombsOK( self, request ):

-    return request.hydrus_account.HasPermission( HC.CONTENT_TYPE_ACCOUNTS, HC.PERMISSION_ACTION_CREATE )
+    if request.hydrus_account is None:
+
+        return False
+
+    else:
+
+        return request.hydrus_account.HasPermission( HC.CONTENT_TYPE_ACCOUNTS, HC.PERMISSION_ACTION_CREATE )

def _threadDoGETJob( self, request ):
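The `_DecompressionBombsOK` change guards against a request that has no account attached yet. The shape of the fix, with hypothetical stand-ins for the account object and permission flags:

```python
# hypothetical stand-ins for the hydrus account object and permission constants
class Account:

    def HasPermission( self, content_type, action ):

        return True

def decompression_bombs_ok( hydrus_account ):

    # an unauthenticated request carries no account yet, so default to the
    # cautious answer instead of raising AttributeError on None
    if hydrus_account is None:

        return False

    else:

        return hydrus_account.HasPermission( 'accounts', 'create' )
```

Defaulting to `False` means an anonymous upload gets the strict decompression-bomb check rather than crashing the request pipeline.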
@@ -96,12 +96,13 @@ class TestClientDBDuplicates( unittest.TestCase ):

if HC.DUPLICATE_FALSE_POSITIVE in file_duplicate_types_to_counts:

+    # this would not work if the fp group had mutiple alt members
    num_potentials -= file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ]

if HC.DUPLICATE_ALTERNATE in file_duplicate_types_to_counts:

-    num_potentials -= file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ]
+    num_potentials -= file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ]

return num_potentials
@@ -141,7 +142,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self.assertEqual( num_potentials, self._expected_num_potentials )

result = self._read( 'random_potential_duplicate_hashes', self._file_search_context, both_files_match )
@@ -165,7 +166,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 1 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._dupe_hashes[0], HC.DUPLICATE_POTENTIAL )
@@ -192,7 +193,15 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self._num_free_agents -= 1
+
+self._expected_num_potentials -= self._num_free_agents
+
+self._num_free_agents -= 1
+
+self._expected_num_potentials -= self._num_free_agents
+
+self.assertEqual( num_potentials, self._expected_num_potentials )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -202,7 +211,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._dupe_hashes[1] )
@@ -213,7 +222,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._dupe_hashes[2] )
@@ -224,7 +233,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -270,7 +279,11 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self._num_free_agents -= 1
+
+self._expected_num_potentials -= self._num_free_agents
+
+self.assertEqual( num_potentials, self._expected_num_potentials )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -280,7 +293,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._old_king_hash )
@@ -291,7 +304,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -331,7 +344,15 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self._num_free_agents -= 1
+
+self._expected_num_potentials -= self._num_free_agents
+
+self._num_free_agents -= 1
+
+self._expected_num_potentials -= self._num_free_agents
+
+self.assertEqual( num_potentials, self._expected_num_potentials )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -341,7 +362,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._dupe_hashes[4] )
@@ -352,7 +373,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._dupe_hashes[5] )
@@ -363,7 +384,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -446,7 +467,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._second_group_king_hash )
@@ -457,7 +478,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_second_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -487,6 +508,14 @@ class TestClientDBDuplicates( unittest.TestCase ):

self._write( 'duplicate_pair_status', [ row ] )

+both_files_match = True
+
+num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )
+
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials
+
self._our_second_dupe_group_hashes.discard( self._second_group_dupe_hashes[2] )
self._our_main_dupe_group_hashes.add( self._second_group_dupe_hashes[2] )
@@ -500,7 +529,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._second_group_king_hash )
@@ -511,7 +540,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_second_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -553,6 +582,14 @@ class TestClientDBDuplicates( unittest.TestCase ):

self._write( 'duplicate_pair_status', rows )

+both_files_match = True
+
+num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )
+
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials
+
self._our_main_dupe_group_hashes.update( ( self._dupe_hashes[ i ] for i in range( 6, 14 ) ) )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -563,7 +600,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_KING )
@@ -585,6 +622,14 @@ class TestClientDBDuplicates( unittest.TestCase ):

self._write( 'duplicate_pair_status', rows )

+both_files_match = True
+
+num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )
+
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials
+
self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[1] )
self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[2] )
@@ -599,7 +644,9 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -609,7 +656,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 3 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
@@ -621,7 +668,7 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 3 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_fp_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
@@ -643,6 +690,14 @@ class TestClientDBDuplicates( unittest.TestCase ):

self._write( 'duplicate_pair_status', rows )

+both_files_match = True
+
+num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )
+
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials
+
self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[1] )
self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[2] )
@@ -657,7 +712,9 @@ class TestClientDBDuplicates( unittest.TestCase ):

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

-#self.assertEqual( num_potentials, self._expected_num_potentials )
+self.assertLess( num_potentials, self._expected_num_potentials )
+
+self._expected_num_potentials = num_potentials

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -665,12 +722,15 @@ class TestClientDBDuplicates( unittest.TestCase ):

file_duplicate_types_to_counts = result[ 'counts' ]

-self.assertEqual( len( file_duplicate_types_to_counts ), 4 )
+self.assertEqual( len( file_duplicate_types_to_counts ), 5 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ], 1 )

-result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._alternate_king_hash, HC.DUPLICATE_POTENTIAL )
+result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._alternate_king_hash )
@@ -678,12 +738,13 @@ class TestClientDBDuplicates( unittest.TestCase ):

file_duplicate_types_to_counts = result[ 'counts' ]

-self.assertEqual( len( file_duplicate_types_to_counts ), 4 )
+self.assertEqual( len( file_duplicate_types_to_counts ), 5 )

-#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_alt_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
+self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ], 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_ALTERNATE )
@ -703,14 +764,16 @@ class TestClientDBDuplicates( unittest.TestCase ):
self._write( 'duplicate_pair_status', rows )

self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[3] )
self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[4] )

both_files_match = True

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

#self.assertEqual( num_potentials, self._expected_num_potentials )
self.assertLess( num_potentials, self._expected_num_potentials )

self._expected_num_potentials = num_potentials

self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[3] )
self._our_fp_dupe_group_hashes.add( self._similar_looking_false_positive_hashes[4] )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
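The switch above from an exact `assertEqual` to `assertLess` (followed by re-baselining `self._expected_num_potentials`) fits the new group-based model: once potentials are tracked between duplicate groups rather than individual files, merging files into a group strictly shrinks the pool of potential pairs. A minimal sketch of that reasoning, assuming potentials are counted per group (the `pair_count` helper here is illustrative, not hydrus code):

```python
def pair_count( num_groups ):
    # unordered pair combinations among num_groups items: n*(n-1)/2
    return num_groups * ( num_groups - 1 ) // 2

n = 10

before = pair_count( n )      # every file starts as its own group
after = pair_count( n - 2 )   # two pair-merges leave two fewer groups

# merging groups can only reduce the potential-pair count,
# hence assertLess rather than assertEqual in the test
assert after < before
```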
@@ -718,12 +781,13 @@ class TestClientDBDuplicates( unittest.TestCase ):
file_duplicate_types_to_counts = result[ 'counts' ]

self.assertEqual( len( file_duplicate_types_to_counts ), 4 )
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )

#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ], 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._false_positive_king_hash )
@@ -733,7 +797,7 @@ class TestClientDBDuplicates( unittest.TestCase ):
self.assertEqual( len( file_duplicate_types_to_counts ), 3 )

#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_fp_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 2 )
@@ -757,14 +821,16 @@ class TestClientDBDuplicates( unittest.TestCase ):
self._write( 'duplicate_pair_status', rows )

self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[3] )
self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[4] )

both_files_match = True

num_potentials = self._read( 'potential_duplicates_count', self._file_search_context, both_files_match )

#self.assertEqual( num_potentials, self._expected_num_potentials )
self.assertLess( num_potentials, self._expected_num_potentials )

self._expected_num_potentials = num_potentials

self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[3] )
self._our_alt_dupe_group_hashes.add( self._similar_looking_alternate_hashes[4] )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash )
@@ -772,12 +838,13 @@ class TestClientDBDuplicates( unittest.TestCase ):
file_duplicate_types_to_counts = result[ 'counts' ]

self.assertEqual( len( file_duplicate_types_to_counts ), 4 )
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )

#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ], 1 )

result = self._read( 'file_duplicate_info', CC.LOCAL_FILE_SERVICE_KEY, self._alternate_king_hash )
@@ -785,12 +852,13 @@ class TestClientDBDuplicates( unittest.TestCase ):
file_duplicate_types_to_counts = result[ 'counts' ]

self.assertEqual( len( file_duplicate_types_to_counts ), 4 )
self.assertEqual( len( file_duplicate_types_to_counts ), 5 )

#self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_alt_dupe_group_hashes ) - 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_FALSE_POSITIVE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_ALTERNATE ], 1 )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_CONFIRMED_ALTERNATE ], 1 )

result = self._read( 'file_duplicate_hashes', CC.LOCAL_FILE_SERVICE_KEY, self._king_hash, HC.DUPLICATE_ALTERNATE )
@@ -836,8 +904,10 @@ class TestClientDBDuplicates( unittest.TestCase ):
n = len( self._all_hashes )

self._num_free_agents = n

# initial number pair combinations is (n(n-1))/2
self._expected_num_potentials = n * ( n - 1 ) / 2
self._expected_num_potentials = int( n * ( n - 1 ) / 2 )

size_pred = ClientSearch.Predicate( HC.PREDICATE_TYPE_SYSTEM_SIZE, ( '=', 65535, HydrusData.ConvertUnitToInt( 'B' ) ) )
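The change above wraps the pair-combination formula in `int()` because Python 3's `/` operator is float division, so `n * ( n - 1 ) / 2` yields a float even though the count is always a whole number. A short sketch (not hydrus code) that cross-checks the formula against an explicit enumeration of unordered pairs:

```python
import itertools

n = 10

# n*(n-1)/2 is always even-divisible, but Python 3 `/` returns a float,
# so the count must be coerced back to int for exact comparisons
expected = int( n * ( n - 1 ) / 2 )

# enumerate every unordered pair of n items and count them directly
pairs = list( itertools.combinations( range( n ), 2 ) )

assert expected == len( pairs ) == 45
```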
@@ -175,7 +175,7 @@ class TestServer( unittest.TestCase ):
except: pass

path = os.path.join( HC.STATIC_DIR, 'hydrus.png' )

with open( path, 'rb' ) as f:

    file_bytes = f.read()
Binary file not shown.
Before: 2.9 KiB | After: 2.9 KiB