Version 260

This commit is contained in:
parent a73d18d6c4
commit 64bf9bcebb
@@ -8,6 +8,26 @@
 <div class="content">
 <h3>changelog</h3>
 <ul>
+<li><h3>version 260</h3></li>
+<ul>
+<li>fixed video parsing when the video metadata includes random non-utf-friendly garbage</li>
+<li>fixed video parsing when ffmpeg reports no fps at all</li>
+<li>improved video frame counting accuracy</li>
+<li>thumbnail waterfall process now adapts to the current speed of the cache and can hence use significantly less overhead</li>
+<li>the thumbnail fading process now adapts to the current rendering speed, delivering smooth fade when CPU is available but otherwise skipping frames and more reliably filling in thumbnails within a quarter of a second</li>
+<li>canvases now draw their thumbnails with slightly less overhead</li>
+<li>increased database synchronous pragma to FULL to better safeguard against power loss during multiple-db commit--we'll see if it slows things down too much, and maybe add an option about it</li>
+<li>cleaned up and improved some client gui pubsubs</li>
+<li>made pubsub profile mode far less popup-spammy</li>
+<li>if a profile takes less than 20ms, it now won't be fully printed to the log</li>
+<li>tweaked some more server object maintenance code</li>
+<li>added 'changelog' link to help menu</li>
+<li>updated lz4 library and fixed some old deprecated calls</li>
+<li>misc serverside pubsub cleanup</li>
+<li>misc fixes</li>
+<li>misc refactoring</li>
+<li>misc code cleaning</li>
+</ul>
 <li><h3>version 259</h3></li>
 <ul>
 <li>planned out new networking engine and started the principal objects</li>
@@ -89,7 +89,7 @@
 <p>Here are two same quality duplicates:</p>
 <p><a href="dupe_exact_match_1.png"><img src="dupe_exact_match_1.png" /></a></p>
 <p><a href="dupe_exact_match_2.png"><img src="dupe_exact_match_2.png" /></a></p>
-<p>There is no obvious different between those two. The filesize is significantly different, so I suspect the smaller is a lossless png optimisation. Many of the big content providers--Facebook, Google, Clouflare--automatically 'optimise' the data that goes through their networks in order to save bandwidth. With pngs it is usually mostly harmless, but jpegs are often a slaughterhouse.</p>
+<p>There is no obvious difference between those two. The filesize is significantly different, so I suspect the smaller is a lossless png optimisation, but in the grand scheme of things, that doesn't matter so much. Many of the big content providers--Facebook, Google, Cloudflare--automatically 'optimise' the data that goes through their networks in order to save bandwidth. With pngs it is usually mostly harmless, but jpegs can be a slaughterhouse.</p>
 <p>Given the filesize, you might decide that these are actually a better/worse pair--but if the larger image had tags and was the 'canonical' version on most boorus, the decision might not be so clear. Sometimes you just want to keep both without a firm decision on which is best, in which case you can just set this 'same quality' status and move on.</p>
 <p>The default action on setting a same quality pair is to copy all <i>local tags</i> between the two files in both directions.</p>
 </li>
@@ -1013,7 +1013,7 @@ class ClientFilesManager( object ):

 self._bad_error_occured = True

-HydrusData.ShowText( 'A thumbnail for a file, ' + hash.encode( 'hex' ) + ', was missing. It has been regenerated from the original file, but this event could indicate hard drive corruption. Please check everything is ok. This error may be occuring for many files, but this message will only display once per boot. If you are recovering from a fractured database, you may wish to run \'database->maintenance->regenerate thumbnails\'.' )
+HydrusData.ShowText( 'A thumbnail for a file, ' + hash.encode( 'hex' ) + ', was missing. It has been regenerated from the original file, but this event could indicate hard drive corruption. Please check everything is ok. This error may be occuring for many files, but this message will only display once per boot. If you are recovering from a fractured database, you may wish to run \'database->regenerate->all thumbnails\'.' )
@@ -2014,34 +2014,32 @@ class ThumbnailCache( object ):

 last_paused = HydrusData.GetNowPrecise()

 start_time = HydrusData.GetNowPrecise()
 stop_time = start_time + 0.005 # a bit of a typical frame

 page_keys_to_rendered_medias = collections.defaultdict( list )

 while not HydrusData.TimeHasPassedPrecise( stop_time ):

 with self._lock:

 if len( self._waterfall_queue_random ) == 0:

-continue
+break

 else:

 result = self._waterfall_queue_random.pop( 0 )

 self._waterfall_queue_quick.discard( result )

 ( page_key, media ) = result

 try:

 self.GetThumbnail( media ) # to load it

 self._controller.pub( 'waterfall_thumbnail', page_key, media )

 if HydrusData.GetNowPrecise() - last_paused > 0.005:

 time.sleep( 0.00001 )

 last_paused = HydrusData.GetNowPrecise()

 page_keys_to_rendered_medias[ page_key ].append( media )

 except Exception as e:

@@ -2049,6 +2047,14 @@ class ThumbnailCache( object ):

 for ( page_key, rendered_medias ) in page_keys_to_rendered_medias.items():

 self._controller.pub( 'waterfall_thumbnails', page_key, rendered_medias )

 time.sleep( 0.00001 )

 class ServicesManager( object ):
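The hunk above replaces one `waterfall_thumbnail` pubsub per thumbnail with a single batched `waterfall_thumbnails` event per page, which is where the "significantly less overhead" changelog line comes from. The grouping step is a plain `defaultdict` accumulation; a minimal stand-alone sketch (`batch_by_page` is a hypothetical name, and plain strings stand in for the client's page keys and media objects):

```python
import collections

def batch_by_page(rendered):
    # Group (page_key, media) results so each page gets one pubsub
    # call instead of one call per thumbnail.
    page_keys_to_medias = collections.defaultdict(list)

    for (page_key, media) in rendered:
        page_keys_to_medias[page_key].append(media)

    return dict(page_keys_to_medias)
```

With this shape, the publisher loops over the dict once and fires one event per page, however many thumbnails were rendered in the slice.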
@@ -1057,9 +1057,9 @@ class Controller( HydrusController.HydrusController ):

 HydrusController.HydrusController.ShutdownView( self )

-def StartFileQuery( self, query_key, search_context ):
+def StartFileQuery( self, page_key, job_key, search_context ):

-self.CallToThread( self.THREADDoFileQuery, query_key, search_context )
+self.CallToThread( self.THREADDoFileQuery, page_key, job_key, search_context )

 def SystemBusy( self ):

@@ -1119,7 +1119,7 @@ class Controller( HydrusController.HydrusController ):

 return False

-def THREADDoFileQuery( self, query_key, search_context ):
+def THREADDoFileQuery( self, page_key, job_key, search_context ):

 QUERY_CHUNK_SIZE = 256

@@ -1129,20 +1129,23 @@ class Controller( HydrusController.HydrusController ):

 for sub_query_hash_ids in HydrusData.SplitListIntoChunks( query_hash_ids, QUERY_CHUNK_SIZE ):

-if query_key.IsCancelled(): return
+if job_key.IsCancelled():
+
+return

 more_media_results = self.Read( 'media_results_from_ids', sub_query_hash_ids )

 media_results.extend( more_media_results )

-self.pub( 'set_num_query_results', len( media_results ), len( query_hash_ids ) )
+self.pub( 'set_num_query_results', page_key, len( media_results ), len( query_hash_ids ) )

 self.WaitUntilPubSubsEmpty()

 search_context.SetComplete()

-self.pub( 'file_query_done', query_key, media_results )
+self.pub( 'file_query_done', page_key, job_key, media_results )

 def THREADBootEverything( self ):
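`THREADDoFileQuery` pages through the result ids in chunks of `QUERY_CHUNK_SIZE = 256`, checking the job key for cancellation between chunks so a closed page stops reading promptly. A sketch of the chunking helper, assuming `HydrusData.SplitListIntoChunks` has the usual fixed-size-slice semantics (the name here is mine):

```python
def split_list_into_chunks(xs, chunk_size):
    # Yield successive fixed-size slices; the last chunk may be short.
    for i in range(0, len(xs), chunk_size):
        yield xs[i:i + chunk_size]
```

Between each yielded chunk, the caller can test a cancellation flag and publish incremental progress, which is exactly the pattern in the hunk above.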
@@ -1509,19 +1509,6 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

 ClientGUIMenus.AppendMenuItem( self, menu, 'help', 'Open hydrus\'s local help in your web browser.', webbrowser.open, 'file://' + HC.HELP_DIR + '/index.html' )

-check_manager = ClientGUICommon.CheckboxManagerOptions( 'advanced_mode' )
-
-current_value = check_manager.GetCurrentValue()
-func = check_manager.Invert
-
-ClientGUIMenus.AppendMenuCheckItem( self, menu, 'advanced mode', 'Turn on advanced menu options and buttons.', current_value, func )
-
-dont_know = wx.Menu()
-
-ClientGUIMenus.AppendMenuItem( self, dont_know, 'just set up some repositories for me, please', 'This will add the hydrus dev\'s two repositories to your client.', self._AutoRepoSetup )
-
-ClientGUIMenus.AppendMenu( menu, dont_know, 'I don\'t know what I am doing' )

 links = wx.Menu()

 site = ClientGUIMenus.AppendMenuBitmapItem( self, links, 'site', 'Open hydrus\'s website, which is mostly a mirror of the local help.', CC.GlobalBMPs.file_repository, webbrowser.open, 'https://hydrusnetwork.github.io/hydrus/' )

@@ -1533,6 +1520,27 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

 ClientGUIMenus.AppendMenu( menu, links, 'links' )

+ClientGUIMenus.AppendMenuItem( self, menu, 'changelog', 'Open hydrus\'s local changelog in your web browser.', webbrowser.open, 'file://' + HC.HELP_DIR + '/changelog.html' )
+
+ClientGUIMenus.AppendSeparator( menu )
+
+dont_know = wx.Menu()
+
+ClientGUIMenus.AppendMenuItem( self, dont_know, 'just set up some repositories for me, please', 'This will add the hydrus dev\'s two repositories to your client.', self._AutoRepoSetup )
+
+ClientGUIMenus.AppendMenu( menu, dont_know, 'I don\'t know what I am doing' )
+
+ClientGUIMenus.AppendSeparator( menu )
+
+check_manager = ClientGUICommon.CheckboxManagerOptions( 'advanced_mode' )
+
+current_value = check_manager.GetCurrentValue()
+func = check_manager.Invert
+
+ClientGUIMenus.AppendMenuCheckItem( self, menu, 'advanced mode', 'Turn on advanced menu options and buttons.', current_value, func )
+
+ClientGUIMenus.AppendSeparator( menu )

 debug = wx.Menu()

 ClientGUIMenus.AppendMenuItem( self, debug, 'make some popups', 'Throw some varied popups at the message manager, just to check it is working.', self._DebugMakeSomePopups )

@@ -1551,6 +1559,8 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):

 ClientGUIMenus.AppendMenu( menu, debug, 'debug' )

+ClientGUIMenus.AppendSeparator( menu )

 ClientGUIMenus.AppendMenuItem( self, menu, 'hardcoded shortcuts', 'Review some currently hardcoded shortcuts.', wx.MessageBox, CC.SHORTCUT_HELP )
 ClientGUIMenus.AppendMenuItem( self, menu, 'about', 'See this client\'s version and other information.', self._AboutWindow )
@@ -3366,7 +3366,7 @@ class ManagementPanelQuery( ManagementPanel ):

 self._search_enabled = self._management_controller.GetVariable( 'search_enabled' )

-self._query_key = ClientThreading.JobKey( cancellable = True )
+self._query_job_key = ClientThreading.JobKey( cancellable = True )

 initial_predicates = file_search_context.GetPredicates()

@@ -3410,9 +3410,9 @@ class ManagementPanelQuery( ManagementPanel ):

 self._controller.ResetIdleTimer()

-self._query_key.Cancel()
+self._query_job_key.Cancel()

-self._query_key = ClientThreading.JobKey()
+self._query_job_key = ClientThreading.JobKey()

 if self._management_controller.GetVariable( 'search_enabled' ) and self._management_controller.GetVariable( 'synchronised' ):

@@ -3430,7 +3430,7 @@ class ManagementPanelQuery( ManagementPanel ):

 if len( current_predicates ) > 0:

-self._controller.StartFileQuery( self._query_key, file_search_context )
+self._controller.StartFileQuery( self._page_key, self._query_job_key, file_search_context )

 panel = ClientGUIMedia.MediaPanelLoading( self._page, self._page_key, file_service_key )

@@ -3469,9 +3469,12 @@ class ManagementPanelQuery( ManagementPanel ):

 sizer.AddF( tags_box, CC.FLAGS_EXPAND_BOTH_WAYS )

-def AddMediaResultsFromQuery( self, query_key, media_results ):
+def AddMediaResultsFromQuery( self, query_job_key, media_results ):

-if query_key == self._query_key: self._controller.pub( 'add_media_results', self._page_key, media_results, append = False )
+if query_job_key == self._query_job_key:
+
+self._controller.pub( 'add_media_results', self._page_key, media_results, append = False )

 def ChangeFileServicePubsub( self, page_key, service_key ):

@@ -3486,7 +3489,7 @@ class ManagementPanelQuery( ManagementPanel ):

 ManagementPanel.CleanBeforeDestroy( self )

-self._query_key.Cancel()
+self._query_job_key.Cancel()

 def GetPredicates( self ):

@@ -3528,9 +3531,9 @@ class ManagementPanelQuery( ManagementPanel ):

-def ShowQuery( self, query_key, media_results ):
+def ShowQuery( self, page_key, query_job_key, media_results ):

-if query_key == self._query_key:
+if page_key == self._page_key and query_job_key == self._query_job_key:

 current_predicates = self._current_predicates_box.GetPredicates()
@@ -1639,9 +1639,14 @@ class MediaPanelLoading( MediaPanel ):

 return s

-def GetSortedMedia( self ): return []
+def GetSortedMedia( self ):
+
+return []

-def SetNumQueryResults( self, current, max ):
+def SetNumQueryResults( self, page_key, current, max ):
+
+if page_key == self._page_key:

 self._current = current

@@ -1650,6 +1655,7 @@ class MediaPanelLoading( MediaPanel ):

 self._PublishSelectionChange()

 class MediaPanelThumbnails( MediaPanel ):

 def __init__( self, parent, page_key, file_service_key, media_results ):

@@ -1692,7 +1698,7 @@ class MediaPanelThumbnails( MediaPanel ):

 HG.client_controller.sub( self, 'NewThumbnails', 'new_thumbnails' )
 HG.client_controller.sub( self, 'ThumbnailsResized', 'thumbnail_resize' )
 HG.client_controller.sub( self, 'RefreshAcceleratorTable', 'notify_new_options' )
-HG.client_controller.sub( self, 'WaterfallThumbnail', 'waterfall_thumbnail' )
+HG.client_controller.sub( self, 'WaterfallThumbnails', 'waterfall_thumbnails' )

 def _CalculateVisiblePageIndices( self ):
|
|||
|
||||
|
||||
|
||||
def _FadeThumbnail( self, thumbnail ):
|
||||
def _FadeThumbnails( self, thumbnails ):
|
||||
|
||||
now_precise = HydrusData.GetNowPrecise()
|
||||
|
||||
for thumbnail in thumbnails:
|
||||
|
||||
try:
|
||||
|
||||
|
@ -1869,20 +1879,30 @@ class MediaPanelThumbnails( MediaPanel ):
|
|||
try: image.InitAlpha()
|
||||
except: pass
|
||||
|
||||
image = image.AdjustChannels( 1, 1, 1, 0.25 )
|
||||
image = image.AdjustChannels( 1, 1, 1, 0.20 )
|
||||
|
||||
alpha_bmp = wx.BitmapFromImage( image, 32 )
|
||||
|
||||
image.Destroy()
|
||||
|
||||
self._thumbnails_being_faded_in[ hash ] = ( bmp, alpha_bmp, thumbnail_index, thumbnail, 0 )
|
||||
|
||||
if not self._timer_animation.IsRunning(): self._timer_animation.Start( 1, wx.TIMER_ONE_SHOT )
|
||||
self._thumbnails_being_faded_in[ hash ] = ( bmp, alpha_bmp, thumbnail_index, thumbnail, now_precise, 0 )
|
||||
|
||||
|
||||
def _GenerateMediaCollection( self, media_results ): return ThumbnailMediaCollection( self._file_service_key, media_results )
|
||||
if not self._timer_animation.IsRunning():
|
||||
|
||||
self._timer_animation.Start( 1, wx.TIMER_ONE_SHOT )
|
||||
|
||||
|
||||
|
||||
def _GenerateMediaCollection( self, media_results ):
|
||||
|
||||
return ThumbnailMediaCollection( self._file_service_key, media_results )
|
||||
|
||||
|
||||
def _GenerateMediaSingleton( self, media_result ):
|
||||
|
||||
return ThumbnailMediaSingleton( self._file_service_key, media_result )
|
||||
|
||||
def _GenerateMediaSingleton( self, media_result ): return ThumbnailMediaSingleton( self._file_service_key, media_result )
|
||||
|
||||
def _GetMediaCoordinates( self, media ):
|
||||
|
||||
|
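The fade alpha drops from 0.25 to 0.20 here, and the timer below can now draw the alpha bitmap several times in one tick. That works because compositing a constant-alpha bitmap over itself converges geometrically to full opacity: after n draws the cumulative opacity is 1 - (1 - alpha)^n. A small sketch of that arithmetic (the function name is mine, not the client's):

```python
def cumulative_opacity(alpha, times_drawn):
    # Each pass of a constant-alpha bitmap leaves (1 - alpha) of the
    # background showing through, so n passes leave (1 - alpha)**n.
    return 1.0 - (1.0 - alpha) ** times_drawn
```

At alpha = 0.20, fifteen draws (the `NUM_FRAMES_TO_FILL_IN` in the timer hunk below) already exceed 96% opacity, which is why the original bitmap can be swapped in at that point without a visible pop.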
@@ -2058,13 +2078,14 @@ class MediaPanelThumbnails( MediaPanel ):

 thumbnail_cache = HG.client_controller.GetCache( 'thumbnail' )

+thumbnails_to_render_now = []
 thumbnails_to_render_later = []

 for thumbnail in visible_thumbnails:

 if thumbnail_cache.HasThumbnailCached( thumbnail ):

-self._FadeThumbnail( thumbnail )
+thumbnails_to_render_now.append( thumbnail )

 else:

@@ -2072,6 +2093,11 @@ class MediaPanelThumbnails( MediaPanel ):

+if len( thumbnails_to_render_now ) > 0:
+
+self._FadeThumbnails( thumbnails_to_render_now )

 if len( thumbnails_to_render_later ) > 0:

 HG.client_controller.GetCache( 'thumbnail' ).Waterfall( self._page_key, thumbnails_to_render_later )

@@ -2217,7 +2243,7 @@ class MediaPanelThumbnails( MediaPanel ):

 if hash in self._thumbnails_being_faded_in:

-( bmp, alpha_bmp, thumbnail_index, thumbnail, num_frames ) = self._thumbnails_being_faded_in[ hash ]
+( bmp, alpha_bmp, thumbnail_index, thumbnail, animation_started, num_frames ) = self._thumbnails_being_faded_in[ hash ]

 del self._thumbnails_being_faded_in[ hash ]

@@ -2255,10 +2281,7 @@ class MediaPanelThumbnails( MediaPanel ):

 self._RecalculateVirtualSize()

-for thumbnail in thumbnails:
-
-self._FadeThumbnail( thumbnail )
+self._FadeThumbnails( thumbnails )

 if len( self._selected_media ) == 0:
@@ -3489,7 +3512,11 @@ class MediaPanelThumbnails( MediaPanel ):

 try:

-started = HydrusData.GetNowPrecise()
+FRAME_DURATION = 1.0 / 60
+NUM_FRAMES_TO_FILL_IN = 15
+
+loop_started = HydrusData.GetNowPrecise()
+loop_should_break_time = loop_started + ( FRAME_DURATION / 2 )

 ( thumbnail_span_width, thumbnail_span_height ) = self._GetThumbnailSpanDimensions()

@@ -3501,16 +3528,26 @@ class MediaPanelThumbnails( MediaPanel ):

 for hash in hashes:

-( original_bmp, alpha_bmp, thumbnail_index, thumbnail, num_frames_rendered ) = self._thumbnails_being_faded_in[ hash ]
+( original_bmp, alpha_bmp, thumbnail_index, thumbnail, animation_started, num_frames_rendered ) = self._thumbnails_being_faded_in[ hash ]

-num_frames_rendered += 1
+num_frames_supposed_to_be_rendered = int( ( loop_started - animation_started ) / FRAME_DURATION )

+num_frames_to_render = num_frames_supposed_to_be_rendered - num_frames_rendered
+
+if num_frames_to_render > 0:

 delete_entry = False

-try: expected_thumbnail = self._sorted_media[ thumbnail_index ]
-except: expected_thumbnail = None
+try:
+
+expected_thumbnail = self._sorted_media[ thumbnail_index ]
+
+except:
+
+expected_thumbnail = None

 page_index = self._GetPageIndexFromThumbnailIndex( thumbnail_index )

 if expected_thumbnail != thumbnail:

@@ -3522,7 +3559,9 @@ class MediaPanelThumbnails( MediaPanel ):

 else:

-if num_frames_rendered >= 9:
+times_to_draw = 1
+
+if num_frames_supposed_to_be_rendered >= NUM_FRAMES_TO_FILL_IN:

 bmp_to_use = original_bmp

@@ -3530,9 +3569,13 @@ class MediaPanelThumbnails( MediaPanel ):

 else:

+times_to_draw = num_frames_to_render

 bmp_to_use = alpha_bmp

-self._thumbnails_being_faded_in[ hash ] = ( original_bmp, alpha_bmp, thumbnail_index, thumbnail, num_frames_rendered )
+num_frames_rendered += times_to_draw
+
+self._thumbnails_being_faded_in[ hash ] = ( original_bmp, alpha_bmp, thumbnail_index, thumbnail, animation_started, num_frames_rendered )

 thumbnail_col = thumbnail_index % self._num_columns
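The reworked fade bookkeeping stores `animation_started` in each entry and computes how many fade frames are owed since then, instead of incrementing a counter once per timer tick. That is what lets the fade skip frames under load and still finish on schedule. A sketch of the catch-up arithmetic, mirroring `FRAME_DURATION` and the subtraction above (the function name is mine):

```python
FRAME_DURATION = 1.0 / 60

def frames_to_draw(now, animation_started, num_frames_rendered):
    # How many fade frames should have been shown by 'now', minus how
    # many actually were; the timer draws the difference in one go.
    num_frames_supposed_to_be_rendered = int((now - animation_started) / FRAME_DURATION)
    return max(0, num_frames_supposed_to_be_rendered - num_frames_rendered)
```

When the timer fires late, the difference is larger than one, so the alpha bitmap is drawn several times in a single tick and the fade stays wall-clock accurate.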
@@ -3554,9 +3597,12 @@ class MediaPanelThumbnails( MediaPanel ):

 dc = dcs[ page_index ]

+for i in range( times_to_draw ):
+
 dc.DrawBitmap( bmp_to_use, x, y, True )

 if delete_entry:

 del self._thumbnails_being_faded_in[ hash ]

@@ -3565,7 +3611,8 @@ class MediaPanelThumbnails( MediaPanel ):

 alpha_bmp.Destroy()

-if HydrusData.TimeHasPassedPrecise( started + 0.016 ):
+if HydrusData.TimeHasPassedPrecise( loop_should_break_time ):

 break

@@ -3573,15 +3620,7 @@ class MediaPanelThumbnails( MediaPanel ):

 if len( self._thumbnails_being_faded_in ) > 0:

-finished = HydrusData.GetNowPrecise()
-
-time_this_took_in_ms = ( finished - started ) * 1000
-
-ms_to_wait = int( round( 16.7 - time_this_took_in_ms ) )
-
-ms_to_wait = max( 1, ms_to_wait )
-
-self._timer_animation.Start( ms_to_wait, wx.TIMER_ONE_SHOT )
+self._timer_animation.Start( 1, wx.TIMER_ONE_SHOT )

 self.Refresh()

@@ -3598,11 +3637,11 @@ class MediaPanelThumbnails( MediaPanel ):

-def WaterfallThumbnail( self, page_key, thumbnail ):
+def WaterfallThumbnails( self, page_key, thumbnails ):

 if self._page_key == page_key:

-self._FadeThumbnail( thumbnail )
+self._FadeThumbnails( thumbnails )
@@ -8,7 +8,7 @@ import HydrusImageHandling

 import HydrusGlobals as HG
 import HydrusThreading
 import HydrusVideoHandling
-import lz4
+import lz4.block
 import os
 import threading
 import time

@@ -560,7 +560,7 @@ class HydrusBitmap( object ):

 if self._compressed:

-self._data = lz4.dumps( data )
+self._data = lz4.block.compress( data )

 else:

@@ -575,7 +575,7 @@ class HydrusBitmap( object ):

 if self._compressed:

-return lz4.loads( self._data )
+return lz4.block.decompress( self._data )

 else:
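`lz4.dumps` and `lz4.loads` were deprecated and later removed from the python-lz4 package; `lz4.block.compress` and `lz4.block.decompress` are the replacements this commit migrates to (the block API stores the uncompressed size in a header by default, so the round trip needs no extra arguments). A round-trip sketch that falls back to zlib so it stays runnable without python-lz4 installed:

```python
try:
    import lz4.block
    compress, decompress = lz4.block.compress, lz4.block.decompress
except ImportError:
    # python-lz4 not available: zlib has the same compress/decompress
    # shape, so the sketch still demonstrates the round trip.
    import zlib
    compress, decompress = zlib.compress, zlib.decompress

data = b'thumbnail bytes ' * 100
assert decompress(compress(data)) == data
```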
@@ -49,7 +49,7 @@ options = {}

 # Misc

 NETWORK_VERSION = 18
-SOFTWARE_VERSION = 259
+SOFTWARE_VERSION = 260

 UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -281,8 +281,14 @@ class HydrusController( object ):

 self._currently_doing_pubsub = True

-try: self._pubsub.Process()
-finally: self._currently_doing_pubsub = False
+try:
+
+self._pubsub.Process()
+
+finally:
+
+self._currently_doing_pubsub = False

 def Read( self, action, *args, **kwargs ):
@@ -76,7 +76,7 @@ def SetupDBCreatePragma( c, no_wal = False ):

 c.execute( 'PRAGMA journal_mode = WAL;' )

-c.execute( 'PRAGMA synchronous = 1;' )
+c.execute( 'PRAGMA synchronous = 2;' )

 def VacuumDB( db_path ):

@@ -444,7 +444,8 @@ class HydrusDB( object ):

 self._c.execute( 'PRAGMA ' + db_name + '.journal_mode = WAL;' )

-self._c.execute( 'PRAGMA ' + db_name + '.synchronous = 1;' )
+# This was 1 previously, but some interrupted commits were rolling back inconsistently across the attached dbs
+self._c.execute( 'PRAGMA ' + db_name + '.synchronous = 2;' )

 try:
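`synchronous = 2` is SQLite's FULL level: the engine fsyncs at the critical moments of every commit rather than leaving the final flush to the OS as NORMAL (the previous `1`) does, which is the safeguard against power loss the changelog mentions. A minimal sketch of setting and reading the pragma back with the standard `sqlite3` module:

```python
import sqlite3

db = sqlite3.connect(':memory:')

# 0 = OFF, 1 = NORMAL, 2 = FULL. FULL is the slowest but safest.
db.execute('PRAGMA synchronous = 2;')

level = db.execute('PRAGMA synchronous;').fetchone()[0]
```

The same statement works per attached database (`PRAGMA somedb.synchronous = 2;`), which is what the hunk above does for each of the client's attached db files.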
@@ -661,12 +661,20 @@ def GetHideTerminalSubprocessStartupInfo():

 return startupinfo

-def GetNow(): return int( time.time() )
+def GetNow():
+
+return int( time.time() )

 def GetNowPrecise():

-if HC.PLATFORM_WINDOWS: return time.clock()
-else: return time.time()
+if HC.PLATFORM_WINDOWS:
+
+return time.clock()
+
+else:
+
+return time.time()

 def GetSiblingProcessPorts( db_path, instance ):
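The platform branch exists because on Python 2, `time.clock()` was the high-resolution timer on Windows while `time.time()` was fine elsewhere. `time.clock` is gone in modern Python; `time.perf_counter()` is the cross-platform high-resolution clock. A present-day sketch of the same two helpers (the names mirror the hydrus ones, the `perf_counter` swap is my assumption about the modern equivalent, not this commit's code):

```python
import time

def get_now_precise():
    # Python 2 needed time.clock() on Windows; perf_counter() is the
    # portable monotonic high-resolution clock in Python 3.
    return time.perf_counter()

def time_has_passed_precise(timestamp):
    return get_now_precise() > timestamp
```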
@@ -884,12 +892,18 @@ def PrintException( e, do_wait = True ):

 ShowException = PrintException

-def Profile( summary, code, g, l ):
+def Profile( summary, code, g, l, min_duration_ms = 20 ):

 profile = cProfile.Profile()

+started = GetNowPrecise()

 profile.runctx( code, g, l )

+finished = GetNowPrecise()

+if finished - started > min_duration_ms / 1000.0:

 output = cStringIO.StringIO()

 stats = pstats.Stats( profile, stream = output )

@@ -910,7 +924,15 @@ def Profile( summary, code, g, l ):

 output.seek( 0 )

-HG.controller.PrintProfile( summary, output.read() )
+details = output.read()
+
+else:
+
+details = 'It took less than ' + ConvertIntToPrettyString( min_duration_ms ) + 'ms.' + os.linesep * 2
+
+HG.controller.PrintProfile( summary, details )

 def RandomPop( population ):
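`Profile` now times the run and only emits the full pstats dump when it exceeded `min_duration_ms`, which is what makes profile mode "far less popup-spammy". A self-contained sketch of that gating (`profile` here is a hypothetical stand-in that returns the text; the real function hands it to `HG.controller.PrintProfile`):

```python
import cProfile
import io
import pstats
import time

def profile(code, g, l, min_duration_ms=20):
    # Run `code` under cProfile; return the full stats text only when
    # the run took at least min_duration_ms, else a one-line summary.
    pr = cProfile.Profile()

    started = time.perf_counter()
    pr.runctx(code, g, l)
    finished = time.perf_counter()

    if (finished - started) * 1000.0 >= min_duration_ms:
        out = io.StringIO()
        pstats.Stats(pr, stream=out).sort_stats('cumulative').print_stats()
        return out.getvalue()

    return 'It took less than %dms.' % min_duration_ms
```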
@@ -7,9 +7,6 @@ import socket

 import subprocess
 import threading
 import traceback
-from twisted.internet import reactor, defer
-from twisted.internet.threads import deferToThread
-from twisted.python import log

 # new stuff starts here
@@ -676,7 +676,7 @@ class Account( object ):

 dictionary[ 'bandwidth_tracker' ] = bandwidth_tracker

-dictionary = HydrusSerialisable.CreateFromSerialisableTuple( dictionary.GetSerialisableTuple() )
+dictionary = dictionary.Duplicate()

 return ( account_key, account_type, created, expires, dictionary )

@@ -1983,13 +1983,21 @@ class ServerService( object ):

 self._dirty = True

+def BandwidthOK( self ):
+
+with self._lock:
+
+return True

 def Duplicate( self ):

 with self._lock:

 dictionary = self._GetSerialisableDictionary()

-dictionary = HydrusSerialisable.CreateFromSerialisableTuple( dictionary.GetSerialisableTuple() )
+dictionary = dictionary.Duplicate()

 duplicate = GenerateService( self._service_key, self._service_type, self._name, self._port, dictionary )

@@ -2045,14 +2053,6 @@ class ServerService( object ):

-def BandwidthOK( self ):
-
-with self._lock:
-
-return True

 def RequestMade( self, num_bytes ):

 with self._lock:

@@ -2109,7 +2109,7 @@ class ServerService( object ):

 dictionary = self._GetSerialisableDictionary()

-dictionary = HydrusSerialisable.CreateFromSerialisableTuple( dictionary.GetSerialisableTuple() )
+dictionary = dictionary.Duplicate()

 return ( self._service_key, self._service_type, self._name, self._port, dictionary )
@@ -103,11 +103,9 @@ class HydrusPubSub( object ):

 # do this _outside_ the lock, lol

-for callable in callables:
-
 if HG.pubsub_profile_mode:

-summary = 'Profiling ' + topic + ': ' + repr( callable )
+summary = 'Profiling ' + HydrusData.ConvertIntToPrettyString( len( callables ) ) + ' x ' + topic

 if topic == 'message':

@@ -118,10 +116,15 @@ class HydrusPubSub( object ):

 HydrusData.ShowText( summary )

+for callable in callables:
+
 HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals() )

 else:

+for callable in callables:
+
 try:

 callable( *args, **kwargs )

@@ -151,7 +154,10 @@ class HydrusPubSub( object ):

 callables = self._GetCallables( topic )

-for callable in callables: callable( *args, **kwargs )
+for callable in callables:
+
+callable( *args, **kwargs )

 def sub( self, object, method_name, topic ):
@@ -1,5 +1,5 @@

 import json
-import lz4
+import lz4.block
 import zlib

 SERIALISABLE_TYPE_BASE = 0

@@ -59,7 +59,7 @@ def CreateFromNetworkString( network_string ):

 except zlib.error:

-obj_string = lz4.loads( network_string )
+obj_string = lz4.block.decompress( network_string )

 return CreateFromString( obj_string )
@@ -12,7 +12,6 @@ import threading

 import time
 import traceback
 import urllib
 from twisted.internet.threads import deferToThread
 import HydrusData
 import HydrusGlobals as HG

@@ -272,7 +272,16 @@ def Hydrusffmpeg_parse_infos(filename, print_infos=False, count_frames_manually

-infos = proc.stderr.read().decode('utf8')
+raw_infos = proc.stderr.read()
+
+try:
+
+infos = raw_infos.decode( 'utf8' )
+
+except UnicodeDecodeError:
+
+infos = raw_infos

 proc.wait()

@@ -389,26 +398,24 @@ def Hydrusffmpeg_parse_infos(filename, print_infos=False, count_frames_manually

 else:

 # get the frame rate

 try:

 match = re.search("( [0-9]*.| )[0-9]* tbr", line)

 if match is not None:

 fps = line[match.start():match.end()].split(' ')[1]

 if fps.endswith( 'k' ):

 raise Exception()

 result['video_fps'] = float( fps )

-except:
+if match is None or fps.endswith( 'k' ):

 match = re.search("( [0-9]*.| )[0-9]* fps", line)

 if match is not None:

 fps = line[match.start():match.end()].split(' ')[1]

-if fps.endswith( 'k' ) or float( fps ) > 60:
+if match is None or fps.endswith( 'k' ) or float( fps ) > 60:

 if not doing_manual_frame_count:

@@ -419,15 +426,16 @@ def Hydrusffmpeg_parse_infos(filename, print_infos=False, count_frames_manually

 raise Exception( 'Could not determine framerate!' )

 else:

 result['video_fps'] = float( fps )

 num_frames = result['duration'] * result['video_fps']

-if num_frames != int( num_frames ): num_frames += 1 # rounding up
+if num_frames != int( num_frames ):
+
+return Hydrusffmpeg_parse_infos( filename, count_frames_manually = True )

 result['video_nframes'] = int( num_frames )
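The fps parsing above prefers ffmpeg's 'tbr' figure and falls back to the 'fps' figure, rejecting 'k'-suffixed values (which are really tbn/tbc-style timebases). A stand-alone sketch using the same regexes as the hunk, on a synthetic ffmpeg stderr line (`parse_fps` is my name for it, not the client's):

```python
import re

def parse_fps(line):
    # Prefer 'tbr', then fall back to 'fps'; reject 'k'-suffixed
    # values such as '90k', which are timebases, not frame rates.
    for unit in ('tbr', 'fps'):
        match = re.search(r'( [0-9]*.| )[0-9]* ' + unit, line)
        if match is None:
            continue
        fps = line[match.start():match.end()].split(' ')[1]
        if fps.endswith('k'):
            continue
        return float(fps)
    return None
```

Note this sketch returns the tbr figure even when an fps figure is present, matching the preference order in the hunk.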
@@ -346,7 +346,10 @@ class Controller( HydrusController.HydrusController ):

-def JustWokeFromSleep( self ): return False
+def JustWokeFromSleep( self ):
+
+return False

 def MaintainDB( self, stop_time = None ):

@@ -435,6 +438,8 @@ class Controller( HydrusController.HydrusController ):

 def SetServices( self, services ):

+# doesn't need the dirty_object_lock because the caller takes it

 self._services = services

 [ self._admin_service ] = [ service for service in self._services if service.GetServiceType() == HC.SERVER_ADMIN ]

@@ -494,7 +499,3 @@ class Controller( HydrusController.HydrusController ):

-def UpdateAccounts( self, service_key, accounts ):
-
-self._server_session_manager.UpdateAccounts( service_key, accounts )
|
@@ -1136,7 +1136,7 @@ class DB( HydrusDB.HydrusDB ):
         subject_account_keys = [ subject_account.GetAccountKey() for subject_account in subject_accounts ]
-        HG.server_controller.pub( 'update_session_accounts', service_key, subject_account_keys )
+        self.pub_after_commit( 'update_session_accounts', service_key, subject_account_keys )
     def _ModifyAccountTypes( self, service_key, account, account_types, deletee_account_type_keys_to_new_account_type_keys ):
@@ -1193,7 +1193,7 @@ class DB( HydrusDB.HydrusDB ):
         self._RefreshAccountTypeCache( service_id )
-        HG.server_controller.pub( 'update_all_session_accounts', service_key )
+        self.pub_after_commit( 'update_all_session_accounts', service_key )
     def _ModifyServices( self, account, services ):
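Both hunks above swap an immediate `HG.server_controller.pub( ... )` for `self.pub_after_commit( ... )`, so session caches only hear about account changes once the transaction is actually durable. A minimal sketch of that queue-until-commit pattern (the class and method names are illustrative, not the real HydrusDB API):

```python
class AfterCommitPubber( object ):
    
    # queue pubsub messages during a transaction and publish only after commit,
    # so subscribers never act on state that might yet be rolled back
    
    def __init__( self, publish_callable ):
        
        self._publish = publish_callable
        self._pending = []
        
    
    def pub_after_commit( self, topic, *args ):
        
        self._pending.append( ( topic, args ) )
        
    
    def NotifyCommitted( self ):
        
        pending = self._pending
        
        self._pending = []
        
        for ( topic, args ) in pending:
            
            self._publish( topic, *args )
```

On rollback the pending list would simply be cleared instead of flushed, which is the other half of the safety argument for deferring.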
@@ -3160,177 +3160,6 @@ class DB( HydrusDB.HydrusDB ):
         HydrusData.Print( 'The server is updating to version ' + str( version + 1 ) )
-        if version == 198:
-            HydrusData.Print( 'exporting mappings to external db' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.mappings ( service_id INTEGER, tag_id INTEGER, hash_id INTEGER, account_id INTEGER, timestamp INTEGER, PRIMARY KEY ( service_id, tag_id, hash_id ) );' )
-            self._c.execute( 'INSERT INTO external_mappings.mappings SELECT * FROM main.mappings;' )
-            self._c.execute( 'DROP TABLE main.mappings;' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.mapping_petitions ( service_id INTEGER, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY ( service_id, account_id, tag_id, hash_id, status ) );' )
-            self._c.execute( 'INSERT INTO external_mappings.mapping_petitions SELECT * FROM main.mapping_petitions;' )
-            self._c.execute( 'DROP TABLE main.mapping_petitions;' )
-        if version == 200:
-            HydrusData.Print( 'exporting hashes to external db' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
-            self._c.execute( 'INSERT INTO external_master.hashes SELECT * FROM main.hashes;' )
-            self._c.execute( 'DROP TABLE main.hashes;' )
-            HydrusData.Print( 'exporting tags to external db' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
-            self._c.execute( 'INSERT INTO external_master.tags SELECT * FROM main.tags;' )
-            self._c.execute( 'DROP TABLE main.tags;' )
-            #
-            HydrusData.Print( 'compacting mappings tables' )
-            self._c.execute( 'DROP INDEX mapping_petitions_service_id_account_id_reason_id_tag_id_index;' )
-            self._c.execute( 'DROP INDEX mapping_petitions_service_id_tag_id_hash_id_index;' )
-            self._c.execute( 'DROP INDEX mapping_petitions_service_id_status_index;' )
-            self._c.execute( 'DROP INDEX mapping_petitions_service_id_timestamp_index;' )
-            self._c.execute( 'DROP INDEX mappings_account_id_index;' )
-            self._c.execute( 'DROP INDEX mappings_timestamp_index;' )
-            self._c.execute( 'ALTER TABLE mapping_petitions RENAME TO mapping_petitions_old;' )
-            self._c.execute( 'ALTER TABLE mappings RENAME TO mappings_old;' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.mapping_petitions ( service_id INTEGER, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY ( service_id, account_id, tag_id, hash_id, status ) ) WITHOUT ROWID;' )
-            self._c.execute( 'INSERT INTO mapping_petitions SELECT * FROM mapping_petitions_old;' )
-            self._c.execute( 'DROP TABLE mapping_petitions_old;' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.mappings ( service_id INTEGER, tag_id INTEGER, hash_id INTEGER, account_id INTEGER, timestamp INTEGER, PRIMARY KEY ( service_id, tag_id, hash_id ) ) WITHOUT ROWID;' )
-            self._c.execute( 'INSERT INTO mappings SELECT * FROM mappings_old;' )
-            self._c.execute( 'DROP TABLE mappings_old;' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mapping_petitions_service_id_account_id_reason_id_tag_id_index ON mapping_petitions ( service_id, account_id, reason_id, tag_id );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mapping_petitions_service_id_tag_id_hash_id_index ON mapping_petitions ( service_id, tag_id, hash_id );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mapping_petitions_service_id_status_index ON mapping_petitions ( service_id, status );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mapping_petitions_service_id_timestamp_index ON mapping_petitions ( service_id, timestamp );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mappings_account_id_index ON mappings ( account_id );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.mappings_timestamp_index ON mappings ( timestamp );' )
-            #
-            self._Commit()
-            self._CloseDBCursor()
-            try:
-                for filename in self._db_filenames.values():
-                    HydrusData.Print( 'vacuuming ' + filename )
-                    db_path = os.path.join( self._db_dir, filename )
-                    if HydrusDB.CanVacuum( db_path ):
-                        HydrusDB.VacuumDB( db_path )
-            finally:
-                self._InitDBCursor()
-                self._BeginImmediate()
-        if version == 202:
-            self._c.execute( 'DELETE FROM analyze_timestamps;' )
-        if version == 207:
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.petitioned_mappings ( service_id INTEGER, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, tag_id, hash_id, account_id ) ) WITHOUT ROWID;' )
-            self._c.execute( 'CREATE TABLE IF NOT EXISTS external_mappings.deleted_mappings ( service_id INTEGER, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, PRIMARY KEY ( service_id, tag_id, hash_id ) ) WITHOUT ROWID;' )
-            #
-            self._c.execute( 'INSERT INTO petitioned_mappings ( service_id, account_id, tag_id, hash_id, reason_id ) SELECT service_id, account_id, tag_id, hash_id, reason_id FROM mapping_petitions WHERE status = ?;', ( HC.CONTENT_STATUS_PETITIONED, ) )
-            self._c.execute( 'INSERT INTO deleted_mappings ( service_id, account_id, tag_id, hash_id, reason_id, timestamp ) SELECT service_id, account_id, tag_id, hash_id, reason_id, timestamp FROM mapping_petitions WHERE status = ?;', ( HC.CONTENT_STATUS_DELETED, ) )
-            #
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.petitioned_mappings_service_id_account_id_reason_id_tag_id_index ON petitioned_mappings ( service_id, account_id, reason_id, tag_id );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.deleted_mappings_service_id_account_id_index ON deleted_mappings ( service_id, account_id );' )
-            self._c.execute( 'CREATE INDEX IF NOT EXISTS external_mappings.deleted_mappings_service_id_timestamp_index ON deleted_mappings ( service_id, timestamp );' )
-            #
-            self._c.execute( 'DROP TABLE mapping_petitions;' )
-        if version == 208:
-            old_thumbnail_dir = os.path.join( self._db_dir, 'server_thumbnails' )
-            for prefix in HydrusData.IterateHexPrefixes():
-                HydrusData.Print( 'moving thumbnails: ' + prefix )
-                source_dir = os.path.join( old_thumbnail_dir, prefix )
-                dest_dir = os.path.join( self._files_dir, prefix )
-                source_filenames = os.listdir( source_dir )
-                for source_filename in source_filenames:
-                    source_path = os.path.join( source_dir, source_filename )
-                    dest_filename = source_filename + '.thumbnail'
-                    dest_path = os.path.join( dest_dir, dest_filename )
-                    try:
-                        HydrusPaths.MergeFile( source_path, dest_path )
-                    except:
-                        HydrusData.Print( 'Problem moving thumbnail from ' + source_path + ' to ' + dest_path + '.' )
-                        HydrusData.Print( 'Abandoning thumbnail transfer for ' + source_dir + '.' )
-                        break
-            try:
-                HydrusPaths.DeletePath( old_thumbnail_dir )
-            except:
-                HydrusData.Print( 'Could not delete old thumbnail directory at ' + old_thumbnail_dir )
-        if version == 212:
-            for prefix in HydrusData.IterateHexPrefixes():
@@ -219,11 +219,13 @@ class HydrusResourceRestrictedAccountModification( HydrusResourceRestricted ):
         kwargs = request.hydrus_args # for things like expires, title, and so on
+        server_session_manager = HG.server_controller.GetServerSessionManager()
+        
+        with HG.dirty_object_lock:
+            
             HG.server_controller.WriteSynchronous( 'account_modification', self._service_key, request.hydrus_account, action, subject_accounts, **kwargs )
-        HG.server_controller.UpdateAccounts( self._service_key, subject_accounts )
+            server_session_manager.UpdateAccounts( self._service_key, subject_accounts )
         response_context = HydrusServerResources.ResponseContext( 200 )
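The request handler above now holds `HG.dirty_object_lock` across both the synchronous write and the session-cache refresh, so no concurrent reader can observe the database and the in-memory accounts out of step. A toy version of that locking discipline (all names here are illustrative stand-ins):

```python
import threading

dirty_object_lock = threading.Lock() # stand-in for HG.dirty_object_lock

class SessionCache( object ):
    
    def __init__( self ):
        
        self._accounts = {}
        
    
    def UpdateAccounts( self, service_key, accounts ):
        
        self._accounts[ service_key ] = list( accounts )
        
    

def modify_accounts( write_synchronous, session_cache, service_key, accounts ):
    
    # hold the lock across both steps so no reader sees the db committed
    # while the session cache still holds stale account objects
    with dirty_object_lock:
        
        write_synchronous( 'account_modification', service_key, accounts )
        
        session_cache.UpdateAccounts( service_key, accounts )
```

Readers that take the same lock before touching the cache then get either the fully-old or fully-new state, never a mix.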
@@ -681,8 +681,8 @@ class TestClientDB( unittest.TestCase ):
         test_files.append( ( 'muh_swf.swf', 'edfef9905fdecde38e0752a5b6ab7b6df887c3968d4246adc9cffc997e168cdf', 456774, HC.APPLICATION_FLASH, 400, 400, 33, 1, None ) )
         test_files.append( ( 'muh_mp4.mp4', '2fa293907144a046d043d74e9570b1c792cbfd77ee3f5c93b2b1a1cb3e4c7383', 570534, HC.VIDEO_MP4, 480, 480, 'mp4_duration', 151, None ) )
-        test_files.append( ( 'muh_mpeg.mpeg', 'aebb10aaf3b27a5878fd2732ea28aaef7bbecef7449eaa759421c4ba4efff494', 772096, HC.VIDEO_MPEG, 720, 480, 2966, 89, None ) )
-        test_files.append( ( 'muh_webm.webm', '55b6ce9d067326bf4b2fbe66b8f51f366bc6e5f776ba691b0351364383c43fcb', 84069, HC.VIDEO_WEBM, 640, 360, 4010, 121, None ) )
+        test_files.append( ( 'muh_mpeg.mpeg', 'aebb10aaf3b27a5878fd2732ea28aaef7bbecef7449eaa759421c4ba4efff494', 772096, HC.VIDEO_MPEG, 720, 480, 2966, 105, None ) )
+        test_files.append( ( 'muh_webm.webm', '55b6ce9d067326bf4b2fbe66b8f51f366bc6e5f776ba691b0351364383c43fcb', 84069, HC.VIDEO_WEBM, 640, 360, 4010, 120, None ) )
         test_files.append( ( 'muh_jpg.jpg', '5d884d84813beeebd59a35e474fa3e4742d0f2b6679faa7609b245ddbbd05444', 42296, HC.IMAGE_JPEG, 392, 498, None, None, None ) )
         test_files.append( ( 'muh_png.png', 'cdc67d3b377e6e1397ffa55edc5b50f6bdf4482c7a6102c6f27fa351429d6f49', 31452, HC.IMAGE_PNG, 191, 196, None, None, None ) )
         test_files.append( ( 'muh_gif.gif', '00dd9e9611ebc929bfc78fde99a0c92800bbb09b9d18e0946cea94c099b211c2', 15660, HC.IMAGE_GIF, 329, 302, 600, 5, None ) )