Version 109

This commit is contained in:
Hydrus 2014-03-26 16:23:10 -05:00
parent c14d8f4008
commit 130da0a904
14 changed files with 745 additions and 605 deletions

View File

@ -8,6 +8,23 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 109</h3></li>
<ul>
<li>started processed_mappings table. for now, it mirrors normal mappings table</li>
<li>improved manage tags dialog logic</li>
<li>fixed only_add issue for parent tags</li>
<li>fixed an important tags box bug where surplus tags were displayed when the focus repository was changed</li>
<li>fixed petitioned tag counts in tag boxes, as long as 'all known tags' is not selected</li>
<li>fixed occasional zero tag count in tags box</li>
<li>added petitioned tags to normal media fetch, not sure why they were missing</li>
<li>fixed a random error when trying to refresh an 'open selection in new page' page</li>
<li>improved help to reflect new manage tags dialog</li>
<li>cleaned up an annoying spam-text-to-console issue when running test.py</li>
<li>reworked the way the server clean/create update daemons interface with the database</li>
<li>improved how updates are stored and fetched in the server db--no longer needs a db hit to fetch</li>
<li>harmonised client db's update naming convention to the server's</li>
<li>improved the server db to update in the same new-ish way the client db does</li>
</ul>
<li><h3>version 108</h3></li>
<ul>
<li>added 'database->delete orphan files' for manual firing of this maintenance routine</li>

View File

@ -21,16 +21,16 @@
<h3>adding tags</h3>
<p>If you connect to a repository that has a lot of tags, you'll probably see them appearing in your client in any normal search. But if you want tags for your rarer files, or you don't connect to any big repositories, you'll have to add some tags yourself.</p>
<p>Select some files. Right click on them and select <i>manage->tags</i>, or hit F3. This will boot the very important <i>manage tags dialog</i>.</p>
<p><a href="manage_tags.png"><img src="manage_tags.png" width="683" height="384" /></a></p>
<p>This shows the <b>intersection</b> of the current selection's tags (it only shows the tags that <i>every</i> file in the selection has), and will equally add/remove tags to/from the entire selection. There's another autocomplete dropdown, just like when you search, that throws what you input at the box above. Submitting a tag that already exists will attempt to remove it, or you can just double click it. You may be prompted to give a reason for removing a tag from a remote repository, creating a petition that an administrator will review.</p>
<p>The tag box will prepend certain identifiers to show the changes that'll occur or pend when you hit <i>apply</i>:</p>
<p><a href="manage_tags.png"><img src="manage_tags.png" /></a></p>
<p>This shows the current selection's tags, and will add/remove tags to/from the entire selection. There's another autocomplete dropdown, just like when you search, that throws what you input at the box above. Submitting a tag that already exists will attempt to remove it, or you can just double-click it. You may be prompted to give a reason for removing a tag from a remote repository, creating a petition that an administrator will review.</p>
<p>The tag box will show the current changes with numbers in different sets of parentheses:</p>
<ul class="bulletpoints">
<li><b>Nothing</b> - Existing tag.</li>
<li><b>(+)</b> - Will be applied/pended.</li>
<li><b>(-)</b> - Will be removed/petitioned.</li>
<li><b>(X)</b> - Has already been applied and deleted; you may or may not have authority to re-apply.</li>
<li><b>(n)</b> - Tag exists for n files of the selection.</li>
<li><b>(+n)</b> - Tag will be pended to n files.</li>
<li><b>(-n)</b> - Tag will be petitioned for n files.</li>
<li><b>(Xn)</b> - Tag has been applied and deleted for n files; you may or may not have authority to re-apply.</li>
</ul>
<p>If you edit tags for the local tags service, they will be applied instantly; otherwise, they will be pended, waiting until you are ready to upload. When you are ready to upload all your pending tag changes, use the <i>pending</i> menu in the main interface.</p>
<p>If you edit tags for the local tags service, they will be applied instantly; otherwise, they will be pended, waiting until you are ready to upload. You can re-open the dialog and edit your pending changes as much as you like. When you are ready to upload all your pending tag changes, use the <i>pending</i> menu in the main interface.</p>
<p>Please do not upload tags to my public tag repo until you get a rough feel for the <a href="tagging_schema.html">tag schema</a>, or just lurk until you get the idea. I am only interested in objective tags. If you don't like my guidelines, feel free to start your own tag repo!</p>
<p>You can synchronise with more than one tag repository. Press the up or down arrow keys on an empty autocomplete input to quickly jump between your repositories. Each repo's beliefs about which tags go with which files will be applied according to a certain precedence that you can edit in <i>services->manage tag service precedence</i>.</p>
<p><a href="faq.html#delays">FAQ: why can my friend not see what I just uploaded?</a></p>

Binary file not shown.

Before: 317 KiB

After: 310 KiB

View File

@ -640,9 +640,9 @@ def GetThumbnailPath( hash, full_size = True ):
return path
def GetUpdatePath( service_key, number ):
def GetUpdatePath( service_key, begin ):
return HC.CLIENT_UPDATES_DIR + os.path.sep + service_key.encode( 'hex' ) + '_' + str( number ) + '.yaml'
return HC.CLIENT_UPDATES_DIR + os.path.sep + service_key.encode( 'hex' ) + '_' + str( begin ) + '.yaml'
def IterateAllFilePaths():
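For context, the parameter rename above reflects this commit's new convention: client update files are keyed on the update's begin timestamp rather than an arbitrary counter, matching the server's naming. A minimal standalone sketch of the resulting filename, using a placeholder directory constant and example key rather than the real HC values:

import os

CLIENT_UPDATES_DIR = 'client_updates' # placeholder for HC.CLIENT_UPDATES_DIR

def example_update_path( service_key, begin ):
    # e.g. 'client_updates/ab12...ef_1395763200.yaml' (Python 2 hex encoding, as in the codebase)
    return CLIENT_UPDATES_DIR + os.path.sep + service_key.encode( 'hex' ) + '_' + str( begin ) + '.yaml'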

View File

@ -2280,6 +2280,10 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
hash_ids_to_tags = HC.BuildKeyToListDict( [ ( hash_id, ( service_id, ( status, namespace + ':' + tag ) ) ) if namespace != '' else ( hash_id, ( service_id, ( status, tag ) ) ) for ( hash_id, service_id, namespace, tag, status ) in c.execute( 'SELECT hash_id, service_id, namespace, tag, status FROM namespaces, ( tags, mappings USING ( tag_id ) ) USING ( namespace_id ) WHERE hash_id IN ' + splayed_hash_ids + ';' ) ] )
hash_ids_to_petitioned_tags = HC.BuildKeyToListDict( [ ( hash_id, ( service_id, ( HC.PETITIONED, namespace + ':' + tag ) ) ) if namespace != '' else ( hash_id, ( service_id, ( HC.PETITIONED, tag ) ) ) for ( hash_id, service_id, namespace, tag ) in c.execute( 'SELECT hash_id, service_id, namespace, tag FROM namespaces, ( tags, mapping_petitions USING ( tag_id ) ) USING ( namespace_id ) WHERE hash_id IN ' + splayed_hash_ids + ';' ) ] )
for ( hash_id, tag_data ) in hash_ids_to_petitioned_tags.items(): hash_ids_to_tags[ hash_id ].extend( tag_data )
hash_ids_to_current_file_service_ids = HC.BuildKeyToListDict( c.execute( 'SELECT hash_id, service_id FROM files_info WHERE hash_id IN ' + splayed_hash_ids + ';' ) )
hash_ids_to_deleted_file_service_ids = HC.BuildKeyToListDict( c.execute( 'SELECT hash_id, service_id FROM deleted_files WHERE hash_id IN ' + splayed_hash_ids + ';' ) )
@ -3337,6 +3341,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
for ( namespace_id, tag_id, hash_ids ) in advanced_mappings_ids:
c.execute( 'DELETE FROM mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( hash_ids ) + ';', ( service_id, namespace_id, tag_id ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( hash_ids ) + ';', ( service_id, namespace_id, tag_id ) )
c.execute( 'DELETE FROM service_info WHERE service_id = ?;', ( service_id, ) )
@ -3695,6 +3700,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
service_ids = [ service_id for ( service_id, ) in c.execute( 'SELECT service_id FROM tag_service_precedence ORDER BY precedence DESC;' ) ]
c.execute( 'DELETE FROM mappings WHERE service_id = ?;', ( self._combined_tag_service_id, ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ?;', ( self._combined_tag_service_id, ) )
first_round = True
@ -3713,6 +3719,8 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
first_round = False
c.execute( 'INSERT INTO processed_mappings SELECT * FROM mappings WHERE service_id = ?;', ( self._combined_tag_service_id, ) )
c.execute( 'DELETE FROM autocomplete_tags_cache WHERE tag_service_id = ?;', ( self._combined_tag_service_id, ) )
file_service_identifiers = self._GetServiceIdentifiers( c, ( HC.FILE_REPOSITORY, HC.LOCAL_FILE, HC.COMBINED_FILE ) )
@ -3834,10 +3842,12 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
deletable_hash_ids = existing_hash_ids.intersection( appropriate_hash_ids )
c.execute( 'DELETE FROM mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ' AND status = ?;', ( service_id, namespace_id, tag_id, old_status ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ' AND status = ?;', ( service_id, namespace_id, tag_id, old_status ) )
num_old_deleted = self._GetRowCount( c )
c.execute( 'UPDATE mappings SET status = ? WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( appropriate_hash_ids ) + ' AND status = ?;', ( new_status, service_id, namespace_id, tag_id, old_status ) )
c.execute( 'UPDATE processed_mappings SET status = ? WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( appropriate_hash_ids ) + ' AND status = ?;', ( new_status, service_id, namespace_id, tag_id, old_status ) )
num_old_made_new = self._GetRowCount( c )
@ -3906,6 +3916,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
deletable_hash_ids.update( search_hash_ids )
c.execute( 'DELETE FROM mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ';', ( self._combined_tag_service_id, namespace_id, tag_id ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ';', ( self._combined_tag_service_id, namespace_id, tag_id ) )
UpdateAutocompleteTagCacheFromCombinedCurrentTags( namespace_id, tag_id, deletable_hash_ids, -1 )
@ -3918,6 +3929,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
deletable_hash_ids = set( hash_ids ).difference( existing_other_precedence_hash_ids )
c.execute( 'DELETE FROM mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ' AND status = ?;', ( self._combined_tag_service_id, namespace_id, tag_id, HC.PENDING ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( deletable_hash_ids ) + ' AND status = ?;', ( self._combined_tag_service_id, namespace_id, tag_id, HC.PENDING ) )
UpdateAutocompleteTagCacheFromCombinedPendingTags( namespace_id, tag_id, deletable_hash_ids, -1 )
@ -3931,6 +3943,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
new_hash_ids = set( hash_ids ).difference( arguing_higher_precedence_hash_ids ).difference( existing_combined_hash_ids )
c.executemany( 'INSERT OR IGNORE INTO mappings VALUES ( ?, ?, ?, ?, ? );', [ ( self._combined_tag_service_id, namespace_id, tag_id, hash_id, HC.CURRENT ) for hash_id in new_hash_ids ] )
c.executemany( 'INSERT OR IGNORE INTO processed_mappings VALUES ( ?, ?, ?, ?, ? );', [ ( self._combined_tag_service_id, namespace_id, tag_id, hash_id, HC.CURRENT ) for hash_id in new_hash_ids ] )
UpdateAutocompleteTagCacheFromCombinedCurrentTags( namespace_id, tag_id, new_hash_ids, 1 )
@ -3942,6 +3955,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
new_hash_ids = set( hash_ids ).difference( existing_combined_hash_ids )
c.executemany( 'INSERT OR IGNORE INTO mappings VALUES ( ?, ?, ?, ?, ? );', [ ( self._combined_tag_service_id, namespace_id, tag_id, hash_id, HC.PENDING ) for hash_id in new_hash_ids ] )
c.executemany( 'INSERT OR IGNORE INTO processed_mappings VALUES ( ?, ?, ?, ?, ? );', [ ( self._combined_tag_service_id, namespace_id, tag_id, hash_id, HC.PENDING ) for hash_id in new_hash_ids ] )
UpdateAutocompleteTagCacheFromCombinedPendingTags( namespace_id, tag_id, new_hash_ids, 1 )
@ -3949,6 +3963,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
def DeletePending( namespace_id, tag_id, hash_ids ):
c.execute( 'DELETE FROM mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( hash_ids ) + ' AND status = ?;', ( service_id, namespace_id, tag_id, HC.PENDING ) )
c.execute( 'DELETE FROM processed_mappings WHERE service_id = ? AND namespace_id = ? AND tag_id = ? AND hash_id IN ' + HC.SplayListForDB( hash_ids ) + ' AND status = ?;', ( service_id, namespace_id, tag_id, HC.PENDING ) )
num_deleted = self._GetRowCount( c )
@ -3976,6 +3991,7 @@ class ServiceDB( FileDB, MessageDB, TagDB, RatingDB ):
new_hash_ids = set( hash_ids ).difference( existing_hash_ids )
c.executemany( 'INSERT OR IGNORE INTO mappings VALUES ( ?, ?, ?, ?, ? );', [ ( service_id, namespace_id, tag_id, hash_id, status ) for hash_id in new_hash_ids ] )
c.executemany( 'INSERT OR IGNORE INTO processed_mappings VALUES ( ?, ?, ?, ?, ? );', [ ( service_id, namespace_id, tag_id, hash_id, status ) for hash_id in new_hash_ids ] )
num_rows_added = self._GetRowCount( c )
@ -4630,6 +4646,13 @@ class DB( ServiceDB ):
c.execute( 'CREATE TABLE perceptual_hashes ( hash_id INTEGER PRIMARY KEY, phash BLOB_BYTES );' )
c.execute( 'CREATE TABLE processed_mappings ( service_id INTEGER REFERENCES services ON DELETE CASCADE, namespace_id INTEGER, tag_id INTEGER, hash_id INTEGER, status INTEGER, PRIMARY KEY( service_id, namespace_id, tag_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX processed_mappings_hash_id_index ON processed_mappings ( hash_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_tag_id_index ON processed_mappings ( service_id, tag_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_hash_id_index ON processed_mappings ( service_id, hash_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_status_index ON processed_mappings ( service_id, status );' )
c.execute( 'CREATE INDEX processed_mappings_status_index ON processed_mappings ( status );' )
c.execute( 'CREATE TABLE ratings_filter ( service_id INTEGER REFERENCES services ON DELETE CASCADE, hash_id INTEGER, min REAL, max REAL, PRIMARY KEY( service_id, hash_id ) );' )
c.execute( 'CREATE TABLE reasons ( reason_id INTEGER PRIMARY KEY, reason TEXT );' )
@ -4805,239 +4828,6 @@ class DB( ServiceDB ):
self._UpdateDBOld( c, version )
if version == 91:
( HC.options, ) = c.execute( 'SELECT options FROM options;' ).fetchone()
HC.options[ 'num_autocomplete_chars' ] = 2
c.execute( 'UPDATE options SET options = ?;', ( HC.options, ) )
if version == 93:
c.execute( 'CREATE TABLE gui_sessions ( name TEXT, info TEXT_YAML );' )
if version == 94:
# I changed a variable name in account, so old yaml dumps need to be refreshed
unknown_account = HC.GetUnknownAccount()
c.execute( 'UPDATE accounts SET account = ?;', ( unknown_account, ) )
for ( name, info ) in c.execute( 'SELECT name, info FROM gui_sessions;' ).fetchall():
for ( page_name, c_text, args, kwargs ) in info:
if 'do_query' in kwargs: del kwargs[ 'do_query' ]
c.execute( 'UPDATE gui_sessions SET info = ? WHERE name = ?;', ( info, name ) )
if version == 95:
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = OFF;' )
c.execute( 'BEGIN IMMEDIATE' )
service_basic_info = c.execute( 'SELECT service_id, service_key, type, name FROM services;' ).fetchall()
service_address_info = c.execute( 'SELECT service_id, host, port, last_error FROM addresses;' ).fetchall()
service_account_info = c.execute( 'SELECT service_id, access_key, account FROM accounts;' ).fetchall()
service_repository_info = c.execute( 'SELECT service_id, first_begin, next_begin FROM repositories;' ).fetchall()
service_ratings_like_info = c.execute( 'SELECT service_id, like, dislike FROM ratings_like;' ).fetchall()
service_ratings_numerical_info = c.execute( 'SELECT service_id, lower, upper FROM ratings_numerical;' ).fetchall()
service_address_info = { service_id : ( host, port, last_error ) for ( service_id, host, port, last_error ) in service_address_info }
service_account_info = { service_id : ( access_key, account ) for ( service_id, access_key, account ) in service_account_info }
service_repository_info = { service_id : ( first_begin, next_begin ) for ( service_id, first_begin, next_begin ) in service_repository_info }
service_ratings_like_info = { service_id : ( like, dislike ) for ( service_id, like, dislike ) in service_ratings_like_info }
service_ratings_numerical_info = { service_id : ( lower, upper ) for ( service_id, lower, upper ) in service_ratings_numerical_info }
c.execute( 'DROP TABLE services;' )
c.execute( 'DROP TABLE addresses;' )
c.execute( 'DROP TABLE accounts;' )
c.execute( 'DROP TABLE repositories;' )
c.execute( 'DROP TABLE ratings_like;' )
c.execute( 'DROP TABLE ratings_numerical;' )
c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, service_type INTEGER, name TEXT, info TEXT_YAML );' )
c.execute( 'CREATE UNIQUE INDEX services_service_key_index ON services ( service_key );' )
services = []
for ( service_id, service_key, service_type, name ) in service_basic_info:
info = {}
if service_id in service_address_info:
( host, port, last_error ) = service_address_info[ service_id ]
info[ 'host' ] = host
info[ 'port' ] = port
info[ 'last_error' ] = last_error
if service_id in service_account_info:
( access_key, account ) = service_account_info[ service_id ]
info[ 'access_key' ] = access_key
info[ 'account' ] = account
if service_id in service_repository_info:
( first_begin, next_begin ) = service_repository_info[ service_id ]
info[ 'first_begin' ] = first_begin
info[ 'next_begin' ] = next_begin
if service_id in service_ratings_like_info:
( like, dislike ) = service_ratings_like_info[ service_id ]
info[ 'like' ] = like
info[ 'dislike' ] = dislike
if service_id in service_ratings_numerical_info:
( lower, upper ) = service_ratings_numerical_info[ service_id ]
info[ 'lower' ] = lower
info[ 'upper' ] = upper
c.execute( 'INSERT INTO services ( service_id, service_key, service_type, name, info ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, sqlite3.Binary( service_key ), service_type, name, info ) )
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = ON;' )
c.execute( 'BEGIN IMMEDIATE' )
if version == 95:
for ( service_id, info ) in c.execute( 'SELECT service_id, info FROM services;' ).fetchall():
if 'account' in info:
info[ 'account' ].MakeStale()
c.execute( 'UPDATE services SET info = ? WHERE service_id = ?;', ( info, service_id ) )
if version == 101:
c.execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
inserts = []
# singles
data = c.execute( 'SELECT token, pin, timeout FROM fourchan_pass;' ).fetchone()
if data is not None: inserts.append( ( YAML_DUMP_ID_SINGLE, '4chan_pass', data ) )
data = c.execute( 'SELECT pixiv_id, password FROM pixiv_account;' ).fetchone()
if data is not None: inserts.append( ( YAML_DUMP_ID_SINGLE, 'pixiv_account', data ) )
# boorus
data = c.execute( 'SELECT name, booru FROM boorus;' ).fetchall()
inserts.extend( ( ( YAML_DUMP_ID_BOORU, name, booru ) for ( name, booru ) in data ) )
# favourite custom filter actions
data = c.execute( 'SELECT name, actions FROM favourite_custom_filter_actions;' )
inserts.extend( ( ( YAML_DUMP_ID_FAVOURITE_CUSTOM_FILTER_ACTIONS, name, actions ) for ( name, actions ) in data ) )
# gui sessions
data = c.execute( 'SELECT name, info FROM gui_sessions;' ).fetchall()
inserts.extend( ( ( YAML_DUMP_ID_GUI_SESSION, name, info ) for ( name, info ) in data ) )
# imageboards
all_imageboards = []
all_sites = c.execute( 'SELECT site_id, name FROM imageboard_sites;' ).fetchall()
for ( site_id, name ) in all_sites:
imageboards = [ imageboard for ( imageboard, ) in c.execute( 'SELECT imageboard FROM imageboards WHERE site_id = ? ORDER BY name;', ( site_id, ) ) ]
inserts.append( ( YAML_DUMP_ID_IMAGEBOARD, name, imageboards ) )
# import folders
data = c.execute( 'SELECT path, details FROM import_folders;' )
inserts.extend( ( ( YAML_DUMP_ID_IMPORT_FOLDER, path, details ) for ( path, details ) in data ) )
# subs
subs = c.execute( 'SELECT site_download_type, name, info FROM subscriptions;' )
names = set()
for ( site_download_type, name, old_info ) in subs:
if name in names: name = name + str( site_download_type )
( query_type, query, frequency_type, frequency_number, advanced_tag_options, advanced_import_options, last_checked, url_cache, paused ) = old_info
info = {}
info[ 'site_type' ] = site_download_type
info[ 'query_type' ] = query_type
info[ 'query' ] = query
info[ 'frequency_type' ] = frequency_type
info[ 'frequency' ] = frequency_number
info[ 'advanced_tag_options' ] = advanced_tag_options
info[ 'advanced_import_options' ] = advanced_import_options
info[ 'last_checked' ] = last_checked
info[ 'url_cache' ] = url_cache
info[ 'paused' ] = paused
inserts.append( ( YAML_DUMP_ID_SUBSCRIPTION, name, info ) )
names.add( name )
#
c.executemany( 'INSERT INTO yaml_dumps VALUES ( ?, ?, ? );', inserts )
#
c.execute( 'DROP TABLE fourchan_pass;' )
c.execute( 'DROP TABLE pixiv_account;' )
c.execute( 'DROP TABLE boorus;' )
c.execute( 'DROP TABLE favourite_custom_filter_actions;' )
c.execute( 'DROP TABLE gui_sessions;' )
c.execute( 'DROP TABLE imageboard_sites;' )
c.execute( 'DROP TABLE imageboards;' )
c.execute( 'DROP TABLE subscriptions;' )
if version == 105:
if not os.path.exists( HC.CLIENT_UPDATES_DIR ): os.mkdir( HC.CLIENT_UPDATES_DIR )
@ -5084,6 +4874,58 @@ class DB( ServiceDB ):
c.execute( 'DROP TABLE namespace_blacklists;' )
if version == 108:
c.execute( 'CREATE TABLE processed_mappings ( service_id INTEGER REFERENCES services ON DELETE CASCADE, namespace_id INTEGER, tag_id INTEGER, hash_id INTEGER, status INTEGER, PRIMARY KEY( service_id, namespace_id, tag_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX processed_mappings_hash_id_index ON processed_mappings ( hash_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_tag_id_index ON processed_mappings ( service_id, tag_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_hash_id_index ON processed_mappings ( service_id, hash_id );' )
c.execute( 'CREATE INDEX processed_mappings_service_id_status_index ON processed_mappings ( service_id, status );' )
c.execute( 'CREATE INDEX processed_mappings_status_index ON processed_mappings ( status );' )
service_ids = [ service_id for ( service_id, ) in c.execute( 'SELECT service_id FROM services;' ) ]
for ( i, service_id ) in enumerate( service_ids ):
HC.app.SetSplashText( 'copying mappings ' + str( i ) + '/' + str( len( service_ids ) ) )
c.execute( 'INSERT INTO processed_mappings SELECT * FROM mappings WHERE service_id = ?;', ( service_id, ) )
current_updates = dircache.listdir( HC.CLIENT_UPDATES_DIR )
for filename in current_updates:
path = HC.CLIENT_UPDATES_DIR + os.path.sep + filename
os.rename( path, path + 'old' )
current_updates = dircache.listdir( HC.CLIENT_UPDATES_DIR )
for ( i, filename ) in enumerate( current_updates ):
if i % 100 == 0: HC.app.SetSplashText( 'renaming updates ' + str( i ) + '/' + str( len( current_updates ) ) )
( service_key_hex, gumpf ) = filename.split( '_' )
service_key = service_key_hex.decode( 'hex' )
path = HC.CLIENT_UPDATES_DIR + os.path.sep + filename
with open( path, 'rb' ) as f: update_text = f.read()
update = yaml.safe_load( update_text )
( begin, end ) = update.GetBeginEnd()
new_path = CC.GetUpdatePath( service_key, begin )
if os.path.exists( new_path ): os.remove( path )
else: os.rename( path, new_path )
c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
HC.is_db_updated = True
@ -6690,6 +6532,239 @@ class DB( ServiceDB ):
c.execute( 'UPDATE options SET options = ?;', ( HC.options, ) )
if version == 91:
( HC.options, ) = c.execute( 'SELECT options FROM options;' ).fetchone()
HC.options[ 'num_autocomplete_chars' ] = 2
c.execute( 'UPDATE options SET options = ?;', ( HC.options, ) )
if version == 93:
c.execute( 'CREATE TABLE gui_sessions ( name TEXT, info TEXT_YAML );' )
if version == 94:
# I changed a variable name in account, so old yaml dumps need to be refreshed
unknown_account = HC.GetUnknownAccount()
c.execute( 'UPDATE accounts SET account = ?;', ( unknown_account, ) )
for ( name, info ) in c.execute( 'SELECT name, info FROM gui_sessions;' ).fetchall():
for ( page_name, c_text, args, kwargs ) in info:
if 'do_query' in kwargs: del kwargs[ 'do_query' ]
c.execute( 'UPDATE gui_sessions SET info = ? WHERE name = ?;', ( info, name ) )
if version == 95:
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = OFF;' )
c.execute( 'BEGIN IMMEDIATE' )
service_basic_info = c.execute( 'SELECT service_id, service_key, type, name FROM services;' ).fetchall()
service_address_info = c.execute( 'SELECT service_id, host, port, last_error FROM addresses;' ).fetchall()
service_account_info = c.execute( 'SELECT service_id, access_key, account FROM accounts;' ).fetchall()
service_repository_info = c.execute( 'SELECT service_id, first_begin, next_begin FROM repositories;' ).fetchall()
service_ratings_like_info = c.execute( 'SELECT service_id, like, dislike FROM ratings_like;' ).fetchall()
service_ratings_numerical_info = c.execute( 'SELECT service_id, lower, upper FROM ratings_numerical;' ).fetchall()
service_address_info = { service_id : ( host, port, last_error ) for ( service_id, host, port, last_error ) in service_address_info }
service_account_info = { service_id : ( access_key, account ) for ( service_id, access_key, account ) in service_account_info }
service_repository_info = { service_id : ( first_begin, next_begin ) for ( service_id, first_begin, next_begin ) in service_repository_info }
service_ratings_like_info = { service_id : ( like, dislike ) for ( service_id, like, dislike ) in service_ratings_like_info }
service_ratings_numerical_info = { service_id : ( lower, upper ) for ( service_id, lower, upper ) in service_ratings_numerical_info }
c.execute( 'DROP TABLE services;' )
c.execute( 'DROP TABLE addresses;' )
c.execute( 'DROP TABLE accounts;' )
c.execute( 'DROP TABLE repositories;' )
c.execute( 'DROP TABLE ratings_like;' )
c.execute( 'DROP TABLE ratings_numerical;' )
c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, service_type INTEGER, name TEXT, info TEXT_YAML );' )
c.execute( 'CREATE UNIQUE INDEX services_service_key_index ON services ( service_key );' )
services = []
for ( service_id, service_key, service_type, name ) in service_basic_info:
info = {}
if service_id in service_address_info:
( host, port, last_error ) = service_address_info[ service_id ]
info[ 'host' ] = host
info[ 'port' ] = port
info[ 'last_error' ] = last_error
if service_id in service_account_info:
( access_key, account ) = service_account_info[ service_id ]
info[ 'access_key' ] = access_key
info[ 'account' ] = account
if service_id in service_repository_info:
( first_begin, next_begin ) = service_repository_info[ service_id ]
info[ 'first_begin' ] = first_begin
info[ 'next_begin' ] = next_begin
if service_id in service_ratings_like_info:
( like, dislike ) = service_ratings_like_info[ service_id ]
info[ 'like' ] = like
info[ 'dislike' ] = dislike
if service_id in service_ratings_numerical_info:
( lower, upper ) = service_ratings_numerical_info[ service_id ]
info[ 'lower' ] = lower
info[ 'upper' ] = upper
c.execute( 'INSERT INTO services ( service_id, service_key, service_type, name, info ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, sqlite3.Binary( service_key ), service_type, name, info ) )
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = ON;' )
c.execute( 'BEGIN IMMEDIATE' )
if version == 95:
for ( service_id, info ) in c.execute( 'SELECT service_id, info FROM services;' ).fetchall():
if 'account' in info:
info[ 'account' ].MakeStale()
c.execute( 'UPDATE services SET info = ? WHERE service_id = ?;', ( info, service_id ) )
if version == 101:
c.execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
inserts = []
# singles
data = c.execute( 'SELECT token, pin, timeout FROM fourchan_pass;' ).fetchone()
if data is not None: inserts.append( ( YAML_DUMP_ID_SINGLE, '4chan_pass', data ) )
data = c.execute( 'SELECT pixiv_id, password FROM pixiv_account;' ).fetchone()
if data is not None: inserts.append( ( YAML_DUMP_ID_SINGLE, 'pixiv_account', data ) )
# boorus
data = c.execute( 'SELECT name, booru FROM boorus;' ).fetchall()
inserts.extend( ( ( YAML_DUMP_ID_BOORU, name, booru ) for ( name, booru ) in data ) )
# favourite custom filter actions
data = c.execute( 'SELECT name, actions FROM favourite_custom_filter_actions;' )
inserts.extend( ( ( YAML_DUMP_ID_FAVOURITE_CUSTOM_FILTER_ACTIONS, name, actions ) for ( name, actions ) in data ) )
# gui sessions
data = c.execute( 'SELECT name, info FROM gui_sessions;' ).fetchall()
inserts.extend( ( ( YAML_DUMP_ID_GUI_SESSION, name, info ) for ( name, info ) in data ) )
# imageboards
all_imageboards = []
all_sites = c.execute( 'SELECT site_id, name FROM imageboard_sites;' ).fetchall()
for ( site_id, name ) in all_sites:
imageboards = [ imageboard for ( imageboard, ) in c.execute( 'SELECT imageboard FROM imageboards WHERE site_id = ? ORDER BY name;', ( site_id, ) ) ]
inserts.append( ( YAML_DUMP_ID_IMAGEBOARD, name, imageboards ) )
# import folders
data = c.execute( 'SELECT path, details FROM import_folders;' )
inserts.extend( ( ( YAML_DUMP_ID_IMPORT_FOLDER, path, details ) for ( path, details ) in data ) )
# subs
subs = c.execute( 'SELECT site_download_type, name, info FROM subscriptions;' )
names = set()
for ( site_download_type, name, old_info ) in subs:
if name in names: name = name + str( site_download_type )
( query_type, query, frequency_type, frequency_number, advanced_tag_options, advanced_import_options, last_checked, url_cache, paused ) = old_info
info = {}
info[ 'site_type' ] = site_download_type
info[ 'query_type' ] = query_type
info[ 'query' ] = query
info[ 'frequency_type' ] = frequency_type
info[ 'frequency' ] = frequency_number
info[ 'advanced_tag_options' ] = advanced_tag_options
info[ 'advanced_import_options' ] = advanced_import_options
info[ 'last_checked' ] = last_checked
info[ 'url_cache' ] = url_cache
info[ 'paused' ] = paused
inserts.append( ( YAML_DUMP_ID_SUBSCRIPTION, name, info ) )
names.add( name )
#
c.executemany( 'INSERT INTO yaml_dumps VALUES ( ?, ?, ? );', inserts )
#
c.execute( 'DROP TABLE fourchan_pass;' )
c.execute( 'DROP TABLE pixiv_account;' )
c.execute( 'DROP TABLE boorus;' )
c.execute( 'DROP TABLE favourite_custom_filter_actions;' )
c.execute( 'DROP TABLE gui_sessions;' )
c.execute( 'DROP TABLE imageboard_sites;' )
c.execute( 'DROP TABLE imageboards;' )
c.execute( 'DROP TABLE subscriptions;' )
def _Vacuum( self ):
@ -7661,12 +7736,12 @@ def DAEMONSynchroniseRepositories():
update = service.Request( HC.GET, 'update', { 'begin' : next_download_timestamp } )
update_path = CC.GetUpdatePath( service_key, value )
( begin, end ) = update.GetBeginEnd()
update_path = CC.GetUpdatePath( service_key, begin )
with open( update_path, 'wb' ) as f: f.write( yaml.safe_dump( update ) )
( begin, end ) = update.GetBeginEnd()
next_download_timestamp = end + 1
service_updates = [ HC.ServiceUpdate( HC.SERVICE_UPDATE_NEXT_DOWNLOAD_TIMESTAMP, next_download_timestamp ) ]
@ -7730,7 +7805,7 @@ def DAEMONSynchroniseRepositories():
message.SetInfo( 'text', prefix_string + 'processing' )
message.SetInfo( 'mode', 'gauge' )
update_path = CC.GetUpdatePath( service_key, value )
update_path = CC.GetUpdatePath( service_key, next_processing_timestamp )
with open( update_path, 'rb' ) as f: update_yaml = f.read()
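Read together, the daemon hunks above amount to the following download step: fetch the update, derive its begin/end range, save it under the begin-keyed path, and advance the timestamp. This is a simplified sketch with assumed surrounding variables (service, service_key, next_download_timestamp), not the full daemon:

# simplified sketch of the reworked download step; service, service_key and
# next_download_timestamp are assumed to exist in the surrounding daemon code
update = service.Request( HC.GET, 'update', { 'begin' : next_download_timestamp } )

# updates are now saved under their begin timestamp, matching the server's naming
( begin, end ) = update.GetBeginEnd()
update_path = CC.GetUpdatePath( service_key, begin )

with open( update_path, 'wb' ) as f: f.write( yaml.safe_dump( update ) )

# the next request picks up one second after this update's range ends
next_download_timestamp = end + 1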

View File

@ -4226,7 +4226,7 @@ class TagsBoxCounts( TagsBox ):
self._tag_service_identifier = service_identifier
if self._last_media is not None: self.SetTagsByMedia( self._last_media )
if self._last_media is not None: self.SetTagsByMedia( self._last_media, force_reload = True )
def SetSort( self, sort ):
@ -4299,6 +4299,16 @@ class TagsBoxCounts( TagsBox ):
self._pending_tags_to_count.update( pending_tags_to_count )
self._petitioned_tags_to_count.update( petitioned_tags_to_count )
for counter in ( self._current_tags_to_count, self._deleted_tags_to_count, self._pending_tags_to_count, self._petitioned_tags_to_count ):
tags = counter.keys()
for tag in tags:
if counter[ tag ] == 0: del counter[ tag ]
self._last_media = media

View File

@ -7271,7 +7271,7 @@ class DialogManageTags( ClientGUIDialogs.Dialog ):
if self._i_am_local_tag_service:
if num_current < num_files: choices.append( ( 'add ' + tag, HC.CONTENT_UPDATE_ADD ) )
if num_current > 0: choices.append( ( 'delete ' + tag, HC.CONTENT_UPDATE_DELETE ) )
if num_current > 0 and not only_add: choices.append( ( 'delete ' + tag, HC.CONTENT_UPDATE_DELETE ) )
else:
@ -7279,12 +7279,13 @@ class DialogManageTags( ClientGUIDialogs.Dialog ):
num_petitioned = len( [ 1 for tag_manager in tag_managers if tag in tag_manager.GetPetitioned( self._tag_service_identifier ) ] )
if num_current + num_pending < num_files: choices.append( ( 'pend ' + tag, HC.CONTENT_UPDATE_PENDING ) )
if num_current > num_petitioned: choices.append( ( 'petition ' + tag, HC.CONTENT_UPDATE_PETITION ) )
if num_pending > 0: choices.append( ( 'rescind pending ' + tag, HC.CONTENT_UPDATE_RESCIND_PENDING ) )
if num_current > num_petitioned and not only_add: choices.append( ( 'petition ' + tag, HC.CONTENT_UPDATE_PETITION ) )
if num_pending > 0 and not only_add: choices.append( ( 'rescind pending ' + tag, HC.CONTENT_UPDATE_RESCIND_PENDING ) )
if num_petitioned > 0: choices.append( ( 'rescind petitioned ' + tag, HC.CONTENT_UPDATE_RESCIND_PETITION ) )
if len( choices ) > 1:
if len( choices ) == 0: return
elif len( choices ) > 1:
intro = 'What would you like to do?'

View File

@ -2007,7 +2007,9 @@ class ManagementPanelQuery( ManagementPanel ):
self._include_current_tags = True
self._include_pending_tags = True
if show_search:
self._show_search = show_search
if self._show_search:
self._search_panel = ClientGUICommon.StaticBox( self, 'search' )
@ -2024,7 +2026,7 @@ class ManagementPanelQuery( ManagementPanel ):
self._MakeSort( vbox )
self._MakeCollect( vbox )
if show_search: vbox.AddF( self._search_panel, FLAGS_EXPAND_PERPENDICULAR )
if self._show_search: vbox.AddF( self._search_panel, FLAGS_EXPAND_PERPENDICULAR )
self._MakeCurrentSelectionTagsBox( vbox )
@ -2050,7 +2052,7 @@ class ManagementPanelQuery( ManagementPanel ):
self._query_key = HC.JobKey()
if self._synchronised:
if self._show_search and self._synchronised:
try:

View File

@ -48,7 +48,7 @@ TEMP_DIR = BASE_DIR + os.path.sep + 'temp'
# Misc
NETWORK_VERSION = 13
SOFTWARE_VERSION = 108
SOFTWARE_VERSION = 109
UNSCALED_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -1146,9 +1146,7 @@ class HydrusResourceCommandRestrictedUpdate( HydrusResourceCommandRestricted ):
begin = request.hydrus_args[ 'begin' ]
update_key = HC.app.Read( 'update_key', self._service_identifier, begin )
path = SC.GetPath( 'update', update_key )
path = SC.GetUpdatePath( self._service_identifier, begin )
response_context = HC.ResponseContext( 200, path = path, is_yaml = True )

View File

@ -4,6 +4,8 @@ import HydrusExceptions
import itertools
import os
def GetAllHashes( file_type ): return { os.path.split( path )[1].decode( 'hex' ) for path in IterateAllPaths( file_type ) }
def GetExpectedPath( file_type, hash ):
if file_type == 'file': directory = HC.SERVER_FILES_DIR
@ -19,6 +21,16 @@ def GetExpectedPath( file_type, hash ):
return path
def GetExpectedUpdatePath( service_identifier, begin ):
service_key = service_identifier.GetServiceKey()
begin = int( begin )
path = HC.SERVER_UPDATES_DIR + os.path.sep + service_key.encode( 'hex' ) + '_' + str( begin )
return path
def GetPath( file_type, hash ):
path = GetExpectedPath( file_type, hash )
@ -27,8 +39,14 @@ def GetPath( file_type, hash ):
return path
def GetAllHashes( file_type ): return { os.path.split( path )[1].decode( 'hex' ) for path in IterateAllPaths( file_type ) }
def GetUpdatePath( service_identifier, begin ):
path = GetExpectedUpdatePath( service_identifier, begin )
if not os.path.exists( path ): raise HydrusExceptions.NotFoundException( 'Update not found!' )
return path
def IterateAllPaths( file_type ):
if file_type == 'file': directory = HC.SERVER_FILES_DIR
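Combined with the restricted-update resource change earlier in this commit (which now calls SC.GetUpdatePath instead of looking up an update_key), serving an update becomes a direct path computation plus an existence check. A minimal usage sketch, assuming a service_identifier and begin taken from a client request:

# minimal usage sketch; service_identifier and begin are assumed to come from
# the client's request, as in HydrusResourceCommandRestrictedUpdate above
try:
    path = SC.GetUpdatePath( service_identifier, begin ) # raises NotFoundException if the file is missing
    with open( path, 'rb' ) as f: update_yaml = f.read()
except HydrusExceptions.NotFoundException:
    # no update has been generated for this begin yet
    pass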

View File

@ -1030,18 +1030,16 @@ class ServiceDB( FileDB, MessageDB, TagDB ):
def _CleanUpdate( self, c, service_id, begin, end ):
def _CleanUpdate( self, c, service_identifier, begin, end ):
service_type = self._GetServiceType( c, service_id )
service_type = service_identifier.GetType()
service_id = self._GetServiceId( c, service_identifier )
if service_type == HC.FILE_REPOSITORY: clean_update = self._GenerateFileUpdate( c, service_id, begin, end )
elif service_type == HC.TAG_REPOSITORY: clean_update = self._GenerateTagUpdate( c, service_id, begin, end )
( update_key, ) = c.execute( 'SELECT update_key FROM update_cache WHERE service_id = ? AND begin = ?;', ( service_id, begin ) ).fetchone()
update_key_bytes = update_key.decode( 'hex' )
path = SC.GetExpectedPath( 'update', update_key_bytes )
path = SC.GetExpectedUpdatePath( service_identifier, begin )
with open( path, 'wb' ) as f: f.write( yaml.safe_dump( clean_update ) )
@ -1055,22 +1053,20 @@ class ServiceDB( FileDB, MessageDB, TagDB ):
c.execute( 'DELETE FROM bans WHERE expires < ?;', ( now, ) )
def _CreateUpdate( self, c, service_id, begin, end ):
def _CreateUpdate( self, c, service_identifier, begin, end ):
service_type = self._GetServiceType( c, service_id )
service_type = service_identifier.GetType()
service_id = self._GetServiceId( c, service_identifier )
if service_type == HC.FILE_REPOSITORY: update = self._GenerateFileUpdate( c, service_id, begin, end )
elif service_type == HC.TAG_REPOSITORY: update = self._GenerateTagUpdate( c, service_id, begin, end )
update_key_bytes = os.urandom( 32 )
update_key = update_key_bytes.encode( 'hex' )
path = SC.GetExpectedPath( 'update', update_key_bytes )
path = SC.GetExpectedUpdatePath( service_identifier, begin )
with open( path, 'wb' ) as f: f.write( yaml.safe_dump( update ) )
c.execute( 'INSERT OR REPLACE INTO update_cache ( service_id, begin, end, update_key, dirty ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, begin, end, update_key, False ) )
c.execute( 'INSERT OR REPLACE INTO update_cache ( service_id, begin, end, dirty ) VALUES ( ?, ?, ?, ? );', ( service_id, begin, end, False ) )
def _DeleteOrphans( self, c ):
@ -1344,7 +1340,21 @@ class ServiceDB( FileDB, MessageDB, TagDB ):
return [ account_type for ( account_type, ) in c.execute( 'SELECT account_type FROM account_type_map, account_types USING ( account_type_id ) WHERE service_id = ?;', ( service_id, ) ) ]
def _GetDirtyUpdates( self, c ): return c.execute( 'SELECT service_id, begin, end FROM update_cache WHERE dirty = ?;', ( True, ) ).fetchall()
def _GetDirtyUpdates( self, c ):
service_ids_to_tuples = HC.BuildKeyToListDict( [ ( service_id, ( begin, end ) ) for ( service_id, begin, end ) in c.execute( 'SELECT service_id, begin, end FROM update_cache WHERE dirty = ?;', ( True, ) ) ] )
service_identifiers_to_tuples = {}
for ( service_id, tuples ) in service_ids_to_tuples.items():
service_identifier = self._GetServiceIdentifier( c, service_id )
service_identifiers_to_tuples[ service_identifier ] = tuples
return service_identifiers_to_tuples
def _GetMessagingSessions( self, c ):
@ -1542,26 +1552,18 @@ class ServiceDB( FileDB, MessageDB, TagDB ):
service_ids = self._GetServiceIds( c, HC.REPOSITORIES )
result = [ c.execute( 'SELECT service_id, end FROM update_cache WHERE service_id = ? ORDER BY end DESC LIMIT 1;', ( service_id, ) ).fetchone() for service_id in service_ids ]
results = {}
return result
for service_id in service_ids:
( end, ) = c.execute( 'SELECT end FROM update_cache WHERE service_id = ? ORDER BY end DESC LIMIT 1;', ( service_id, ) ).fetchone()
service_identifier = self._GetServiceIdentifier( c, service_id )
results[ service_identifier ] = end
def _GetUpdateKey( self, c, service_identifier, begin ):
service_id = self._GetServiceId( c, service_identifier )
result = c.execute( 'SELECT update_key FROM update_cache WHERE service_id = ? AND begin = ?;', ( service_id, begin ) ).fetchone()
if result is None: result = c.execute( 'SELECT update_key FROM update_cache WHERE service_id = ? AND ? BETWEEN begin AND end;', ( service_id, begin ) ).fetchone()
if result is None: raise HydrusExceptions.NotFoundException( 'Could not find that update in db!' )
( update_key, ) = result
update_key_bytes = update_key.decode( 'hex' )
return update_key_bytes
return results
def _InitAdmin( self, c ):
@ -1723,7 +1725,7 @@ class ServiceDB( FileDB, MessageDB, TagDB ):
begin = 0
end = HC.GetNow()
self._CreateUpdate( c, service_id, begin, end )
self._CreateUpdate( c, service_identifier, begin, end )
modified_services[ service_identifier ] = options
@ -1987,7 +1989,30 @@ class DB( ServiceDB ):
( db, c ) = self._GetDBCursor()
self._UpdateDB( c )
( version, ) = c.execute( 'SELECT version FROM version;' ).fetchone()
while version < HC.SOFTWARE_VERSION:
time.sleep( 1 )
try: c.execute( 'BEGIN IMMEDIATE' )
except Exception as e: raise HydrusExceptions.DBAccessException( HC.u( e ) )
try:
self._UpdateDB( c, version )
c.execute( 'COMMIT' )
except:
c.execute( 'ROLLBACK' )
raise Exception( 'Updating the server db to version ' + HC.u( version ) + ' caused this error:' + os.linesep + traceback.format_exc() )
( version, ) = c.execute( 'SELECT version FROM version;' ).fetchone()
service_identifiers = self._GetServiceIdentifiers( c )
@ -2013,18 +2038,20 @@ class DB( ServiceDB ):
dirs = ( HC.SERVER_FILES_DIR, HC.SERVER_THUMBNAILS_DIR, HC.SERVER_UPDATES_DIR, HC.SERVER_MESSAGES_DIR )
hex_chars = '0123456789abcdef'
for dir in dirs:
if not os.path.exists( dir ): os.mkdir( dir )
for ( one, two ) in itertools.product( hex_chars, hex_chars ):
new_dir = dir + os.path.sep + one + two
if not os.path.exists( new_dir ): os.mkdir( new_dir )
dirs = ( HC.SERVER_FILES_DIR, HC.SERVER_THUMBNAILS_DIR, HC.SERVER_MESSAGES_DIR )
hex_chars = '0123456789abcdef'
for ( one, two ) in itertools.product( hex_chars, hex_chars ):
new_dir = dir + os.path.sep + one + two
if not os.path.exists( new_dir ): os.mkdir( new_dir )
( db, c ) = self._GetDBCursor()
@ -2117,7 +2144,7 @@ class DB( ServiceDB ):
c.execute( 'CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, tag TEXT );' )
c.execute( 'CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );' )
c.execute( 'CREATE TABLE update_cache ( service_id INTEGER REFERENCES services ON DELETE CASCADE, begin INTEGER, end INTEGER, update_key TEXT, dirty INTEGER_BOOLEAN, PRIMARY KEY( service_id, begin ) );' )
c.execute( 'CREATE TABLE update_cache ( service_id INTEGER REFERENCES services ON DELETE CASCADE, begin INTEGER, end INTEGER, dirty INTEGER_BOOLEAN, PRIMARY KEY( service_id, begin ) );' )
c.execute( 'CREATE UNIQUE INDEX update_cache_service_id_end_index ON update_cache ( service_id, end );' )
c.execute( 'CREATE INDEX update_cache_service_id_dirty_index ON update_cache ( service_id, dirty );' )
@ -2181,222 +2208,44 @@ class DB( ServiceDB ):
shutil.copytree( HC.SERVER_UPDATES_DIR, HC.SERVER_UPDATES_DIR + '_backup' )
def _UpdateDB( self, c ):
def _UpdateDB( self, c, version ):
( version, ) = c.execute( 'SELECT version FROM version;' ).fetchone()
self._UpdateDBOld( c, version )
if version != HC.SOFTWARE_VERSION:
if version == 93:
c.execute( 'BEGIN IMMEDIATE' )
c.execute( 'CREATE TABLE messaging_sessions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, session_key BLOB_BYTES, account_id INTEGER, identifier BLOB_BYTES, name TEXT, expiry INTEGER );' )
try:
if version == 108:
data = c.execute( 'SELECT service_id, begin, end FROM update_cache;' ).fetchall()
c.execute( 'DROP TABLE update_cache;' )
c.execute( 'CREATE TABLE update_cache ( service_id INTEGER REFERENCES services ON DELETE CASCADE, begin INTEGER, end INTEGER, dirty INTEGER_BOOLEAN, PRIMARY KEY( service_id, begin ) );' )
c.execute( 'CREATE UNIQUE INDEX update_cache_service_id_end_index ON update_cache ( service_id, end );' )
c.execute( 'CREATE INDEX update_cache_service_id_dirty_index ON update_cache ( service_id, dirty );' )
c.executemany( 'INSERT INTO update_cache ( service_id, begin, end, dirty ) VALUES ( ?, ?, ?, ? );', ( ( service_id, begin, end, True ) for ( service_id, begin, end ) in data ) )
stuff_in_updates_dir = dircache.listdir( HC.SERVER_UPDATES_DIR )
for filename in stuff_in_updates_dir:
self._UpdateDBOld( c, version )
path = HC.SERVER_UPDATES_DIR + os.path.sep + filename
if version < 56:
c.execute( 'DROP INDEX mappings_service_id_account_id_index;' )
c.execute( 'DROP INDEX mappings_service_id_timestamp_index;' )
c.execute( 'CREATE INDEX mappings_account_id_index ON mappings ( account_id );' )
c.execute( 'CREATE INDEX mappings_timestamp_index ON mappings ( timestamp );' )
if version < 61:
c.execute( 'CREATE TABLE sessions ( session_key BLOB_BYTES, service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, expiry INTEGER );' )
c.execute( 'CREATE TABLE registration_keys ( registration_key BLOB_BYTES PRIMARY KEY, service_id INTEGER REFERENCES services ON DELETE CASCADE, account_type_id INTEGER, access_key BLOB_BYTES, expiry INTEGER );' )
c.execute( 'CREATE UNIQUE INDEX registration_keys_access_key_index ON registration_keys ( access_key );' )
if version < 68:
dirs = ( HC.SERVER_FILES_DIR, HC.SERVER_THUMBNAILS_DIR, HC.SERVER_UPDATES_DIR, HC.SERVER_MESSAGES_DIR )
hex_chars = '0123456789abcdef'
for dir in dirs:
for ( one, two ) in itertools.product( hex_chars, hex_chars ):
new_dir = dir + os.path.sep + one + two
if not os.path.exists( new_dir ): os.mkdir( new_dir )
filenames = dircache.listdir( dir )
for filename in filenames:
try:
source_path = dir + os.path.sep + filename
first_two_chars = filename[:2]
destination_path = dir + os.path.sep + first_two_chars + os.path.sep + filename
shutil.move( source_path, destination_path )
except: continue
if version < 71:
try: c.execute( 'CREATE INDEX mappings_account_id_index ON mappings ( account_id );' )
except: pass
try: c.execute( 'CREATE INDEX mappings_timestamp_index ON mappings ( timestamp );' )
except: pass
if version < 72:
c.execute( 'ANALYZE' )
#
now = HC.GetNow()
#
petitioned_inserts = [ ( service_id, account_id, hash_id, reason_id, now, HC.PETITIONED ) for ( service_id, account_id, hash_id, reason_id ) in c.execute( 'SELECT service_id, account_id, hash_id, reason_id FROM file_petitions;' ) ]
deleted_inserts = [ ( service_id, account_id, hash_id, reason_id, timestamp, HC.DELETED ) for ( service_id, account_id, hash_id, reason_id, timestamp ) in c.execute( 'SELECT service_id, admin_account_id, hash_id, reason_id, timestamp FROM deleted_files;' ) ]
c.execute( 'DROP TABLE file_petitions;' )
c.execute( 'DROP TABLE deleted_files;' )
c.execute( 'CREATE TABLE file_petitions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY( service_id, account_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX file_petitions_service_id_account_id_reason_id_index ON file_petitions ( service_id, account_id, reason_id );' )
c.execute( 'CREATE INDEX file_petitions_service_id_hash_id_index ON file_petitions ( service_id, hash_id );' )
c.execute( 'CREATE INDEX file_petitions_service_id_status_index ON file_petitions ( service_id, status );' )
c.execute( 'CREATE INDEX file_petitions_service_id_timestamp_index ON file_petitions ( service_id, timestamp );' )
c.executemany( 'INSERT INTO file_petitions VALUES ( ?, ?, ?, ?, ?, ? );', petitioned_inserts )
c.executemany( 'INSERT INTO file_petitions VALUES ( ?, ?, ?, ?, ?, ? );', deleted_inserts )
#
petitioned_inserts = [ ( service_id, account_id, tag_id, hash_id, reason_id, now, HC.PETITIONED ) for ( service_id, account_id, tag_id, hash_id, reason_id ) in c.execute( 'SELECT service_id, account_id, tag_id, hash_id, reason_id FROM mapping_petitions;' ) ]
deleted_inserts = [ ( service_id, account_id, tag_id, hash_id, reason_id, timestamp, HC.DELETED ) for ( service_id, account_id, tag_id, hash_id, reason_id, timestamp ) in c.execute( 'SELECT service_id, admin_account_id, tag_id, hash_id, reason_id, timestamp FROM deleted_mappings;' ) ]
c.execute( 'DROP TABLE mapping_petitions;' )
c.execute( 'DROP TABLE deleted_mappings;' )
c.execute( 'CREATE TABLE mapping_petitions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY( service_id, account_id, tag_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_account_id_reason_id_tag_id_index ON mapping_petitions ( service_id, account_id, reason_id, tag_id );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_tag_id_hash_id_index ON mapping_petitions ( service_id, tag_id, hash_id );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_status_index ON mapping_petitions ( service_id, status );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_timestamp_index ON mapping_petitions ( service_id, timestamp );' )
c.executemany( 'INSERT INTO mapping_petitions VALUES ( ?, ?, ?, ?, ?, ?, ? );', petitioned_inserts )
c.executemany( 'INSERT INTO mapping_petitions VALUES ( ?, ?, ?, ?, ?, ?, ? );', deleted_inserts )
#
try:
c.execute( 'DROP TABLE tag_siblings;' )
c.execute( 'DROP TABLE deleted_tag_siblings;' )
c.execute( 'DROP TABLE pending_tag_siblings;' )
c.execute( 'DROP TABLE tag_sibling_petitions;' )
except: pass
c.execute( 'CREATE TABLE tag_siblings ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, old_tag_id INTEGER, new_tag_id INTEGER, timestamp INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, account_id, old_tag_id, status ) );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_old_tag_id_index ON tag_siblings ( service_id, old_tag_id );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_timestamp_index ON tag_siblings ( service_id, timestamp );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_status_index ON tag_siblings ( service_id, status );' )
#
c.execute( 'CREATE TABLE tag_parents ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, old_tag_id INTEGER, new_tag_id INTEGER, timestamp INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, account_id, old_tag_id, new_tag_id, status ) );' )
c.execute( 'CREATE INDEX tag_parents_service_id_old_tag_id_index ON tag_parents ( service_id, old_tag_id, new_tag_id );' )
c.execute( 'CREATE INDEX tag_parents_service_id_timestamp_index ON tag_parents ( service_id, timestamp );' )
c.execute( 'CREATE INDEX tag_parents_service_id_status_index ON tag_parents ( service_id, status );' )
# update objects have changed
results = c.execute( 'SELECT service_id, begin, end FROM update_cache WHERE begin = 0;' ).fetchall()
c.execute( 'DELETE FROM update_cache;' )
update_paths = [ path for path in SC.IterateAllPaths( 'update' ) ]
for path in update_paths: os.remove( path )
for ( service_id, begin, end ) in results: self._CreateUpdate( c, service_id, begin, end )
if version < 87:
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = OFF;' )
c.execute( 'BEGIN IMMEDIATE' )
old_service_info = c.execute( 'SELECT * FROM services;' ).fetchall()
c.execute( 'DROP TABLE services;' )
c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, type INTEGER, options TEXT_YAML );' ).fetchall()
for ( service_id, service_type, port, options ) in old_service_info:
if service_type == HC.SERVER_ADMIN: service_key = 'server admin'
else: service_key = os.urandom( 32 )
options[ 'port' ] = port
c.execute( 'INSERT INTO services ( service_id, service_key, type, options ) VALUES ( ?, ?, ?, ? );', ( service_id, sqlite3.Binary( service_key ), service_type, options ) )
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = ON;' )
c.execute( 'BEGIN IMMEDIATE' )
if version < 91:
for ( service_id, options ) in c.execute( 'SELECT service_id, options FROM services;' ).fetchall():
options[ 'upnp' ] = None
c.execute( 'UPDATE services SET options = ? WHERE service_id = ?;', ( options, service_id ) )
if version < 94:
c.execute( 'CREATE TABLE messaging_sessions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, session_key BLOB_BYTES, account_id INTEGER, identifier BLOB_BYTES, name TEXT, expiry INTEGER );' )
c.execute( 'UPDATE version SET version = ?;', ( HC.SOFTWARE_VERSION, ) )
c.execute( 'COMMIT' )
wx.MessageBox( 'The server has updated successfully!' )
except:
c.execute( 'ROLLBACK' )
raise Exception( 'Tried to update the server db, but something went wrong:' + os.linesep + traceback.format_exc() )
if os.path.isdir( path ): shutil.rmtree( path )
else: os.remove( path )
self._UpdateDBOldPost( c, version )
c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
def _UpdateDBOld( self, c, version ):
if version < 29:
if version == 28:
files_db_path = HC.DB_DIR + os.path.sep + 'server_files.db'
@ -2445,7 +2294,7 @@ class DB( ServiceDB ):
os.remove( files_db_path )
if version < 30:
if version == 29:
thumbnails_db_path = HC.DB_DIR + os.path.sep + 'server_thumbnails.db'
@ -2480,7 +2329,86 @@ class DB( ServiceDB ):
os.remove( thumbnails_db_path )
if version < 37:
if version == 34:
main_db_path = HC.DB_DIR + os.path.sep + 'server_main.db'
files_info_db_path = HC.DB_DIR + os.path.sep + 'server_files_info.db'
mappings_db_path = HC.DB_DIR + os.path.sep + 'server_mappings.db'
updates_db_path = HC.DB_DIR + os.path.sep + 'server_updates.db'
if os.path.exists( main_db_path ):
c.execute( 'COMMIT' )
c.execute( 'ATTACH database "' + main_db_path + '" as main_db;' )
c.execute( 'ATTACH database "' + files_info_db_path + '" as files_info_db;' )
c.execute( 'ATTACH database "' + mappings_db_path + '" as mappings_db;' )
c.execute( 'ATTACH database "' + updates_db_path + '" as updates_db;' )
c.execute( 'BEGIN IMMEDIATE' )
c.execute( 'REPLACE INTO main.services SELECT * FROM main_db.services;' )
all_service_ids = [ service_id for ( service_id, ) in c.execute( 'SELECT service_id FROM main.services;' ) ]
c.execute( 'DELETE FROM main_db.account_map WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM main_db.account_scores WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM main_db.account_type_map WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM main_db.bans WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM main_db.news WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM mappings_db.deleted_mappings WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM mappings_db.mappings WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM mappings_db.mapping_petitions WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM files_info_db.deleted_files WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM files_info_db.file_map WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM files_info_db.ip_addresses WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM files_info_db.file_petitions WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'DELETE FROM updates_db.update_cache WHERE service_id NOT IN ' + HC.SplayListForDB( all_service_ids ) + ';' )
c.execute( 'REPLACE INTO main.accounts SELECT * FROM main_db.accounts;' )
c.execute( 'REPLACE INTO main.account_map SELECT * FROM main_db.account_map;' )
c.execute( 'REPLACE INTO main.account_scores SELECT * FROM main_db.account_scores;' )
c.execute( 'REPLACE INTO main.account_type_map SELECT * FROM main_db.account_type_map;' )
c.execute( 'REPLACE INTO main.account_types SELECT * FROM main_db.account_types;' )
c.execute( 'REPLACE INTO main.bans SELECT * FROM main_db.bans;' )
c.execute( 'REPLACE INTO main.hashes SELECT * FROM main_db.hashes;' )
c.execute( 'REPLACE INTO main.news SELECT * FROM main_db.news;' )
c.execute( 'REPLACE INTO main.reasons SELECT * FROM main_db.reasons;' )
c.execute( 'REPLACE INTO main.tags SELECT * FROM main_db.tags;' )
# don't do version, lol
c.execute( 'REPLACE INTO main.deleted_mappings SELECT * FROM mappings_db.deleted_mappings;' )
c.execute( 'REPLACE INTO main.mappings SELECT * FROM mappings_db.mappings;' )
c.execute( 'REPLACE INTO main.mapping_petitions SELECT * FROM mappings_db.mapping_petitions;' )
c.execute( 'REPLACE INTO main.deleted_files SELECT * FROM files_info_db.deleted_files;' )
c.execute( 'REPLACE INTO main.files_info SELECT * FROM files_info_db.files_info;' )
c.execute( 'REPLACE INTO main.file_map SELECT * FROM files_info_db.file_map;' )
c.execute( 'REPLACE INTO main.file_petitions SELECT * FROM files_info_db.file_petitions;' )
c.execute( 'REPLACE INTO main.ip_addresses SELECT * FROM files_info_db.ip_addresses;' )
c.execute( 'REPLACE INTO main.update_cache SELECT * FROM updates_db.update_cache;' )
c.execute( 'COMMIT' )
c.execute( 'DETACH database main_db;' )
c.execute( 'DETACH database files_info_db;' )
c.execute( 'DETACH database mappings_db;' )
c.execute( 'DETACH database updates_db;' )
os.remove( main_db_path )
os.remove( mappings_db_path )
os.remove( files_info_db_path )
os.remove( updates_db_path )
c.execute( 'BEGIN IMMEDIATE' )
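# A minimal stand-in for HC.SplayListForDB, assuming it renders an id list as a
# parenthesised, comma-separated SQL list for the DELETE ... NOT IN statements above
# (the real helper may quote or format its values differently):

def splay_list_for_db( ids ):
    
    # e.g. [ 1, 2, 3 ] -> '( 1, 2, 3 )', so 'WHERE service_id NOT IN ' + splay parses as SQL
    return '( ' + ', '.join( str( i ) for i in ids ) + ' )'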
if version == 36:
os.mkdir( HC.SERVER_MESSAGES_DIR )
@@ -2492,7 +2420,7 @@ class DB( ServiceDB ):
c.execute( 'CREATE INDEX messages_timestamp_index ON messages ( timestamp );' )
if version < 38:
if version == 37:
c.execute( 'COMMIT' )
c.execute( 'PRAGMA journal_mode=WAL;' ) # possibly didn't work last time, cause of sqlite dll issue
@@ -2509,7 +2437,7 @@ class DB( ServiceDB ):
c.execute( 'CREATE INDEX message_statuses_timestamp_index ON message_statuses ( timestamp );' )
if version < 40:
if version == 39:
try: c.execute( 'SELECT 1 FROM message_statuses;' ).fetchone() # didn't update dbinit on 38
except:
@@ -2526,7 +2454,7 @@ class DB( ServiceDB ):
if version < 50:
if version == 40:
c.execute( 'CREATE TABLE ratings ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, hash_id INTEGER, rating REAL, PRIMARY KEY( service_id, account_id, hash_id ) );' )
c.execute( 'CREATE INDEX ratings_hash_id ON ratings ( hash_id );' )
@@ -2536,7 +2464,7 @@ class DB( ServiceDB ):
c.execute( 'CREATE INDEX ratings_aggregates_current_timestamp ON ratings_aggregates ( current_timestamp );' )
if version < 51:
if version == 50:
if not os.path.exists( HC.SERVER_UPDATES_DIR ): os.mkdir( HC.SERVER_UPDATES_DIR )
@@ -2560,95 +2488,182 @@ class DB( ServiceDB ):
def _UpdateDBOldPost( self, c, version ):
if version == 55:
c.execute( 'DROP INDEX mappings_service_id_account_id_index;' )
c.execute( 'DROP INDEX mappings_service_id_timestamp_index;' )
c.execute( 'CREATE INDEX mappings_account_id_index ON mappings ( account_id );' )
c.execute( 'CREATE INDEX mappings_timestamp_index ON mappings ( timestamp );' )
if version == 60:
c.execute( 'CREATE TABLE sessions ( session_key BLOB_BYTES, service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, expiry INTEGER );' )
c.execute( 'CREATE TABLE registration_keys ( registration_key BLOB_BYTES PRIMARY KEY, service_id INTEGER REFERENCES services ON DELETE CASCADE, account_type_id INTEGER, access_key BLOB_BYTES, expiry INTEGER );' )
c.execute( 'CREATE UNIQUE INDEX registration_keys_access_key_index ON registration_keys ( access_key );' )
if version == 67:
dirs = ( HC.SERVER_FILES_DIR, HC.SERVER_THUMBNAILS_DIR, HC.SERVER_UPDATES_DIR, HC.SERVER_MESSAGES_DIR )
hex_chars = '0123456789abcdef'
for dir in dirs:
for ( one, two ) in itertools.product( hex_chars, hex_chars ):
new_dir = dir + os.path.sep + one + two
if not os.path.exists( new_dir ): os.mkdir( new_dir )
filenames = dircache.listdir( dir )
for filename in filenames:
try:
source_path = dir + os.path.sep + filename
first_two_chars = filename[:2]
destination_path = dir + os.path.sep + first_two_chars + os.path.sep + filename
shutil.move( source_path, destination_path )
except: continue
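# The version 67 block above shards each server directory into 256 two-hex-character
# subdirectories and moves every file into the one named after the first two characters
# of its filename. A self-contained sketch of the same scheme, under those assumptions:

import itertools
import os
import shutil

def shard_directory( path ):
    
    hex_chars = '0123456789abcdef'
    
    for ( one, two ) in itertools.product( hex_chars, hex_chars ):
        
        subdir = os.path.join( path, one + two )
        
        if not os.path.exists( subdir ): os.mkdir( subdir )
        
    
    for filename in os.listdir( path ):
        
        source = os.path.join( path, filename )
        
        if os.path.isdir( source ): continue # leave the new shard directories alone
        
        try: shutil.move( source, os.path.join( path, filename[:2], filename ) )
        except: continue # mirror the original's tolerance of files that cannot be moved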
if version == 70:
try: c.execute( 'CREATE INDEX mappings_account_id_index ON mappings ( account_id );' )
except: pass
try: c.execute( 'CREATE INDEX mappings_timestamp_index ON mappings ( timestamp );' )
except: pass
if version == 71:
c.execute( 'ANALYZE' )
#
now = HC.GetNow()
#
petitioned_inserts = [ ( service_id, account_id, hash_id, reason_id, now, HC.PETITIONED ) for ( service_id, account_id, hash_id, reason_id ) in c.execute( 'SELECT service_id, account_id, hash_id, reason_id FROM file_petitions;' ) ]
deleted_inserts = [ ( service_id, account_id, hash_id, reason_id, timestamp, HC.DELETED ) for ( service_id, account_id, hash_id, reason_id, timestamp ) in c.execute( 'SELECT service_id, admin_account_id, hash_id, reason_id, timestamp FROM deleted_files;' ) ]
c.execute( 'DROP TABLE file_petitions;' )
c.execute( 'DROP TABLE deleted_files;' )
c.execute( 'CREATE TABLE file_petitions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY( service_id, account_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX file_petitions_service_id_account_id_reason_id_index ON file_petitions ( service_id, account_id, reason_id );' )
c.execute( 'CREATE INDEX file_petitions_service_id_hash_id_index ON file_petitions ( service_id, hash_id );' )
c.execute( 'CREATE INDEX file_petitions_service_id_status_index ON file_petitions ( service_id, status );' )
c.execute( 'CREATE INDEX file_petitions_service_id_timestamp_index ON file_petitions ( service_id, timestamp );' )
c.executemany( 'INSERT INTO file_petitions VALUES ( ?, ?, ?, ?, ?, ? );', petitioned_inserts )
c.executemany( 'INSERT INTO file_petitions VALUES ( ?, ?, ?, ?, ?, ? );', deleted_inserts )
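# deleted_files and the old file_petitions are folded into one table distinguished by the
# status column, so both kinds of row are now read with the same query. A usage sketch,
# assuming a sqlite3 cursor c and the HC.PETITIONED / HC.DELETED constants used above:

def get_file_petition_rows( c, service_id, status ):
    
    # status = HC.PETITIONED -> rows still awaiting an admin decision
    # status = HC.DELETED    -> rows already actioned and recorded as deleted
    return c.execute( 'SELECT account_id, hash_id, reason_id, timestamp FROM file_petitions WHERE service_id = ? AND status = ?;', ( service_id, status ) ).fetchall()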
#
petitioned_inserts = [ ( service_id, account_id, tag_id, hash_id, reason_id, now, HC.PETITIONED ) for ( service_id, account_id, tag_id, hash_id, reason_id ) in c.execute( 'SELECT service_id, account_id, tag_id, hash_id, reason_id FROM mapping_petitions;' ) ]
deleted_inserts = [ ( service_id, account_id, tag_id, hash_id, reason_id, timestamp, HC.DELETED ) for ( service_id, account_id, tag_id, hash_id, reason_id, timestamp ) in c.execute( 'SELECT service_id, admin_account_id, tag_id, hash_id, reason_id, timestamp FROM deleted_mappings;' ) ]
c.execute( 'DROP TABLE mapping_petitions;' )
c.execute( 'DROP TABLE deleted_mappings;' )
c.execute( 'CREATE TABLE mapping_petitions ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, timestamp INTEGER, status INTEGER, PRIMARY KEY( service_id, account_id, tag_id, hash_id, status ) );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_account_id_reason_id_tag_id_index ON mapping_petitions ( service_id, account_id, reason_id, tag_id );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_tag_id_hash_id_index ON mapping_petitions ( service_id, tag_id, hash_id );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_status_index ON mapping_petitions ( service_id, status );' )
c.execute( 'CREATE INDEX mapping_petitions_service_id_timestamp_index ON mapping_petitions ( service_id, timestamp );' )
c.executemany( 'INSERT INTO mapping_petitions VALUES ( ?, ?, ?, ?, ?, ?, ? );', petitioned_inserts )
c.executemany( 'INSERT INTO mapping_petitions VALUES ( ?, ?, ?, ?, ?, ?, ? );', deleted_inserts )
#
c.execute( 'DROP TABLE tag_siblings;' )
c.execute( 'DROP TABLE deleted_tag_siblings;' )
c.execute( 'DROP TABLE pending_tag_siblings;' )
c.execute( 'DROP TABLE tag_sibling_petitions;' )
c.execute( 'CREATE TABLE tag_siblings ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, old_tag_id INTEGER, new_tag_id INTEGER, timestamp INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, account_id, old_tag_id, status ) );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_old_tag_id_index ON tag_siblings ( service_id, old_tag_id );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_timestamp_index ON tag_siblings ( service_id, timestamp );' )
c.execute( 'CREATE INDEX tag_siblings_service_id_status_index ON tag_siblings ( service_id, status );' )
#
c.execute( 'CREATE TABLE tag_parents ( service_id INTEGER REFERENCES services ON DELETE CASCADE, account_id INTEGER, old_tag_id INTEGER, new_tag_id INTEGER, timestamp INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, account_id, old_tag_id, new_tag_id, status ) );' )
c.execute( 'CREATE INDEX tag_parents_service_id_old_tag_id_index ON tag_parents ( service_id, old_tag_id, new_tag_id );' )
c.execute( 'CREATE INDEX tag_parents_service_id_timestamp_index ON tag_parents ( service_id, timestamp );' )
c.execute( 'CREATE INDEX tag_parents_service_id_status_index ON tag_parents ( service_id, status );' )
# update objects have changed
results = c.execute( 'SELECT service_id, begin, end FROM update_cache WHERE begin = 0;' ).fetchall()
c.execute( 'DELETE FROM update_cache;' )
update_paths = [ path for path in SC.IterateAllPaths( 'update' ) ]
for path in update_paths: os.remove( path )
for ( service_id, begin, end ) in results: self._CreateUpdate( c, service_id, begin, end )
if version == 86:
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = OFF;' )
c.execute( 'BEGIN IMMEDIATE' )
old_service_info = c.execute( 'SELECT * FROM services;' ).fetchall()
c.execute( 'DROP TABLE services;' )
c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, type INTEGER, options TEXT_YAML );' )
for ( service_id, service_type, port, options ) in old_service_info:
if service_type == HC.SERVER_ADMIN: service_key = 'server admin'
else: service_key = os.urandom( 32 )
options[ 'port' ] = port
c.execute( 'INSERT INTO services ( service_id, service_key, type, options ) VALUES ( ?, ?, ?, ? );', ( service_id, sqlite3.Binary( service_key ), service_type, options ) )
c.execute( 'COMMIT' )
c.execute( 'PRAGMA foreign_keys = ON;' )
c.execute( 'BEGIN IMMEDIATE' )
except Exception as e:
HC.ShowException( e )
try: c.execute( 'ROLLBACK' )
except: pass
raise Exception( 'Tried to update the server db, but something went wrong:' + os.linesep + traceback.format_exc() )
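# The ShowException / ROLLBACK / re-raise lines above are the usual rollback-on-failure
# guard around a schema update. A minimal sketch of that pattern, assuming a sqlite3
# cursor driven with explicit BEGIN/COMMIT and the document's HC.ShowException for logging:

import os
import traceback

def run_update_step( c, do_update ):
    
    try:
        
        c.execute( 'BEGIN IMMEDIATE' )
        
        do_update( c )
        
        c.execute( 'COMMIT' )
        
    except Exception as e:
        
        HC.ShowException( e )
        
        try: c.execute( 'ROLLBACK' )
        except: pass
        
        raise Exception( 'Tried to update the server db, but something went wrong:' + os.linesep + traceback.format_exc() )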
if version == 90:
for ( service_id, options ) in c.execute( 'SELECT service_id, options FROM services;' ).fetchall():
options[ 'upnp' ] = None
c.execute( 'UPDATE services SET options = ? WHERE service_id = ?;', ( options, service_id ) )
@@ -2676,7 +2691,6 @@ class DB( ServiceDB ):
elif action == 'sessions': result = self._GetSessions( c, *args, **kwargs )
elif action == 'stats': result = self._GetStats( c, *args, **kwargs )
elif action == 'update_ends': result = self._GetUpdateEnds( c, *args, **kwargs )
elif action == 'update_key': result = self._GetUpdateKey( c, *args, **kwargs )
else: raise Exception( 'db received an unknown read command: ' + action )
return result
@@ -2832,11 +2846,14 @@ def DAEMONGenerateUpdates():
dirty_updates = HC.app.ReadDaemon( 'dirty_updates' )
for ( service_id, begin, end ) in dirty_updates: HC.app.WriteDaemon( 'clean_update', service_id, begin, end )
for ( service_identifier, tuples ) in dirty_updates.items():
for ( begin, end ) in tuples: HC.app.WriteDaemon( 'clean_update', service_identifier, begin, end )
update_ends = HC.app.ReadDaemon( 'update_ends' )
for ( service_id, biggest_end ) in update_ends:
for ( service_identifier, biggest_end ) in update_ends.items():
now = HC.GetNow()
@@ -2845,7 +2862,7 @@ def DAEMONGenerateUpdates():
while next_end < now:
HC.app.WriteDaemon( 'create_update', service_id, next_begin, next_end )
HC.app.WriteDaemon( 'create_update', service_identifier, next_begin, next_end )
biggest_end = next_end
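# Read together, the changed lines give the daemon's new shape: both reads now return
# dicts keyed by service_identifier instead of rows of service_ids. A sketch of the
# reworked loop, with update_duration standing in for whatever period the real code uses:

def daemon_generate_updates( update_duration ):
    
    dirty_updates = HC.app.ReadDaemon( 'dirty_updates' )
    
    for ( service_identifier, tuples ) in dirty_updates.items():
        
        for ( begin, end ) in tuples: HC.app.WriteDaemon( 'clean_update', service_identifier, begin, end )
        
    
    update_ends = HC.app.ReadDaemon( 'update_ends' )
    
    for ( service_identifier, biggest_end ) in update_ends.items():
        
        now = HC.GetNow()
        
        next_begin = biggest_end + 1
        next_end = next_begin + update_duration
        
        while next_end < now:
            
            HC.app.WriteDaemon( 'create_update', service_identifier, next_begin, next_end )
            
            biggest_end = next_end
            
            next_begin = biggest_end + 1
            next_end = next_begin + update_duration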

View File

@@ -242,16 +242,14 @@ class TestServer( unittest.TestCase ):
# update
update = 'update'
begin = 100
update_key = os.urandom( 32 )
path = SC.GetExpectedPath( 'update', update_key )
if service_type == HC.FILE_REPOSITORY: path = SC.GetExpectedUpdatePath( self._file_service_identifier, begin )
elif service_type == HC.TAG_REPOSITORY: path = SC.GetExpectedUpdatePath( self._tag_service_identifier, begin )
with open( path, 'wb' ) as f: f.write( update )
HC.app.SetRead( 'update_key', update_key )
response = service.Request( HC.GET, 'update', { 'begin' : 100 } )
response = service.Request( HC.GET, 'update', { 'begin' : begin } )
self.assertEqual( response, update )
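# The test now derives the update's on-disk path from the service identifier and the
# update's begin time rather than from a db-fetched update_key. The exact naming scheme of
# SC.GetExpectedUpdatePath is not shown here; a hypothetical helper with the assumed shape:

import os

def expected_update_path( updates_dir, service_key_hex, begin ):
    
    # assumption: one file per ( service, begin ), sharded by the first two hex characters
    filename = service_key_hex + '_' + str( begin )
    
    return os.path.join( updates_dir, filename[:2], filename )

# e.g. expected_update_path( 'server_updates', 'ab12cd34', 100 ) -> 'server_updates/ab/ab12cd34_100'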

View File

@@ -32,6 +32,10 @@ class App( wx.App ):
HC.http = HydrusNetworking.HTTPConnectionManager()
def show_text( text ): pass
HC.ShowText = show_text
self._reads = {}
self._reads[ 'hydrus_sessions' ] = []