parent 245a885eef
commit 43ae2c2593

@ -7,6 +7,63 @@ title: Changelog

!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 524](https://github.com/hydrusnetwork/hydrus/releases/tag/v524)
### timestamp sidecars
* the sidecars system now supports timestamps. it just uses the unix timestamp number, but if you need it, you can use string conversion to create a full datestring. each sidecar node only selects/sets that one timestamp, so this may get spammy if you want to migrate everything, but you can now migrate archived/imported/whatever time from one client to another! the content updates from sidecar imports apply immediately _after_ the file is fully imported, so it is safe and good to sidecar-import 'my files imported time' etc. for new files, and it should all get set correctly, but obviously let me know otherwise. if you set 'archived time', the files have to be in an archived state immediately after import, which means importing and archiving them previously, or hitting 'archive all imports' on the respective file import options
* sidecars are getting complex, so I expect I will soon add a button that sets up a 'full' JSON sidecar import/export in one click, basically just spamming/sucking everything the sidecar system can do, so it is easier to set up larger migrations
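for the string conversion mentioned above, here is a plain-Python sketch of the idea--this is just an illustration of turning a raw unix timestamp into a datestring, not hydrus's actual sidecar code:

```python
from datetime import datetime, timezone

def datestring_from_unix(ts):
    # turn a raw unix timestamp number into a full datestring, the
    # sort of conversion a sidecar string-processing step might apply
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
```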
### timestamp merge
* the duplicate merge options now have an action for 'sync file modified date?'. you can set so both files get their earliest (the new default for 'they are the same'), or that the earlier worse can be applied to the later better (the new default for 'this is better') (issue #1203)
* in the duplicate system, when URLs are merged, their respective domain-based timestamps are also merged according to the earliest, as above
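the 'both get their earliest' merge rule is simple to state in code. a hypothetical sketch (hydrus's own implementation differs, this just shows the rule):

```python
def merged_modified_time(time_a, time_b):
    # 'they are the same' default: both files get the earliest known
    # modified timestamp; a missing timestamp (None) is ignored
    known = [t for t in (time_a, time_b) if t is not None]
    return min(known) if known else None
```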
### more timestamps
* hydrus now supports timestamps before 1970. should be good now, lol, back to 1AD (and my tests show BC dates seem to be working too?). it is probably a meme to apply a modified date of 1505 to some painting, but when I add timestamps to the API maybe we can have some fun. btw calendar calculations and timezones are hell on earth at times, and there's a decent chance that your pre-1970 dates may show up an hour out of phase in labels (a daylight savings time thing) from what you enter in some other area of the UI. in either case, my code is not clever enough to apply DST schedules retroactively to older dates, so your search ranges may simply be an hour out back in 1953. it sounds stupid, but it may matter if we are talking midnight boundaries, so let me know how you find it
* when you set a new file modified date, the file on disk's modified date will only be updated if the date set is after 1980-01-01 (Windows) or 1970-01-01 (Linux) due to system limitations
* fixed a typo bug in last week's work that meant file service timestamp editing was not updating the media object (i.e. changes were not visible until a restart)
* fixed a bug where collections that contained files with delete timestamps were throwing errors on display. (they were calculating aggregate timestamp data wrong)
* I rejiggered how the 'is this timestamp sensible?' test applies. this test essentially discounts any timestamp before 1970-01-08 to catch any weird mis-parses and stop them nuking your aggregate modified timestamp values. it now won't apply to internal duplicate merge and so on, but it still applies when you parse timestamps in the downloader system, so you still can't parse anything pre-1970 for now
* one thing I noticed is my '5 years 1 months ago' calculation, which uses a fixed 30 day month and doesn't count the extra day of leap years, is showing obviously increasingly inaccurate numbers here. I'll fix it up
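the pre-1970 support can be illustrated with plain timedelta arithmetic against an explicit UTC epoch, which sidesteps the platform's mktime limits--a sketch of the idea, not hydrus's actual conversion code:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def timestamp_from_datetime(dt):
    # negative for pre-1970 dates; pure arithmetic, so no OS limits apply
    return (dt - EPOCH).total_seconds()

def datetime_from_timestamp(ts):
    return EPOCH + timedelta(seconds=ts)
```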
### export folders
* export folders can now show a popup while they work. there's a new checkbox for it in their edit UI. default is ON, so you'll start seeing popups for export folders that run in the background. this popup is cancellable, too, so you can now stop in-progress export runs if things seem wrong
* both import and export folders will force-show working popups whenever you trigger them manually
* export folders no longer have the weird and confusing 'paused' and 'run regularly?' duality. this was a legacy error handling thing, now cleaned up and merged into 'run regularly?'
* when 'run regularly?' is unchecked, the run period and new 'show popup while working regularly?' checkboxes are now disabled
### misc
* added 'system:ratio is square/portrait/landscape' nicer label aliases for =/taller/wider 1:1 ratio. I added them to the quick-select list on the edit panel, too. they also parse in the system predicate parser!
* I added a bit to the 'getting started with downloading' help page about getting access to difficult sites. I refer to Hydrus Companion as a good internal login solution, and link to yt-dlp, gallery-dl, and imgbrd-grabber with a little discussion on setting up external import workflows. I tried gallery-dl on twitter this week and it was excellent. it can also take your login credentials as either user/pass or cookies.txt (or pull cookies straight from firefox/safari) and give access to nsfw. since twitter has rapidly become a pain for us recently, I will be pointing people to gallery-dl for now
* fixed my Qt subclass definitions for PySide6 6.5.0, which strictly requires the Qt object to be the rightmost base class in multiple inheritance subclasses, wew. this hit AUR users last week, I understand!

### client api (and local booru lol)
* if you set the Client API to not allow non-local connections, it now binds to 127.0.0.1 and ::1 specifically, which tell your OS we only want the loopback interface. this increases security, and on Windows _should_ mean it only does that first-time firewall dialog popup when 'allow non-local connections' is unchecked
* I brushed up the manage services UI for the Client API. the widgets all line up better now, and turning the service on and off isn't the awkward '[] do not run the service' any more
* fixed the 'disable idle mode if the client api does stuff' check, which was wired up wrong! also, the reset here now fires as a request starts, not when it is complete, meaning if you are already in idle mode, a client api request will now quickly cancel idle mode and hopefully free up any locked database situation promptly
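the loopback binding works the same way in any socket program. a minimal sketch (hydrus uses its own server code, this just shows the principle):

```python
import socket

def make_local_only_listener(port=0):
    # binding to 127.0.0.1 rather than 0.0.0.0 asks the OS for the
    # loopback interface only, so remote machines simply cannot connect
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('127.0.0.1', port))
    s.listen()
    return s
```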
### boring cleanup and stuff
* reworked all timestamp-datetime conversion to be happier with pre-1970 dates regardless of system/python support. it is broadly improved all around
* refactored all of the HydrusData time functions and much of ClientTime to a new HydrusTime module
* refactored the ClientData time stuff to ClientTime
* refactored some thread/process functions from HydrusData to HydrusThreading
* refactored some list splitting/throttling functions from HydrusData to a new HydrusLists module
* refactored the file filter out of ClientMedia and into the new ClientMediaFileFilter, and reworked things so the medialist filter jobs now happen at the filter level. this was probably done the wrong way around, but oh well
* expanded the new TimestampData object a bit, it can now give a nice descriptive string of itself
* wrote a new widget to edit TimestampData stubs
* wrote some unit tests for the new timestamp sidecar importer and exporter
* updated my multi-column list system to handle the deprecation of a column definition (today it was the 'paused' column in manage export folders list)
* it should also be able to handle new column definitions appearing
* fixed an error popup that still said 'run repair invalid tags' instead of 'run fix invalid tags'
* the FILE_SERVICES constant now holds the 'all deleted files' virtual domain. this domain keeps slipping my logic, so fingers crossed this helps. also means you can select it in 'system:file service' and stuff now
* misc cleaning and linting work
## [Version 523](https://github.com/hydrusnetwork/hydrus/releases/tag/v523)
### timestamp editing

@ -325,91 +382,3 @@

* thanks to a user, updated the recently note-and-ai-updated pixiv parser again to grab the canonical pixiv URL and translated tags, if present
* thanks to a user, updated the sankaku parser to grab some more tags
* the file location context and tag context buttons under tag autocompletes now put menu separators between each type of file/tag service in their menus. for basic users, this'll be a separator for every row, but for advanced users with multiple local domains, it will help categorise the list a bit
## [Version 514](https://github.com/hydrusnetwork/hydrus/releases/tag/v514)
### downloaders
* twitter took down the API we were using, breaking all our nice twitter downloaders! argh!
* a user has figured out a basic new downloader that grabs the tweets amongst the first twenty tweets-and-retweets of an account. yes, only the first twenty max, and usually fewer. because this is a big change, the client will ask about it when you update. if you have some complicated situation where you are working on the old default twitter downloaders and don't want them deleted, you can select 'no' on the dialog it throws up, but everyone else wants to say 'yes'. then check your twitter subs: make sure they moved to the new downloader, and you probably want to make them check more frequently too.
* given the rate of changes at twitter, I think we can expect more changes and blocks in future. I don't know whether nitter will be a viable alternative, so if the artists you like end up on a nice simple booru _anywhere_, I strongly recommend just moving there. twitter appears to be moving explicitly away from third-party-friendliness
* thanks to a user's work, the 'danbooru - get webm ugoira' parser is fixed!
* thanks to a user's work, the deviant art parser is updated to get the highest res image in more situations!
* thanks to a user's work, the pixiv downloader now gets the artist note, in japanese (and translated, if there is one), and a 'medium:ai generated' tag!
### sidecars
* I wrote some sidecar help here! https://hydrusnetwork.github.io/hydrus/advanced_sidecars.html
* when the client parses files for import, the 'does this look like a sidecar?' test now also checks that the base component of the base filename (e.g. 'Image123' from 'Image123.jpg.txt') actually appears in the list of non-txt/json/xml ext files. a random yo.txt file out of nowhere will now be inspected in case it is secretly a jpeg again, for good or ill
* when you drop some files on the client, the number of files skipped because they looked like sidecars is now stated in the status label
* fixed a typo bug that meant tags imported from sidecars were not being properly cleaned, despite preview appearance otherwise. for instance ':)', which in hydrus needs to be secretly stored as '::)', was being imported as ')'
* as a special case, tags that in hydrus are secretly '::)' will be converted to ':)' on export to sidecar too, the inverse of the above problem. there may be some other tag cleaning quirks to undo here, so let me know what you run into
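the base-component check described above can be sketched like so--a hypothetical helper, not the client's actual routine:

```python
from pathlib import Path

SIDECAR_EXTS = {'.txt', '.json', '.xml'}

def looks_like_sidecar(filename, media_filenames):
    # 'Image123.jpg.txt' counts as a sidecar only if its base component
    # ('Image123') matches a non-sidecar file in the same import batch
    p = Path(filename)
    if p.suffix.lower() not in SIDECAR_EXTS:
        return False
    base = Path(p.stem).stem  # 'Image123.jpg.txt' -> 'Image123'
    return any(Path(m).stem == base for m in media_filenames)
```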
### related tags overhaul
* the 'related tags' suggestion system, turned on under _options->tag suggestions_, has several changes, including some prototype tech I'd love feedback on
* first off, there are two new search buttons, 'new 1' and 'new 2' ('2' is available on repositories only). these use an upgraded statistical search and scoring system that a user worked on and sent in. I have butchered his specific namespace searching system to something more general/flexible and easy for me to maintain, but it works better and more comprehensibly than my old method! give it a go and let me know how each button does--the first one will be fast but less useful on the PTR, the second will be slower but generally give richer results (although it cannot do tags with too-high count)
* the new search routine works on multiple files, so 'related tags' now shows on tag dialogs launched from a selection of thumbnails!
* also, all the related search buttons now search any selection of tags you make!!! so if you can't remember that character's name, just click on the series or another character they are often with and hit the search, and you should get a whole bunch appear
* I am going to keep working on this in the future. the new buttons will become the only buttons, I'll try and mitigate the prototype search limitations, add some cancel tech, move to a time-based search length like the current buttons, and I'll add more settings, including for filtering so we aren't looking up related tags for 'page:x' and so on. I'm interested in knowing how you get on with IRL data. are there too many recommendations (is the tolerance too high?)? is the sorting good (is the stuff at the top relevant or often just noise?)?
### misc
* all users can now copy their service keys (which are a technical non-changing hex identifier for your client's services) from the review services window--advanced mode is no longer needed. this may be useful as the client api transitions to service keys
* when a job in the downloader search log generates new jobs (e.g. fetches the next page), the new job(s) are now inserted after the parent. previously, they were appended to the end of the list. this changes how ngugs operate, converting their searches from interleaved to sequential!
* restarting search log jobs now also places the new job after the restarted job
* when you create a new export folder, if you have default metadata export sidecar settings from a previous manual file export, the program now asks if you want those for the new export folder or an empty list. previously, it just assigned the saved default, which could be jarring if it was saved from ages ago
* added a migration guide to the running from source help. also brushed up some language and fixed a bunch of borked title weights in that document
* the max initial and periodic file limits in subscriptions are now 50k when in advanced mode. I can't promise that would be nice though!
* the file history chart no longer says that inbox and delete time tracking are new
### misc fixes
* fixed a cursor type detection test that was stopping the cursor from hiding immediately when you do a media viewer drag in Qt6
* fixed an issue where 'clear deletion record' calls were not deleting from the newer 'all my files' domain. the erroneous extra records will be searched for and scrubbed on update
* fixed the issue where if you had the new 'unnamespaced input gives (any namespace) wildcard results' search option on, you couldn't add any novel tags in WRITE autocomplete contexts like 'manage tags'!!! it could only offer the automatically converted wildcard tags as suggested input, which of course aren't appropriate for a WRITE context. the way I ultimately fixed this was horrible; the whole thing needs more work to deal with clever logic like this better, so let me know if you get any more trouble here
* I think I fixed an infinite hang when trying to add certain siblings in manage tag siblings. I believe this was occurring when the dialog was testing if the new pair would create a loop when the sibling structure already contains a loop. now it throws up a message and breaks the test
* fixed an issue where certain system:filetype predicates would spawn apparent duplicates of themselves instead of removing on double-click. images+audio+video+swf+pdf was one example. it was a 'all the image types' vs 'list of (all the) image types' conversion/comparison/sorting issue
### client api
* **this is later than I expected, but as was planned last year, I am clearing up several obsolete parameters and data structures this week. mostly it is bad service name-identification that seemed simple or flexible to support but just added maintenance debt, induced bad implementation practises, and hindered future expansions. if you have a custom api script, please read on--and if you have not yet moved to the alternatives, do so before updating!**
* **all `...service_name...` parameters are officially obsolete! they will still work via some legacy hacks, so old scripts shouldn't break, but they are no longer documented. please move to the `...service_key...` alternates as soon as reasonably possible (check out `/get_services` if you need to learn about service keys)**
* **`/add_tags/get_tag_services` is removed! use `/get_services` instead!**
* **`hide_service_names_tags`, previously made default true, is removed and its data structures `service_names_to_statuses_to_...` are also gone! move to the new `tags` structure.**
* **`hide_service_keys_tags` is now default true. it will be removed in 4 weeks or so. same deal as with `service_names_to_statuses_to_...`--move to `tags`**
* **`system_inbox` and `system_archive` are removed from `/get_files/search_files`! just use 'system:inbox/archive' in the tags list**
* **the 'set_file_relationships' command from last week has been reworked to have a nicer Object parameter with a new name. please check the updated help!** normally I wouldn't change something so quick, but we are still in early prototype, so I'm ok shifting it (and the old method still works lmao, but I'll clear that code out in a few weeks, so please move over--the Object will be much nicer to expand in future, which I forgot about in v513)
* many Client API commands now support modern file domain objects, meaning you can search a UNION of file services and 'deleted-from' file services. affected commands are
    * /add_files/delete_files
    * /add_files/undelete_files
    * /add_tags/search_tags
    * /get_files/search_files
    * /manage_file_relationships/get_everything
* a new `/get_service` call now lets you ask about an individual service by service name or service key, basically a parameterised /get_services
* the `/manage_pages/get_pages` and `/manage_pages/get_page_info` calls now give the `page_state`, a new enum that says if the page is ready, initialised, searching, or search-cancelled
* to reduce duplicate argument spam, the client api help now specifies the complicated 'these files' and now 'this file domain' arguments into sub-sections, and the commands that use them just point to the subsections. check it out--it makes sense when you look at it.
* `/add_tags/add_tags` now raises 400 if you give an invalid content action (e.g. pending to a local tag service). previously it skipped these rows silently
* added and updated unit tests and help for the above changes
* client api version is now 41
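for scripts still sending the obsolete `...service_name...` parameters, the rewrite is mechanical. a hypothetical helper (fetch the name-to-key mapping once from `/get_services`; this is not part of the api itself):

```python
def migrate_service_params(params, name_to_key):
    # rewrite obsolete 'service_name'-style parameters to their
    # 'service_key' equivalents; everything else passes through as-is
    out = {}
    for key, value in params.items():
        if 'service_name' in key:
            out[key.replace('service_name', 'service_key')] = name_to_key[value]
        else:
            out[key] = value
    return out
```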
### boring optimisation
* when you are looking at a search log or file log, if entries are added, removed, or moved around, all the log entries that have changed row # now update (previously it just sent a redraw signal for the new rows, not the second-order affected rows that were shuffled up/down). many access routines for these logs are sped up
* file log status checking is completely rewritten. the ways it searches, caches and optimises the 'which is the next item with x status' queues is faster and requires far less maintenance. large import queues have less overhead, so the in and outs of general download work should scale up much better now
* the main data cache that stores rendered images, image tiles, and thumbnails now maintains itself far more efficiently. there was a hellish O(n) overhead when adding or removing an item which has been reduced to constant time. this gonk was being spammed every few minutes during normal memory maintenance, when hundreds of thumbs can be purged at once. clients with tens of thousands of thumbnails in memory will maintain that list far more smoothly
* physical file delete is now more efficient, requiring far fewer hard drive hits to delete a media file. it is also far less aggressive, with a new setting in _options->files and trash_ that sets how long to wait between individual file deletes, default 250ms. before, it was full LFG mode with minor delays every hundred/thousand jobs, and since it takes a write lock, it was lagging out thumbnail load when hitting a lot of work. the daemon here also shuts down faster if caught working during program shut down
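the throttle described above is essentially just a sleep between individual unlinks. a sketch under hypothetical names (the real daemon lives inside hydrus and does more bookkeeping):

```python
import time
from pathlib import Path

def delete_files_throttled(paths, delay_s=0.25):
    # wait between individual deletes so any lock we hold is released
    # often enough for other work (e.g. thumbnail loads) to get through
    for p in paths:
        Path(p).unlink(missing_ok=True)
        time.sleep(delay_s)
```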
### boring code cleanup
* refactored some parsing routines to be more flexible
* added some more dictionary and enum type testing to the client api parameter parsing routines. error messages should be better!
* improved how `/add_tags/add_tags` parsing works. ensuring both access methods check all types and report nicer errors
* cleaned up the `/search_files/file_metadata` call's parsing, moving to the new generalised method and smoothing out some old code flow. it now checks hashes against the last search, too
* cleaned up `/manage_pages/add_files` similarly
* cleaned up how tag services are parsed and their errors reported in the client api
* the client api is better about processing the file identifiers you give it in the same order you gave them
* fixed bad 'potentials_search_type'/'search_type' inconsistency in the client api help examples
* obviously a bunch of client api unit test and help cleanup to account for the obsolete stuff and various other changes here
* updated a bunch of the client api unit tests to handle some of the new parsing
* fixed the remaining 'randomly fail due to complex counting logic' potential count unit tests. turns out there were like seven more of them

@ -169,6 +169,23 @@ To start using a login script, select the domain and click 'edit credentials'. Y

Most sites only have one way of logging in, but hydrus does support more. Hentai Foundry is a good example--by default, the client performs the 'click-through' login as a guest, which requires no credentials and means any hydrus client can get any content from the start. But this way of logging in only lasts about 60 minutes or so before having to be refreshed, and it does not hide any spicy stuff, so if you use HF a lot, I recommend you create a throwaway account, set the filters you like in your HF profile (e.g. no guro content), and then use 'change login script' in the client to switch to the proper username/pass login.

The login system is not very clever. Don't try to pull off anything too weird with it! If anything goes wrong, it will likely delay the script (and hence the whole domain) from working for a while, or invalidate it entirely. If the error is something simple, like a password typo or current server maintenance, go back to this dialog to fix and scrub the error and try again. If the site just changed its layout, you may need to update the login script. If it is more complicated, please contact me, hydrus_dev, with the details!
If you would like to log in to a site that is not yet supported by hydrus (usually ones with a Captcha in the login page), you have two options:
1. Get a web browser add-on that lets you export a cookies.txt (either for the whole browser or just for that domain) and then drag and drop that cookies.txt file onto the hydrus _network->data->review session cookies_ dialog. This sometimes does not work if your add-on's export formatting is unusual. If it does work, hydrus will import and use those cookies, which skips the login by making your hydrus pretend to be your browser directly. This is obviously advanced and hacky, so if you need to do it, let me know how you get on and what tools you find work best!
2. Use [Hydrus Companion](https://gitgud.io/prkc/hydrus-companion) browser add-on to do the same basic thing automatically.
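For reference, a cookies.txt export is usually in the Netscape cookie file format: tab-separated fields of domain, include-subdomains flag, path, secure flag, expiry (a unix timestamp), cookie name, and cookie value. A made-up example line, just to show the shape:

```
# Netscape HTTP Cookie File
.example.com	TRUE	/	TRUE	1735689600	session_id	0123456789abcdef
```

If the dialog rejects your file, check whether your add-on wrote something other than this layout.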
## Difficult Sites
Boorus are usually easy to parse from, and there are many hydrus downloaders available that work well. Other sites are less easy to download from. Some will purposefully disguise access behind captchas or difficult login tokens that the hydrus downloader just isn't clever enough to handle. In these cases, it can be best just to go to an external downloader program that is specially tuned for these complex sites.
It takes a bit of time to set up these sorts of programs--and if you get into them, you'll likely want to make a script to help automate their use--but if you know they solve your problem, it is well worth it!
- [yt-dlp](https://github.com/yt-dlp/yt-dlp) - This is an excellent video downloader that can download from hundreds of different websites. Learn how it works, it is useful for all sorts of things!
- [gallery-dl](https://github.com/mikf/gallery-dl) - This is an excellent image and small-vid downloader that works for pretty much any booru and many larger/professional gallery sites, particularly when those sites need logins. Check the documentation, since you may be able to get it to rip cookies right out of your firefox, or you can give it your actual user/password for many sites and it'll handle all the login for you.
- [imgbrd-grabber](https://github.com/Bionus/imgbrd-grabber) - Another excellent, mostly booru downloader, with a UI. You can export some metadata to filenames, which you might like to then suck up with hydrus filename-import-parsing.
With these tools, used manually and/or with some scripts you set up, you may be able to set up a regular import workflow to hydrus (especially with an `Import Folder` as under the `file` menu) and get _most_ of what you would with an internal downloader. Some things like known URLs and tag parsing may be limited or non-existent, but it is better than nothing, and if you only need to do it for a couple sources on a couple sites every month, you can fill in most of the gap manually yourself.
Hydev is planning to roll yt-dlp and gallery-dl support into the program natively in a future update of the downloader engine.
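As a concrete sketch of that external-workflow handoff (the paths and extension list here are hypothetical; hydrus's own Import Folder does the watching side), a small script can sweep a downloader's staging directory into the watched folder:

```python
import shutil
from pathlib import Path

MEDIA_EXTS = {'.jpg', '.jpeg', '.png', '.gif', '.webm', '.mp4'}

def sweep_into_inbox(staging, inbox):
    # move finished downloads from e.g. gallery-dl's output directory
    # into the folder a hydrus Import Folder is set to watch
    staging, inbox = Path(staging), Path(inbox)
    inbox.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in staging.rglob('*'):
        if f.is_file() and f.suffix.lower() in MEDIA_EXTS:
            shutil.move(str(f), str(inbox / f.name))
            moved += 1
    return moved
```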

@ -34,6 +34,52 @@

<div class="content">
<h1 id="changelog"><a href="#changelog">changelog</a></h1>
<ul>
<li>
<h2 id="version_524"><a href="#version_524">version 524</a></h2>
<ul>
<li><h3>timestamp sidecars</h3></li>
<li>the sidecars system now supports timestamps. it just uses the unix timestamp number, but if you need it, you can use string conversion to create a full datestring. each sidecar node only selects/sets that one timestamp, so this may get spammy if you want to migrate everything, but you can now migrate archived/imported/whatever time from one client to another! the content updates from sidecar imports apply immediately _after_ the file is fully imported, so it is safe and good to sidecar-import 'my files imported time' etc. for new files, and it should all get set correctly, but obviously let me know otherwise. if you set 'archived time', the files have to be in an archived state immediately after import, which means importing and archiving them previously, or hitting 'archive all imports' on the respective file import options</li>
<li>sidecars are getting complex, so I expect I will soon add a button that sets up a 'full' JSON sidecar import/export in one click, basically just spamming/sucking everything the sidecar system can do, so it is easier to set up larger migrations</li>
<li><h3>timestamp merge</h3></li>
<li>the duplicate merge options now have an action for 'sync file modified date?'. you can set so both files get their earliest (the new default for 'they are the same'), or that the earlier worse can be applied to the later better (the new default for 'this is better') (issue #1203)</li>
<li>in the duplicate system, when URLs are merged, their respective domain-based timestamps are also merged according to the earliest, as above</li>
<li><h3>more timestamps</h3></li>
<li>hydrus now supports timestamps before 1970. should be good now, lol, back to 1AD (and my tests show BC dates seem to be working too?). it is probably a meme to apply a modified date of 1505 to some painting, but when I add timestamps to the API maybe we can have some fun. btw calendar calculations and timezones are hell on earth at times, and there's a decent chance that your pre-1970 dates may show up an hour out of phase in labels (a daylight savings time thing) from what you enter in some other area of the UI. in either case, my code is not clever enough to apply DST schedules retroactively to older dates, so your search ranges may simply be an hour out back in 1953. it sounds stupid, but it may matter if we are talking midnight boundaries, so let me know how you find it</li>
<li>when you set a new file modified date, the file on disk's modified date will only be updated if the date set is after 1980-01-01 (Windows) or 1970-01-01 (Linux) due to system limitations</li>
<li>fixed a typo bug in last week's work that meant file service timestamp editing was not updating the media object (i.e. changes were not visible until a restart)</li>
<li>fixed a bug where collections that contained files with delete timestamps were throwing errors on display. (they were calculating aggregate timestamp data wrong)</li>
<li>I rejiggered how the 'is this timestamp sensible?' test applies. this test essentially discounts any timestamp before 1970-01-08 to catch any weird mis-parses and stop them nuking your aggregate modified timestamp values. it now won't apply to internal duplicate merge and so on, but it still applies when you parse timestamps in the downloader system, so you still can't parse anything pre-1970 for now</li>
<li>one thing I noticed is my '5 years 1 months ago' calculation, which uses a fixed 30 day month and doesn't count the extra day of leap years, is showing obviously increasingly inaccurate numbers here. I'll fix it up</li>
|
||||
<li><h3>export folders</h3></li>
|
||||
<li>export folders can now show a popup while they work. there's a new checkbox for it in their edit UI. default is ON, so you'll start seeing popups for export folders that run in the background. this popup is cancellable, too, so you can now stop in-progress export runs if things seem wrong</li>
|
||||
<li>both import and export folders will force-show working popups whenever you trigger them manually</li>
|
||||
<li>export folders no longer have the weird and confusing 'paused' and 'run regularly?' duality. this was a legacy error handling thing, now cleaned up and merged into 'run regularly?'</li>
|
||||
<li>when 'run regularly?' is unchecked, the run period and new 'show popup while working regularly?' checkboxes are now disabled</li>
|
||||
<li><h3>misc</h3></li>
|
||||
<li>added 'system:ratio is square/portrait/landscape' nicer label aliases for =/taller/wider 1:1 ratio. I added them to the quick-select list on the edit panel, too. they also parse in the system predicate parser!</li>
|
||||
<li>I added a bit to the 'getting started with downloading' help page about getting access to difficult sites. I refer to Hydrus Companion as a good internal login solution, and link to yt-dlp, gallery-dl, and imgbrd-grabber with a little discussion on setting up external import workflows. I tried gallery-dl on twitter this week and it was excellent. it can also take your login credentials as either user/pass or cookies.txt (or pull cookies straight from firefox/safari) and give access to nsfw. since twitter has rapidly become a pain for us recently, I will be pointing people to gallery-dl for now</li>
|
||||
<li>fixed my Qt subclass definitions for PySide6 6.5.0, which strictly requires the Qt object to be the rightmost base class in multiple inheritance subclasses, wew. this his AUR users last week, I understand!</li>
|
||||
<li><h3>client api (and local booru lol)</h3></li>
|
||||
<li>if you set the Client API to not allow non-local connections, it now binds to 127.0.0.1 and ::1 specifically, which tell your OS we only want the loopback interface. this increases security, and on Windows _should_ mean it only does that first-time firewall dialog popup when 'allow non-local connections' is unchecked</li>
|
||||
<li>I brushed up the manage services UI for the Client API. the widgets all line up better now, and turning the service on and off isn't the awkward '[] do not run the service' any more</li>
|
||||
<li>fixed the 'disable idle mode if the client api does stuff' check, which was wired up wrong! also, the reset here now fires as a request starts, not when it is complete, meaning if you are already in idle mode, a client api request will now quickly cancel idle mode and hopefully free up any locked database situation promptly</li>
|
||||
<li><h3>boring cleanup and stuff</h3></li>
|
||||
<li>reworked all timestamp-datetime conversion to be happier with pre-1970 dates regardless of system/python support. it is broadly improved all around</li>
|
||||
<li>refactored all of the HydrusData time functions and much of ClientTime to a new HydrusTime module</li>
|
||||
<li>refactored the ClientData time stuff to ClientTime</li>
|
||||
<li>refactored some thread/process functions from HydrusData to HydrusThreading</li>
|
||||
<li>refactored some list splitting/throttling functions from HydrusData to a new HydrusLists module</li>
|
||||
<li>refactored the file filter out of ClientMedia and into the new ClientMediaFileFilter, and reworked things so the medialist filter jobs now happen at the filter level. this was probably done the wrong way around, but oh well</li>
|
||||
<li>expanded the new TimestampData object a bit, it can now give a nice descriptive string of itself</li>
|
||||
<li>wrote a new widget to edit TimestampData stubs</li>
|
||||
<li>wrote some unit tests for the new timestamp sidecar importer and exporter</li>
|
||||
<li>updated my multi-column list system to handle the deprecation of a column definition (today it was the 'paused' column in manage export folders list)</li>
|
||||
<li>it should also be able to handle new column definitions appearing</li>
|
||||
<li>fixed an error popup that still said 'run repair invalid tags' instead of 'run fix invalid tags'</li>
|
||||
<li>the FILE_SERVICES constant now holds the 'all deleted files' virtual domain. this domain keeps slipping my logic, so fingers crossed this helps. also means you can select it in 'system:file service' and stuff now</li>
|
||||
<li>misc cleaning and linting work</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<h2 id="version_523"><a href="#version_523">version 523</a></h2>
|
||||
<ul>
|
||||
|
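The pre-1970 work in the notes above comes down to not asking the platform's `localtime` for negative unix timestamps, since that call fails outright on some systems. A minimal sketch of the idea, with hypothetical helper names (this is not the actual HydrusTime code):

```python
from datetime import datetime, timedelta, timezone

# naive UTC epoch; computing with a timedelta sidesteps the OSError that
# datetime.fromtimestamp raises for negative timestamps on Windows
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def timestamp_to_datetime(timestamp):
    # works for any timestamp a timedelta can represent, back past 1AD
    return EPOCH + timedelta(seconds=timestamp)

def datetime_to_timestamp(dt):
    # inverse conversion; negative for anything before 1970
    return int((dt - EPOCH).total_seconds())
```

With these, a modified date of 1505 is just a large negative number that round-trips cleanly, which is the behaviour the changelog describes. DST labelling is a separate problem: these helpers are timezone-naive on purpose.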
@@ -13148,18 +13194,22 @@
 <li>improved some error handling code</li>
 <li>reintroduced message printing</li>
 <li>improved subscriptions messaging</li>
-<li><h3>added cancel button to</h3></li>
-<ul>
-<li>check file integrity</li>
-<li>export to tag archive</li>
-</ul>
-<li><h3>added pause and cancel buttons to</h3></li>
-<ul>
-<li>repository sync</li>
-<li>subscription sync</li>
-<li>pending upload</li>
-<li>regenerate thumbnails</li>
-</ul>
+<li>
+<h3>added cancel button to</h3>
+<ul>
+<li>check file integrity</li>
+<li>export to tag archive</li>
+</ul>
+</li>
+<li>
+<h3>added pause and cancel buttons to</h3>
+<ul>
+<li>repository sync</li>
+<li>subscription sync</li>
+<li>pending upload</li>
+<li>regenerate thumbnails</li>
+</ul>
+</li>
 <li>improved how jobs' pausability and cancelability are spawned</li>
 <li>improved and harmonised a lot of pause and cancel and general shutdown-job-interaction logic</li>
 <li>pausable and cancellable popups can only be dismissed with right click once they are done or cancelled</li>
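The pause/cancel plumbing this hunk describes follows a simple polling pattern: the worker checks flag objects between units of work, so a cancel takes effect at the next iteration rather than instantly. A sketch under assumed names (not hydrus's real job classes):

```python
import threading
import time

class CancellableJob:
    # a popup's cancel button would flip these events; the worker loop
    # polls them between work items
    def __init__(self):
        self._cancelled = threading.Event()
        self._paused = threading.Event()

    def cancel(self):
        self._cancelled.set()

    def pause(self):
        self._paused.set()

    def resume(self):
        self._paused.clear()

    def run(self, work_items, do_work):
        results = []
        for item in work_items:
            # block while paused, but still honour a cancel
            while self._paused.is_set() and not self._cancelled.is_set():
                time.sleep(0.05)
            if self._cancelled.is_set():
                break
            results.append(do_work(item))
        return results
```

A job cancelled before `run` starts processes nothing; one cancelled mid-run keeps whatever it already finished, which matches how a cancellable popup stops an in-progress export.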
@@ -6,6 +6,7 @@ from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime

 from hydrus.client import ClientSearch

@@ -114,7 +115,7 @@ class APIManager( HydrusSerialisable.SerialisableBase ):

         with self._lock:

-            self._session_keys_to_access_keys_and_expirys[ session_key ] = ( access_key, HydrusData.GetNow() + SESSION_EXPIRY )
+            self._session_keys_to_access_keys_and_expirys[ session_key ] = ( access_key, HydrusTime.GetNow() + SESSION_EXPIRY )

         return session_key

@@ -131,14 +132,14 @@ class APIManager( HydrusSerialisable.SerialisableBase ):

             ( access_key, session_expiry ) = self._session_keys_to_access_keys_and_expirys[ session_key ]

-            if HydrusData.TimeHasPassed( session_expiry ):
+            if HydrusTime.TimeHasPassed( session_expiry ):

                 del self._session_keys_to_access_keys_and_expirys[ session_key ]

                 raise HydrusExceptions.SessionException( 'That session key has expired!' )

-            self._session_keys_to_access_keys_and_expirys[ session_key ] = ( access_key, HydrusData.GetNow() + SESSION_EXPIRY )
+            self._session_keys_to_access_keys_and_expirys[ session_key ] = ( access_key, HydrusTime.GetNow() + SESSION_EXPIRY )

         return access_key

@@ -346,7 +347,7 @@ class APIPermissions( HydrusSerialisable.SerialisableBaseNamed ):
             raise HydrusExceptions.InsufficientCredentialsException( error_text )

-        self._search_results_timeout = HydrusData.GetNow() + SEARCH_RESULTS_CACHE_TIMEOUT
+        self._search_results_timeout = HydrusTime.GetNow() + SEARCH_RESULTS_CACHE_TIMEOUT

@@ -432,7 +433,7 @@ class APIPermissions( HydrusSerialisable.SerialisableBaseNamed ):

         with self._lock:

-            if self._last_search_results is not None and HydrusData.TimeHasPassed( self._search_results_timeout ):
+            if self._last_search_results is not None and HydrusTime.TimeHasPassed( self._search_results_timeout ):

                 self._last_search_results = None

@@ -450,7 +451,7 @@ class APIPermissions( HydrusSerialisable.SerialisableBaseNamed ):

             self._last_search_results = set( hash_ids )

-            self._search_results_timeout = HydrusData.GetNow() + SEARCH_RESULTS_CACHE_TIMEOUT
+            self._search_results_timeout = HydrusTime.GetNow() + SEARCH_RESULTS_CACHE_TIMEOUT

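Every hunk above swaps a `HydrusData` time helper for a `HydrusTime` one with the same name, so the helpers' contract is worth spelling out. A minimal assumed sketch (not the real HydrusTime module):

```python
import time

def GetNow():
    # whole-second unix timestamp, used for expiry bookkeeping
    return int(time.time())

def GetNowPrecise():
    # float with sub-second resolution, for short deadlines like frame budgets
    return time.perf_counter()

def TimeHasPassed(timestamp):
    # True once the current time is past the given unix timestamp
    return GetNow() > timestamp

def TimeHasPassedPrecise(precise_timestamp):
    return GetNowPrecise() > precise_timestamp
```

An expiry is then just `GetNow() + SESSION_EXPIRY` stored at creation and checked later with `TimeHasPassed`, exactly the shape the session and search-cache code above uses.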
@@ -3,6 +3,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime

 SIMPLE_ARCHIVE_DELETE_FILTER_BACK = 0
 SIMPLE_ARCHIVE_DELETE_FILTER_DELETE = 1

@@ -667,7 +668,7 @@ class ApplicationCommand( HydrusSerialisable.SerialisableBase ):

             direction_s = 'back' if direction == -1 else 'forwards'

-            ms_s = HydrusData.TimeDeltaToPrettyTimeDelta( ms / 1000 )
+            ms_s = HydrusTime.TimeDeltaToPrettyTimeDelta( ms / 1000 )

             s = '{} ({} {})'.format( s, direction_s, ms_s )
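`TimeDeltaToPrettyTimeDelta` above turns a seconds count into a human label. A naive sketch of that kind of formatter, using the fixed 30-day month the changelog's '5 years 1 months ago' note complains about (illustrative only, not the real function):

```python
def pretty_time_delta(seconds):
    # fixed-size units: a 30-day month and a 365-day year, so long
    # spans slowly drift away from real calendar arithmetic
    units = (
        ('year', 365 * 86400),
        ('month', 30 * 86400),
        ('day', 86400),
        ('hour', 3600),
        ('minute', 60),
        ('second', 1),
    )
    for name, size in units:
        if seconds >= size:
            n = int(seconds // size)
            return '{} {}{}'.format(n, name, 's' if n != 1 else '')
    return 'just now'
```

Because 12 of these months are only 360 days, the month count creeps upward on multi-year deltas, which is the inaccuracy the changelog says it will fix.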
@@ -11,6 +11,7 @@ from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusThreading
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientFiles

@@ -95,7 +96,7 @@ class DataCache( object ):
                 del self._keys_fifo[ key ]

-            self._keys_fifo[ key ] = HydrusData.GetNow()
+            self._keys_fifo[ key ] = HydrusTime.GetNow()

     def Clear( self ):

@@ -209,7 +210,7 @@ class DataCache( object ):

                 ( key, last_access_time ) = next( iter( self._keys_fifo.items() ) )

-                if HydrusData.TimeHasPassed( last_access_time + self._timeout ):
+                if HydrusTime.TimeHasPassed( last_access_time + self._timeout ):

                     self._DeleteItem()

@@ -275,7 +276,7 @@ class LocalBooruCache( object ):

         timeout = info[ 'timeout' ]

-        if timeout is not None and HydrusData.TimeHasPassed( timeout ):
+        if timeout is not None and HydrusTime.TimeHasPassed( timeout ):

             raise HydrusExceptions.NotFoundException( 'This share has expired.' )

@@ -390,7 +391,7 @@ class ParsingCache( object ):

     def __init__( self ):

-        self._next_clean_cache_time = HydrusData.GetNow()
+        self._next_clean_cache_time = HydrusTime.GetNow()

         self._html_to_soups = {}
         self._json_to_jsons = {}

@@ -400,7 +401,7 @@ class ParsingCache( object ):

     def _CleanCache( self ):

-        if HydrusData.TimeHasPassed( self._next_clean_cache_time ):
+        if HydrusTime.TimeHasPassed( self._next_clean_cache_time ):

             for cache in ( self._html_to_soups, self._json_to_jsons ):

@@ -408,7 +409,7 @@ class ParsingCache( object ):

                 for ( data, ( last_accessed, parsed_object ) ) in cache.items():

-                    if HydrusData.TimeHasPassed( last_accessed + 10 ):
+                    if HydrusTime.TimeHasPassed( last_accessed + 10 ):

                         dead_datas.add( data )

@@ -420,7 +421,7 @@ class ParsingCache( object ):

-            self._next_clean_cache_time = HydrusData.GetNow() + 5
+            self._next_clean_cache_time = HydrusTime.GetNow() + 5

@@ -436,7 +437,7 @@ class ParsingCache( object ):

         with self._lock:

-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()

             if json_text not in self._json_to_jsons:

@@ -465,7 +466,7 @@ class ParsingCache( object ):

         with self._lock:

-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()

             if html not in self._html_to_soups:

@@ -1149,7 +1150,7 @@ class ThumbnailCache( object ):
             self._waterfall_event.clear()

-            start_time = HydrusData.GetNowPrecise()
+            start_time = HydrusTime.GetNowPrecise()

             stop_time = start_time + 0.005 # a bit of a typical frame

             page_keys_to_rendered_medias = collections.defaultdict( list )

@@ -1157,7 +1158,7 @@ class ThumbnailCache( object ):
             num_done = 0
             max_at_once = 16

-            while not HydrusData.TimeHasPassedPrecise( stop_time ) and num_done <= max_at_once:
+            while not HydrusTime.TimeHasPassedPrecise( stop_time ) and num_done <= max_at_once:

                 with self._lock:
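`DataCache`'s `_keys_fifo` logic above is an access-ordered dict with lazy timeout eviction: touching a key moves it to the back, and eviction pops from the front while the oldest entry has expired. A compact sketch of the same idea (an assumed class, not the real DataCache; `now` parameters added here so the behaviour is testable without sleeping):

```python
import collections
import time

class TimeoutCache:
    def __init__(self, timeout=600):
        self._timeout = timeout
        self._data = {}
        # key -> last access time, oldest first
        self._keys_fifo = collections.OrderedDict()

    def _touch(self, key, now):
        # re-inserting moves the key to the back of the fifo
        if key in self._keys_fifo:
            del self._keys_fifo[key]
        self._keys_fifo[key] = now

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = value
        self._touch(key, now)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key in self._data:
            self._touch(key, now)
        return self._data.get(key)

    def evict_expired(self, now=None):
        # pop from the front until the oldest entry is young enough
        now = time.time() if now is None else now
        while self._keys_fifo:
            key, last_access = next(iter(self._keys_fifo.items()))
            if now > last_access + self._timeout:
                del self._keys_fifo[key]
                del self._data[key]
            else:
                break
```

Because the fifo is ordered by last access, the `else: break` is safe: nothing behind the front entry can be older than it.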
@@ -21,6 +21,7 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTemp
 from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime
 from hydrus.core import HydrusVideoHandling
 from hydrus.core.networking import HydrusNetwork
 from hydrus.core.networking import HydrusNetworking

@@ -142,13 +143,13 @@ class App( QW.QApplication ):

         if HG.client_controller.ProgramIsShuttingDown():

-            screw_it_time = HydrusData.GetNow() + 30
+            screw_it_time = HydrusTime.GetNow() + 30

             while not HG.client_controller.ProgramIsShutDown():

                 time.sleep( 0.5 )

-                if HydrusData.TimeHasPassed( screw_it_time ):
+                if HydrusTime.TimeHasPassed( screw_it_time ):

                     return

@@ -322,7 +323,7 @@ class Controller( HydrusController.HydrusController ):
                 wake_time = self.GetTimestamp( 'now_awake' )

-                if HydrusData.TimeHasPassed( wake_time ):
+                if HydrusTime.TimeHasPassed( wake_time ):

                     job_key.Delete()

@@ -330,7 +331,7 @@ class Controller( HydrusController.HydrusController ):

                 else:

-                    job_key.SetStatusText( 'enabling I/O {}'.format( HydrusData.TimestampToPrettyTimeDelta( wake_time, just_now_threshold = 0 ) ) )
+                    job_key.SetStatusText( 'enabling I/O {}'.format( HydrusTime.TimestampToPrettyTimeDelta( wake_time, just_now_threshold = 0 ) ) )

                 time.sleep( 0.5 )

@@ -359,7 +360,7 @@ class Controller( HydrusController.HydrusController ):
             manager.Shutdown()

-        started = HydrusData.GetNow()
+        started = HydrusTime.GetNow()

         while False in ( manager.IsShutdown() for manager in managers ):

@@ -367,7 +368,7 @@ class Controller( HydrusController.HydrusController ):

             time.sleep( 0.1 )

-            if HydrusData.TimeHasPassed( started + 30 ):
+            if HydrusTime.TimeHasPassed( started + 30 ):

                 break

@@ -679,7 +680,7 @@ class Controller( HydrusController.HydrusController ):
                 return True

-        if not HydrusData.TimeHasPassed( self.GetBootTime() + 120 ):
+        if not HydrusTime.TimeHasPassed( self.GetBootTime() + 120 ):

             return False

@@ -694,7 +695,7 @@ class Controller( HydrusController.HydrusController ):

         if idle_period is not None:

-            if not HydrusData.TimeHasPassed( self.GetTimestamp( 'last_user_action' ) + idle_period ):
+            if not HydrusTime.TimeHasPassed( self.GetTimestamp( 'last_user_action' ) + idle_period ):

                 currently_idle = False

@@ -704,7 +705,7 @@ class Controller( HydrusController.HydrusController ):

         if idle_mouse_period is not None:

-            if not HydrusData.TimeHasPassed( self.GetTimestamp( 'last_mouse_action' ) + idle_mouse_period ):
+            if not HydrusTime.TimeHasPassed( self.GetTimestamp( 'last_mouse_action' ) + idle_mouse_period ):

                 currently_idle = False

@@ -714,7 +715,7 @@ class Controller( HydrusController.HydrusController ):

         if idle_mode_client_api_timeout is not None:

-            if not HydrusData.TimeHasPassed( self.GetTimestamp( 'last_client_api_action' ) + idle_mode_client_api_timeout ):
+            if not HydrusTime.TimeHasPassed( self.GetTimestamp( 'last_client_api_action' ) + idle_mode_client_api_timeout ):

                 currently_idle = False

@@ -731,7 +732,7 @@ class Controller( HydrusController.HydrusController ):

         if turning_idle:

-            self._idle_started = HydrusData.GetNow()
+            self._idle_started = HydrusTime.GetNow()

             self.pub( 'wake_daemons' )

@@ -751,7 +752,7 @@ class Controller( HydrusController.HydrusController ):
             return False

-        if self._idle_started is not None and HydrusData.TimeHasPassed( self._idle_started + 3600 ):
+        if self._idle_started is not None and HydrusTime.TimeHasPassed( self._idle_started + 3600 ):

             return True

@@ -763,7 +764,7 @@ class Controller( HydrusController.HydrusController ):

         self.frame_splash_status.SetSubtext( 'db' )

-        stop_time = HydrusData.GetNow() + ( self.options[ 'idle_shutdown_max_minutes' ] * 60 )
+        stop_time = HydrusTime.GetNow() + ( self.options[ 'idle_shutdown_max_minutes' ] * 60 )

         self.MaintainDB( maintenance_mode = HC.MAINTENANCE_SHUTDOWN, stop_time = stop_time )

@@ -773,7 +774,7 @@ class Controller( HydrusController.HydrusController ):

         for service in services:

-            if HydrusData.TimeHasPassed( stop_time ):
+            if HydrusTime.TimeHasPassed( stop_time ):

                 return

@@ -841,7 +842,7 @@ class Controller( HydrusController.HydrusController ):

         if not HG.query_planner_mode:

-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()

             HG.query_planner_start_time = now
             HG.query_planner_query_count = 0

@@ -864,7 +865,7 @@ class Controller( HydrusController.HydrusController ):

         if not HG.profile_mode:

-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()

             with HG.profile_counter_lock:

@@ -1354,7 +1355,7 @@ class Controller( HydrusController.HydrusController ):

             if tree_stop_time is None:

-                tree_stop_time = HydrusData.GetNow() + 30
+                tree_stop_time = HydrusTime.GetNow() + 30

             self.WriteSynchronous( 'maintain_similar_files_tree', maintenance_mode = maintenance_mode, stop_time = tree_stop_time )

@@ -1372,7 +1373,7 @@ class Controller( HydrusController.HydrusController ):

             if search_stop_time is None:

-                search_stop_time = HydrusData.GetNow() + 60
+                search_stop_time = HydrusTime.GetNow() + 60

             self.WriteSynchronous( 'maintain_similar_files_search_for_potential_duplicates', search_distance, maintenance_mode = maintenance_mode, stop_time = search_stop_time )

@@ -1411,7 +1412,7 @@ class Controller( HydrusController.HydrusController ):

         HydrusController.HydrusController.MaintainMemorySlow( self )

-        if HydrusData.TimeHasPassed( self.GetTimestamp( 'last_page_change' ) + 30 * 60 ):
+        if HydrusTime.TimeHasPassed( self.GetTimestamp( 'last_page_change' ) + 30 * 60 ):

             self.pub( 'delete_old_closed_pages' )

@@ -1542,7 +1543,7 @@ class Controller( HydrusController.HydrusController ):

     def ResetIdleTimerFromClientAPI( self ):

-        self.TouchTimestamp( 'last_client_api_request' )
+        self.TouchTimestamp( 'last_client_api_action' )

     def ResetPageChangeTimer( self ):

@@ -1832,13 +1833,22 @@ class Controller( HydrusController.HydrusController ):

         try:

+            if allow_non_local_connections:
+
+                interface = '::'
+
+            else:
+
+                interface = '::1'
+
             if use_https:

-                ipv6_port = reactor.listenSSL( port, http_factory, context_factory, interface = '::' )
+                ipv6_port = reactor.listenSSL( port, http_factory, context_factory, interface = interface )

             else:

-                ipv6_port = reactor.listenTCP( port, http_factory, interface = '::' )
+                ipv6_port = reactor.listenTCP( port, http_factory, interface = interface )

         except Exception as e:

@@ -1852,13 +1862,22 @@ class Controller( HydrusController.HydrusController ):

         try:

+            if allow_non_local_connections:
+
+                interface = ''
+
+            else:
+
+                interface = '127.0.0.1'
+
             if use_https:

-                ipv4_port = reactor.listenSSL( port, http_factory, context_factory )
+                ipv4_port = reactor.listenSSL( port, http_factory, context_factory, interface = interface )

             else:

-                ipv4_port = reactor.listenTCP( port, http_factory )
+                ipv4_port = reactor.listenTCP( port, http_factory, interface = interface )

         except:

@@ -2099,7 +2118,7 @@ class Controller( HydrusController.HydrusController ):

         else:

-            if HydrusData.TimeHasPassed( self.GetTimestamp( 'last_cpu_check' ) + 60 ):
+            if HydrusTime.TimeHasPassed( self.GetTimestamp( 'last_cpu_check' ) + 60 ):

                 cpu_times = psutil.cpu_percent( percpu = True )

@@ -2274,7 +2293,7 @@ class Controller( HydrusController.HydrusController ):

         while not image_renderer.IsReady():

-            if HydrusData.TimeHasPassed( start_time + 15 ):
+            if HydrusTime.TimeHasPassed( start_time + 15 ):

                 HydrusData.ShowText( 'The image did not render in fifteen seconds, so the attempt to copy it to the clipboard was abandoned.' )
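The two listen hunks above reduce to one rule: bind every interface only when non-local connections are allowed, otherwise bind loopback only. A sketch of just that decision (hypothetical helper; the empty string and `'::'` are the conventional "all interfaces" values Twisted passes through to the OS):

```python
def listen_interfaces(allow_non_local_connections):
    # loopback-only binding is what keeps the Client API private and,
    # per the changelog, avoids the Windows first-run firewall prompt
    if allow_non_local_connections:
        return '', '::'            # all IPv4 interfaces, all IPv6 interfaces
    return '127.0.0.1', '::1'      # IPv4 loopback, IPv6 loopback
```

The chosen pair is then handed to `reactor.listenTCP`/`listenSSL` as the `interface` argument, one listener per address family.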
@@ -5,6 +5,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime

 from hydrus.client import ClientConstants as CC

@@ -295,18 +295,6 @@ def ShowTextClient( text ):

     HG.client_controller.pub( 'message', job_key )

-def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
-
-    if HG.client_controller.new_options.GetBoolean( 'always_show_iso_time' ):
-
-        return HydrusData.ConvertTimestampToPrettyTime( timestamp )
-
-    else:
-
-        return HydrusData.BaseTimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, history_suffix = history_suffix, show_seconds = show_seconds, no_prefix = no_prefix )
-
-
-HydrusData.TimestampToPrettyTimeDelta = TimestampToPrettyTimeDelta

 def ToHumanBytes( size ):
@@ -5,6 +5,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusSerialisable
 from hydrus.core.networking import HydrusNetworking
+from hydrus.core import HydrusTime

 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC
@@ -262,9 +263,9 @@ def GetDefaultScriptRows():

     script_info = []

script_info.append( ( 32, 'iqdb danbooru', 2, HydrusData.GetNow(), '''["https://danbooru.iqdb.org/", 1, 0, [55, 1, [[], "some hash bytes"]], "file", {}, [[29, 1, ["link to danbooru", [27, 6, [[26, 1, [[62, 2, [0, "td", {"class": "image"}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], [[30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "character"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", 
{"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]], [30, 4, ["", 7, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, true, [51, 1, [2, "Source:*", null, null, "Source:"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [8, 0]]]]]], [30, 4, ["no iqdb match found", 8, [27, 6, [[26, 1, [[62, 2, [0, "th", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [false, [51, 1, [2, "Best match", null, null, "Best match"]]]]]]]''' ) )
script_info.append( ( 32, 'danbooru md5', 2, HydrusData.GetNow(), '''["https://danbooru.donmai.us/", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"page": "post", "s": "list"}, [[30, 4, ["we got sent back to main gallery page -- title test", 8, [27, 6, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed 
information"]]]], 0, false, "character"]], [30, 4, ["we got sent back to main gallery page -- page links exist", 8, [27, 6, [[26, 1, [[62, 2, [0, "div", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "class", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "pagination", null, null, "pagination"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "href", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]]]]''' ) )
script_info.append( ( 32, 'gelbooru md5', 2, HydrusData.GetNow(), '''["http://gelbooru.com/index.php", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"s": "list", "page": "post"}, [[30, 6, ["we got sent back to main gallery page -- title test", 8, [27, 7, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-general"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], ""]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-copyright"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "series"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-artist"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "creator"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-character"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "character"]], [30, 6, ["we got sent back to main gallery page -- page links exist", 8, [27, 7, [[26, 1, [[62, 2, [0, "div", {"id": "paginator"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 2, "class", 
[84, 1, [26, 1, []]]]], [true, [51, 1, [3, "", null, null, "pagination"]]]]]]]''' ) )
script_info.append( ( 32, 'iqdb danbooru', 2, HydrusTime.GetNow(), '''["https://danbooru.iqdb.org/", 1, 0, [55, 1, [[], "some hash bytes"]], "file", {}, [[29, 1, ["link to danbooru", [27, 6, [[26, 1, [[62, 2, [0, "td", {"class": "image"}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], [[30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "character"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", 
{"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]], [30, 4, ["", 7, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, true, [51, 1, [2, "Source:*", null, null, "Source:"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [8, 0]]]]]], [30, 4, ["no iqdb match found", 8, [27, 6, [[26, 1, [[62, 2, [0, "th", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [false, [51, 1, [2, "Best match", null, null, "Best match"]]]]]]]''' ) )
script_info.append( ( 32, 'danbooru md5', 2, HydrusTime.GetNow(), '''["https://danbooru.donmai.us/", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"page": "post", "s": "list"}, [[30, 4, ["we got sent back to main gallery page -- title test", 8, [27, 6, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed 
information"]]]], 0, false, "character"]], [30, 4, ["we got sent back to main gallery page -- page links exist", 8, [27, 6, [[26, 1, [[62, 2, [0, "div", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "class", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "pagination", null, null, "pagination"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "href", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]]]]''' ) )
script_info.append( ( 32, 'gelbooru md5', 2, HydrusTime.GetNow(), '''["http://gelbooru.com/index.php", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"s": "list", "page": "post"}, [[30, 6, ["we got sent back to main gallery page -- title test", 8, [27, 7, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-general"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], ""]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-copyright"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "series"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-artist"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "creator"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-character"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "character"]], [30, 6, ["we got sent back to main gallery page -- page links exist", 8, [27, 7, [[26, 1, [[62, 2, [0, "div", {"id": "paginator"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 2, "class", 
[84, 1, [26, 1, []]]]], [true, [51, 1, [3, "", null, null, "pagination"]]]]]]]''' ) )
return script_info
@ -7,6 +7,7 @@ from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
from hydrus.core import HydrusTemp
from hydrus.core import HydrusThreading
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientThreading
@ -10,12 +10,14 @@ from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
from hydrus.core import HydrusSerialisable
from hydrus.core import HydrusTags
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client import ClientThreading
from hydrus.client import ClientTime
from hydrus.client.importing.options import NoteImportOptions
from hydrus.client.media import ClientMedia
from hydrus.client.media import ClientMediaFileFilter
from hydrus.client.metadata import ClientTags

hashes_to_jpeg_quality = {}
@ -378,7 +380,7 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
score = 0

statement = '{}, {} {}'.format( ClientData.TimestampToPrettyTimeDelta( s_ts, history_suffix = ' old' ), operator, ClientData.TimestampToPrettyTimeDelta( c_ts, history_suffix = ' old' ) )
statement = '{}, {} {}'.format( ClientTime.TimestampToPrettyTimeDelta( s_ts, history_suffix = ' old' ), operator, ClientTime.TimestampToPrettyTimeDelta( c_ts, history_suffix = ' old' ) )

statements_and_scores[ 'time_imported' ] = ( statement, score )
@ -679,11 +681,11 @@ class DuplicatesManager( object ):

search_distance = HG.client_controller.new_options.GetInteger( 'similar_files_duplicate_pairs_search_distance' )

start_time = HydrusData.GetNowPrecise()
start_time = HydrusTime.GetNowPrecise()

( still_work_to_do, num_done ) = HG.client_controller.WriteSynchronous( 'maintain_similar_files_search_for_potential_duplicates', search_distance, maintenance_mode = HC.MAINTENANCE_FORCED, job_key = job_key, work_time_float = 0.5 )

time_it_took = HydrusData.GetNowPrecise() - start_time
time_it_took = HydrusTime.GetNowPrecise() - start_time

num_searched_estimate += num_done
@ -737,11 +739,50 @@ SYNC_ARCHIVE_NONE = 0
SYNC_ARCHIVE_IF_ONE_DO_BOTH = 1
SYNC_ARCHIVE_DO_BOTH_REGARDLESS = 2

def get_updated_domain_modified_timestamp_datas( destination_media: ClientMedia.MediaSingleton, source_media: ClientMedia.MediaSingleton, urls: typing.Collection[ str ] ):

from hydrus.client.networking import ClientNetworkingFunctions

domains = { ClientNetworkingFunctions.ConvertURLIntoDomain( url ) for url in urls }

timestamp_datas = []
source_timestamp_manager = source_media.GetLocationsManager().GetTimestampsManager()
destination_timestamp_manager = destination_media.GetLocationsManager().GetTimestampsManager()

for domain in domains:

source_timestamp = source_timestamp_manager.GetDomainModifiedTimestamp( domain )

if source_timestamp is not None:

timestamp_data = ClientTime.TimestampData.STATICDomainModifiedTime( domain, source_timestamp )

destination_timestamp = destination_timestamp_manager.GetDomainModifiedTimestamp( domain )

if destination_timestamp is None or ClientTime.ShouldUpdateModifiedTime( destination_timestamp, source_timestamp ):

timestamp_datas.append( timestamp_data )

return timestamp_datas

def get_domain_modified_content_updates( destination_media: ClientMedia.MediaSingleton, source_media: ClientMedia.MediaSingleton, urls: typing.Collection[ str ] ):

timestamp_datas = get_updated_domain_modified_timestamp_datas( destination_media, source_media, urls )

content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TIMESTAMP, HC.CONTENT_UPDATE_SET, ( destination_media.GetHash(), timestamp_data ) ) for timestamp_data in timestamp_datas ]

return content_updates
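The two `get_…` helpers above implement the v524 rule that, when URLs are merged between duplicate files, each domain's modified timestamp is merged "according to the earliest". A standalone sketch of that rule, with plain dicts standing in for hydrus's timestamp managers (the function names and dict shapes here are illustrative, not the real hydrus API):

```python
def should_update_modified_time(existing, candidate):
    # mirrors the rule ClientTime.ShouldUpdateModifiedTime appears to apply:
    # a candidate timestamp wins only if it is earlier than what we already have
    return candidate < existing

def merged_domain_timestamps(destination, source):
    # destination/source: dicts of domain -> unix modified timestamp
    updates = {}
    for domain, source_ts in source.items():
        dest_ts = destination.get(domain)
        if dest_ts is None or should_update_modified_time(dest_ts, source_ts):
            updates[domain] = source_ts
    return updates

dest = {'example.com': 1600000000}
src = {'example.com': 1500000000, 'other.net': 1650000000}
print(merged_domain_timestamps(dest, src))
# earlier example.com time wins; other.net is new to the destination, so it is copied
```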
class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_DUPLICATE_CONTENT_MERGE_OPTIONS
SERIALISABLE_NAME = 'Duplicate Content Merge Options'
SERIALISABLE_VERSION = 6
SERIALISABLE_VERSION = 7

def __init__( self ):
@ -753,6 +794,7 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):
self._sync_note_import_options = NoteImportOptions.NoteImportOptions()
self._sync_archive_action = SYNC_ARCHIVE_NONE
self._sync_urls_action = HC.CONTENT_MERGE_ACTION_NONE
self._sync_file_modified_date_action = HC.CONTENT_MERGE_ACTION_COPY

def _GetSerialisableInfo( self ):
@ -770,12 +812,28 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

serialisable_sync_note_import_options = self._sync_note_import_options.GetSerialisableTuple()

return ( serialisable_tag_service_actions, serialisable_rating_service_actions, self._sync_notes_action, serialisable_sync_note_import_options, self._sync_archive_action, self._sync_urls_action )
return (
serialisable_tag_service_actions,
serialisable_rating_service_actions,
self._sync_notes_action,
serialisable_sync_note_import_options,
self._sync_archive_action,
self._sync_urls_action,
self._sync_file_modified_date_action
)

def _InitialiseFromSerialisableInfo( self, serialisable_info ):

( serialisable_tag_service_actions, serialisable_rating_service_actions, self._sync_notes_action, serialisable_sync_note_import_options, self._sync_archive_action, self._sync_urls_action ) = serialisable_info
(
serialisable_tag_service_actions,
serialisable_rating_service_actions,
self._sync_notes_action,
serialisable_sync_note_import_options,
self._sync_archive_action,
self._sync_urls_action,
self._sync_file_modified_date_action
) = serialisable_info

self._tag_service_actions = [ ( bytes.fromhex( serialisable_service_key ), action, HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_filter ) ) for ( serialisable_service_key, action, serialisable_tag_filter ) in serialisable_tag_service_actions ]
self._rating_service_actions = [ ( bytes.fromhex( serialisable_service_key ), action ) for ( serialisable_service_key, action ) in serialisable_rating_service_actions ]
@ -872,6 +930,32 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):
return ( 6, new_serialisable_info )

if version == 6:

(
serialisable_tag_service_actions,
serialisable_rating_service_actions,
sync_notes_action,
serialisable_sync_note_import_options,
sync_archive_action,
sync_urls_action
) = old_serialisable_info

sync_file_modified_date_action = HC.CONTENT_MERGE_ACTION_NONE

new_serialisable_info = (
serialisable_tag_service_actions,
serialisable_rating_service_actions,
sync_notes_action,
serialisable_sync_note_import_options,
sync_archive_action,
sync_urls_action,
sync_file_modified_date_action
)

return ( 7, new_serialisable_info )
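The v6→v7 update above follows hydrus's usual serialisable migration pattern: unpack the old tuple, append the new field with a safe default, and re-emit under the bumped version number. A minimal generic sketch of that pattern (the version numbers and appended default mirror the diff; the function name and the constant's value are illustrative):

```python
def update_serialisable_info(version, old_info):
    # migrate one version step; real code loops until the current version is reached
    if version == 6:
        # v7 appends sync_file_modified_date_action; default it to 'no action'
        CONTENT_MERGE_ACTION_NONE = 2  # illustrative constant, not the real HC value
        new_info = tuple(old_info) + (CONTENT_MERGE_ACTION_NONE,)
        return (7, new_info)
    return (version, old_info)

version, info = update_serialisable_info(6, ('tags', 'ratings', 0, 'notes', 0, 0))
print(version, len(info))  # the old 6-tuple becomes a 7-tuple at version 7
```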
def GetRatingServiceActions( self ) -> typing.Collection[ tuple ]:
@ -888,6 +972,11 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):
return self._sync_archive_action

def GetSyncFileModifiedDateAction( self ) -> int:

return self._sync_file_modified_date_action

def GetSyncNotesAction( self ) -> int:

return self._sync_notes_action
@ -918,6 +1007,11 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):
self._sync_archive_action = sync_archive_action

def SetSyncFileModifiedDateAction( self, sync_file_modified_date_action: int ):

self._sync_file_modified_date_action = sync_file_modified_date_action

def SetSyncNotesAction( self, sync_notes_action: int ):

self._sync_notes_action = sync_notes_action
@ -1197,6 +1291,33 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

#

if self._sync_file_modified_date_action != HC.CONTENT_MERGE_ACTION_NONE:

first_timestamp = first_media.GetMediaResult().GetTimestampsManager().GetFileModifiedTimestamp()
second_timestamp = second_media.GetMediaResult().GetTimestampsManager().GetFileModifiedTimestamp()

if self._sync_file_modified_date_action == HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE:

if ClientTime.ShouldUpdateModifiedTime( first_timestamp, second_timestamp ):

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TIMESTAMP, HC.CONTENT_UPDATE_SET, ( first_hash, ClientTime.TimestampData.STATICFileModifiedTime( second_timestamp ) ) ) )

elif ClientTime.ShouldUpdateModifiedTime( second_timestamp, first_timestamp ):

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TIMESTAMP, HC.CONTENT_UPDATE_SET, ( second_hash, ClientTime.TimestampData.STATICFileModifiedTime( first_timestamp ) ) ) )

elif self._sync_file_modified_date_action == HC.CONTENT_MERGE_ACTION_COPY:

if ClientTime.ShouldUpdateModifiedTime( first_timestamp, second_timestamp ):

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TIMESTAMP, HC.CONTENT_UPDATE_SET, ( first_hash, ClientTime.TimestampData.STATICFileModifiedTime( second_timestamp ) ) ) )
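The block above wires the new 'sync file modified date?' action into duplicate merging: TWO_WAY_MERGE gives whichever file has the later modified time the other file's earlier one, while COPY only lets the second ('worse') file's earlier time flow to the first ('better') file. A self-contained sketch of that decision table, with made-up action constants standing in for the HC values:

```python
# illustrative action names; the real constants live in HydrusConstants
TWO_WAY_MERGE = 'two_way'
COPY = 'copy'

def modified_date_updates(action, first_ts, second_ts):
    # returns a list of (which_file, new_timestamp) pairs; the earlier time wins
    updates = []
    if action == TWO_WAY_MERGE:
        if second_ts < first_ts:
            updates.append(('first', second_ts))
        elif first_ts < second_ts:
            updates.append(('second', first_ts))
    elif action == COPY:
        # one-directional: only the first file can be updated
        if second_ts < first_ts:
            updates.append(('first', second_ts))
    return updates

print(modified_date_updates(TWO_WAY_MERGE, 10, 5))  # first file adopts 5
print(modified_date_updates(COPY, 5, 10))           # nothing: second is later
```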
#

if self._sync_urls_action != HC.CONTENT_MERGE_ACTION_NONE:
@ -1204,8 +1325,6 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):
first_urls = set( first_media.GetLocationsManager().GetURLs() )
second_urls = set( second_media.GetLocationsManager().GetURLs() )

content_updates = []

if self._sync_urls_action == HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE:

first_needs = second_urls.difference( first_urls )
@ -1213,12 +1332,16 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

if len( first_needs ) > 0:

content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( first_needs, first_hashes ) ) )
service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( first_needs, first_hashes ) ) )

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].extend( get_domain_modified_content_updates( first_media, second_media, first_needs ) )

if len( second_needs ) > 0:

content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( second_needs, second_hashes ) ) )
service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( second_needs, second_hashes ) ) )

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].extend( get_domain_modified_content_updates( second_media, first_media, second_needs ) )

elif self._sync_urls_action == HC.CONTENT_MERGE_ACTION_COPY:
@ -1227,13 +1350,10 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

if len( first_needs ) > 0:

content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( first_needs, first_hashes ) ) )
service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_URLS, HC.CONTENT_UPDATE_ADD, ( first_needs, first_hashes ) ) )

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].extend( get_domain_modified_content_updates( first_media, second_media, first_needs ) )

if len( content_updates ) > 0:

service_keys_to_content_updates[ CC.COMBINED_LOCAL_FILE_SERVICE_KEY ].extend( content_updates )
@ -1260,7 +1380,7 @@ class DuplicateContentMergeOptions( HydrusSerialisable.SerialisableBase ):

if media.HasDeleteLocked():

ClientMedia.ReportDeleteLockFailures( [ media ] )
ClientMediaFileFilter.ReportDeleteLockFailures( [ media ] )

continue
@ -12,8 +12,10 @@ from hydrus.core import HydrusExceptions
from hydrus.core import HydrusFileHandling
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
from hydrus.core import HydrusLists
from hydrus.core import HydrusPaths
from hydrus.core import HydrusThreading
from hydrus.core import HydrusTime
from hydrus.core.networking import HydrusNetworking

from hydrus.client import ClientConstants as CC
@ -325,18 +327,18 @@ class ClientFilesManager( object ):
check_period = 60

if HydrusData.TimeHasPassed( time_fetched + check_period ):
if HydrusTime.TimeHasPassed( time_fetched + check_period ):

free_space = HydrusPaths.GetFreeSpace( location )

self._locations_to_free_space[ location ] = ( free_space, HydrusData.GetNow() )
self._locations_to_free_space[ location ] = ( free_space, HydrusTime.GetNow() )

else:

free_space = HydrusPaths.GetFreeSpace( location )

self._locations_to_free_space[ location ] = ( free_space, HydrusData.GetNow() )
self._locations_to_free_space[ location ] = ( free_space, HydrusTime.GetNow() )

return free_space
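The free-space check above caches the result per storage location and only re-reads the disk once a check period has elapsed. The same cache-with-expiry shape, sketched with `time.time()` in place of hydrus's `GetNow`/`TimeHasPassed` helpers (the names and the injected `probe` callable are illustrative):

```python
import time

_free_space_cache = {}  # location -> (free_bytes, time_fetched)
CHECK_PERIOD = 60

def get_free_space(location, probe):
    # probe: callable that actually reads the disk, e.g. via shutil.disk_usage
    now = time.time()
    cached = _free_space_cache.get(location)
    if cached is not None:
        free_space, time_fetched = cached
        if now < time_fetched + CHECK_PERIOD:
            # cache is still fresh; skip the disk hit
            return free_space
    free_space = probe(location)
    _free_space_cache[location] = (free_space, now)
    return free_space
```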
@ -878,7 +880,7 @@ class ClientFilesManager( object ):

missing_prefixes = sorted( missing_dict[ missing_location ] )

missing_prefixes_string = ' ' + os.linesep.join( ( ', '.join( block ) for block in HydrusData.SplitListIntoChunks( missing_prefixes, 32 ) ) )
missing_prefixes_string = ' ' + os.linesep.join( ( ', '.join( block ) for block in HydrusLists.SplitListIntoChunks( missing_prefixes, 32 ) ) )

missing_string += os.linesep
missing_string += missing_location
@ -1577,11 +1579,11 @@ class ClientFilesManager( object ):

os.utime( path, ( existing_access_time, modified_timestamp ) )

HydrusData.Print( 'Successfully changed modified time of "{}" from {} to {}.'.format( path, HydrusData.ConvertTimestampToPrettyTime( existing_modified_time ), HydrusData.ConvertTimestampToPrettyTime( modified_timestamp ) ))
HydrusData.Print( 'Successfully changed modified time of "{}" from {} to {}.'.format( path, HydrusTime.TimestampToPrettyTime( existing_modified_time ), HydrusTime.TimestampToPrettyTime( modified_timestamp ) ))

except PermissionError:

HydrusData.Print( 'Tried to set modified time of {} to file "{}", but did not have permission!'.format( HydrusTime.TimestampToPrettyTime( modified_timestamp ), path ) )
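The `os.utime` call above sets a file's modified time while leaving its access time untouched, and the `except` branch handles the one common failure, a permission error. A minimal standalone version of that operation using only the stdlib (the helper name is made up):

```python
import os
import tempfile

def set_modified_time(path, modified_timestamp):
    # keep the existing access time; change only the modified time
    existing_access_time = os.stat(path).st_atime
    try:
        os.utime(path, (existing_access_time, modified_timestamp))
        return True
    except PermissionError:
        return False

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

set_modified_time(path, 86400)  # one day after the epoch, 1970-01-02
print(int(os.stat(path).st_mtime))
os.remove(path)
```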
@ -2054,7 +2056,7 @@ class FilesMaintenanceManager( object ):

if needed_to_dupe_the_file:

self._controller.WriteSynchronous( 'file_maintenance_add_jobs_hashes', { hash }, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES, HydrusData.GetNow() + ( 7 * 86400 ) )
self._controller.WriteSynchronous( 'file_maintenance_add_jobs_hashes', { hash }, REGENERATE_FILE_DATA_JOB_DELETE_NEIGHBOUR_DUPES, HydrusTime.GetNow() + ( 7 * 86400 ) )
@ -2254,13 +2256,13 @@ class FilesMaintenanceManager( object ):
return

next_gc_collect = HydrusData.GetNow() + 10
next_gc_collect = HydrusTime.GetNow() + 10

try:

big_pauser = HydrusData.BigJobPauser( wait_time = 0.8 )
big_pauser = HydrusThreading.BigJobPauser( wait_time = 0.8 )

last_time_jobs_were_cleared = HydrusData.GetNow()
last_time_jobs_were_cleared = HydrusTime.GetNow()
cleared_jobs = []

num_to_do = len( media_results )
@ -2424,7 +2426,7 @@ class FilesMaintenanceManager( object ):

if HydrusData.TimeHasPassed( last_time_jobs_were_cleared + 10 ) or len( cleared_jobs ) > 256:
if HydrusTime.TimeHasPassed( last_time_jobs_were_cleared + 10 ) or len( cleared_jobs ) > 256:

self._controller.WriteSynchronous( 'file_maintenance_clear_jobs', cleared_jobs )
@ -2604,9 +2606,9 @@ class FilesMaintenanceManager( object ):

try:

time_to_start = HydrusData.GetNow() + 15
time_to_start = HydrusTime.GetNow() + 15

while not HydrusData.TimeHasPassed( time_to_start ):
while not HydrusTime.TimeHasPassed( time_to_start ):

check_shutdown()
@ -9,6 +9,7 @@ from hydrus.client import ClientConstants as CC
from hydrus.core import HydrusData
from hydrus.core import HydrusImageHandling
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTime

cv_interpolation_enum_lookup = {}
@ -4,6 +4,7 @@ from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
@ -60,6 +61,8 @@ def GetPossibleFileDomainServicesInOrder( all_known_files_allowed: bool, only_lo

service_types_in_order.append( HC.COMBINED_LOCAL_FILE )

service_types_in_order.append( HC.COMBINED_DELETED_FILE )

service_types_in_order.append( HC.FILE_REPOSITORY )
service_types_in_order.append( HC.IPFS )
@ -4,6 +4,7 @@ import threading
import time

from hydrus.core import HydrusData
from hydrus.core import HydrusTime

class GlobalMaintenanceJobInterface( object ):
@ -60,14 +61,14 @@ class GlobalMaintenanceJobScheduler( object ):

def CanRun( self ):

if not HydrusData.TimeHasPassed( self._no_work_until_time ):
if not HydrusTime.TimeHasPassed( self._no_work_until_time ):

return False

# check shutdown, idle, foreground status

if not HydrusData.TimeHasPassed( self._next_run_time ):
if not HydrusTime.TimeHasPassed( self._next_run_time ):

return False
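`CanRun` above gates a maintenance job on two clocks: a back-off deadline (`no_work_until_time`) and the regular schedule (`next_run_time`), which `WorkCompleted` pushes forward by one period. A sketch of that gating with plain timestamps (attribute names mirror the diff; this is not the real hydrus scheduler):

```python
import time

class JobScheduler:
    def __init__(self, period):
        self._period = period
        self._no_work_until_time = 0
        self._next_run_time = 0

    def can_run(self, now=None):
        now = time.time() if now is None else now
        # both deadlines must have passed before the job may run
        if now < self._no_work_until_time:
            return False
        if now < self._next_run_time:
            return False
        return True

    def work_completed(self, now=None):
        now = time.time() if now is None else now
        self._next_run_time = now + self._period
```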
@ -87,7 +88,7 @@ class GlobalMaintenanceJobScheduler( object ):

def WorkCompleted( self ):

self._next_run_time = HydrusData.GetNow() + self._period
self._next_run_time = HydrusTime.GetNow() + self._period

# make this serialisable. it'll save like domain manager
@ -7,6 +7,7 @@ from qtpy import QtGui as QG
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
@ -249,7 +250,7 @@ class FileViewingStatsManager( object ):

self._pending_updates = {}

self._last_update = HydrusData.GetNow()
self._last_update = HydrusTime.GetNow()

self._my_flush_job = self._controller.CallRepeating( 5, 60, self.REPEATINGFlush )
@ -2,7 +2,9 @@ import os

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusLists
from hydrus.core import HydrusTagArchive
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientThreading
@ -25,7 +27,7 @@ def GetBasicSpeedStatement( num_done, time_started_precise ):

else:

time_taken = HydrusData.GetNowPrecise() - time_started_precise
time_taken = HydrusTime.GetNowPrecise() - time_started_precise

rows_s = int( num_done / time_taken )
@ -80,7 +82,7 @@ class MigrationDestinationHTA( MigrationDestination ):

self._hta.CommitBigJob()

if HydrusData.TimeHasPassed( self._time_started + 120 ):
if HydrusTime.TimeHasPassed( self._time_started + 120 ):

self._hta.Optimise()
@ -92,7 +94,7 @@ class MigrationDestinationHTA( MigrationDestination ):

def DoSomeWork( self, source ):

time_started_precise = HydrusData.GetNowPrecise()
time_started_precise = HydrusTime.GetNowPrecise()

num_done = 0
@ -110,7 +112,7 @@ class MigrationDestinationHTA( MigrationDestination ):

def Prepare( self ):

self._time_started = HydrusData.GetNow()
self._time_started = HydrusTime.GetNow()

self._hta = HydrusTagArchive.HydrusTagArchive( self._path )
@ -141,7 +143,7 @@ class MigrationDestinationHTPA( MigrationDestination ):

self._htpa.CommitBigJob()

if HydrusData.TimeHasPassed( self._time_started + 120 ):
if HydrusTime.TimeHasPassed( self._time_started + 120 ):

self._htpa.Optimise()
@ -153,7 +155,7 @@ class MigrationDestinationHTPA( MigrationDestination ):

def DoSomeWork( self, source ):

time_started_precise = HydrusData.GetNowPrecise()
time_started_precise = HydrusTime.GetNowPrecise()

data = source.GetSomeData()
@ -166,7 +168,7 @@ class MigrationDestinationHTPA( MigrationDestination ):

def Prepare( self ):

self._time_started = HydrusData.GetNow()
self._time_started = HydrusTime.GetNow()

self._htpa = HydrusTagArchive.HydrusTagPairArchive( self._path )
@ -204,7 +206,7 @@ class MigrationDestinationListMappings( MigrationDestinationList ):

def DoSomeWork( self, source ):

time_started_precise = HydrusData.GetNowPrecise()
time_started_precise = HydrusTime.GetNowPrecise()

num_done = 0
@ -224,7 +226,7 @@ class MigrationDestinationListPairs( MigrationDestinationList ):
|
|||
|
||||
def DoSomeWork( self, source ):
|
||||
|
||||
time_started_precise = HydrusData.GetNowPrecise()
|
||||
time_started_precise = HydrusTime.GetNowPrecise()
|
||||
|
||||
data = source.GetSomeData()
|
||||
|
||||
|
@ -260,7 +262,7 @@ class MigrationDestinationTagServiceMappings( MigrationDestinationTagService ):
|
|||
|
||||
def DoSomeWork( self, source ):
|
||||
|
||||
time_started_precise = HydrusData.GetNowPrecise()
|
||||
time_started_precise = HydrusTime.GetNowPrecise()
|
||||
|
||||
data = source.GetSomeData()
|
||||
|
||||
|
@ -309,7 +311,7 @@ class MigrationDestinationTagServicePairs( MigrationDestinationTagService ):
|
|||
|
||||
def DoSomeWork( self, source ):
|
||||
|
||||
time_started_precise = HydrusData.GetNowPrecise()
|
||||
time_started_precise = HydrusTime.GetNowPrecise()
|
||||
|
||||
data = source.GetSomeData()
|
||||
|
||||
|
@ -530,7 +532,7 @@ class MigrationSourceHTA( MigrationSource ):
|
|||
|
||||
def GetSomeData( self ):
|
||||
|
||||
data = HydrusData.PullNFromIterator( self._iterator, 256 )
|
||||
data = HydrusLists.PullNFromIterator( self._iterator, 256 )
|
||||
|
||||
if len( data ) == 0:
|
||||
|
||||
|
@ -626,7 +628,7 @@ class MigrationSourceHTPA( MigrationSource ):
|
|||
|
||||
def GetSomeData( self ):
|
||||
|
||||
data = HydrusData.PullNFromIterator( self._iterator, 256 )
|
||||
data = HydrusLists.PullNFromIterator( self._iterator, 256 )
|
||||
|
||||
if len( data ) == 0:
|
||||
|
||||
|
@ -666,7 +668,7 @@ class MigrationSourceList( MigrationSource ):
|
|||
|
||||
def GetSomeData( self ):
|
||||
|
||||
some_data = HydrusData.PullNFromIterator( self._iterator, 5 )
|
||||
some_data = HydrusLists.PullNFromIterator( self._iterator, 5 )
|
||||
|
||||
if len( some_data ) == 0:
|
||||
|
||||
|
|
|
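The `HydrusData` → `HydrusLists` rename above centres on a small batching helper: each `DoSomeWork` call pulls a bounded chunk from the source iterator, and an empty result signals exhaustion. A minimal sketch of that behaviour (a hypothetical re-implementation for illustration, not the hydrus function itself):

```python
import itertools
from typing import Iterator, List, TypeVar

T = TypeVar( 'T' )

# pull up to n items from an iterator per work cycle;
# an empty list means the source is exhausted
def pull_n_from_iterator( iterator: Iterator[ T ], n: int ) -> List[ T ]:
    
    return list( itertools.islice( iterator, n ) )

it = iter( range( 5 ) )

first = pull_n_from_iterator( it, 3 )   # [0, 1, 2]
second = pull_n_from_iterator( it, 3 )  # [3, 4]
third = pull_n_from_iterator( it, 3 )   # [] -- stop working
```

This is why the migration loop can simply test `len( data ) == 0` to know when to finish.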
@@ -8,6 +8,7 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusData
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientDefaults
@@ -332,6 +333,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
         duplicate_content_merge_options.SetRatingServiceActions( [ ( CC.DEFAULT_FAVOURITES_RATING_SERVICE_KEY, HC.CONTENT_MERGE_ACTION_MOVE ) ] )
         duplicate_content_merge_options.SetSyncArchiveAction( ClientDuplicates.SYNC_ARCHIVE_DO_BOTH_REGARDLESS )
         duplicate_content_merge_options.SetSyncURLsAction( HC.CONTENT_MERGE_ACTION_COPY )
+        duplicate_content_merge_options.SetSyncFileModifiedDateAction( HC.CONTENT_MERGE_ACTION_COPY )
         duplicate_content_merge_options.SetSyncNotesAction( HC.CONTENT_MERGE_ACTION_COPY )
 
         from hydrus.client.importing.options import NoteImportOptions
@@ -353,6 +355,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
         duplicate_content_merge_options.SetRatingServiceActions( [ ( CC.DEFAULT_FAVOURITES_RATING_SERVICE_KEY, HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE ) ] )
         duplicate_content_merge_options.SetSyncArchiveAction( ClientDuplicates.SYNC_ARCHIVE_DO_BOTH_REGARDLESS )
         duplicate_content_merge_options.SetSyncURLsAction( HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
+        duplicate_content_merge_options.SetSyncFileModifiedDateAction( HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
         duplicate_content_merge_options.SetSyncNotesAction( HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
 
         note_import_options = NoteImportOptions.NoteImportOptions()
@@ -1195,7 +1198,7 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
 
 
 
-    def GetDuplicateContentMergeOptions( self, duplicate_type ):
+    def GetDuplicateContentMergeOptions( self, duplicate_type ) -> ClientDuplicates.DuplicateContentMergeOptions:
 
         with self._lock:
 
@@ -15,6 +15,7 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientStrings
 from hydrus.client.networking import ClientNetworkingFunctions
@@ -129,7 +130,7 @@ def ConvertParseResultToPrettyString( result ):
 
             timestamp = int( parsed_text )
 
-            timestamp_string = HydrusData.ConvertTimestampToPrettyTime( timestamp )
+            timestamp_string = HydrusTime.TimestampToPrettyTime( timestamp )
 
         except:
 
@@ -534,7 +535,7 @@ def GetTimestampFromParseResults( results, desired_timestamp_type ):
 
         if timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN:
 
-            timestamp = min( HydrusData.GetNow() - 5, timestamp )
+            timestamp = min( HydrusTime.GetNow() - 5, timestamp )
 
         timestamp_results.append( timestamp )
 
@@ -12,6 +12,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 from hydrus.core import HydrusVideoHandling
 
 from hydrus.client import ClientFiles
@@ -12,6 +12,7 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientData
@@ -454,7 +455,7 @@ class FileSystemPredicates( object ):
 
                 age = ( years * 365 * 86400 ) + ( ( ( ( ( months * 30 ) + days ) * 24 ) + hours ) * 3600 )
 
-                now = HydrusData.GetNow()
+                now = HydrusTime.GetNow()
 
                 # this is backwards (less than means min timestamp) because we are talking about age, not timestamp
 
@@ -487,14 +488,14 @@ class FileSystemPredicates( object ):
 
                 ( year, month, day, hour, minute ) = age_value
 
-                dt = ClientTime.GetDateTime( year, month, day, hour, minute )
+                dt = HydrusTime.GetDateTime( year, month, day, hour, minute )
 
-                time_pivot = ClientTime.CalendarToTimestamp( dt )
+                time_pivot = HydrusTime.DateTimeToTimestamp( dt )
 
-                dt_day_of_start = ClientTime.GetDateTime( year, month, day, 0, 0 )
+                dt_day_of_start = HydrusTime.GetDateTime( year, month, day, 0, 0 )
 
-                day_of_start = ClientTime.CalendarToTimestamp( dt_day_of_start )
-                day_of_end = ClientTime.CalendarToTimestamp( ClientTime.CalendarDelta( dt_day_of_start, day_delta = 1 ) )
+                day_of_start = HydrusTime.DateTimeToTimestamp( dt_day_of_start )
+                day_of_end = HydrusTime.DateTimeToTimestamp( ClientTime.CalendarDelta( dt_day_of_start, day_delta = 1 ) )
 
                 # the before/since semantic logic is:
                 # '<' 2022-05-05 means 'before that date'
@@ -515,8 +516,8 @@ class FileSystemPredicates( object ):
 
                 elif operator == CC.UNICODE_ALMOST_EQUAL_TO:
 
-                    previous_month_timestamp = ClientTime.CalendarToTimestamp( ClientTime.CalendarDelta( dt, month_delta = -1 ) )
-                    next_month_timestamp = ClientTime.CalendarToTimestamp( ClientTime.CalendarDelta( dt, month_delta = 1 ) )
+                    previous_month_timestamp = HydrusTime.DateTimeToTimestamp( ClientTime.CalendarDelta( dt, month_delta = -1 ) )
+                    next_month_timestamp = HydrusTime.DateTimeToTimestamp( ClientTime.CalendarDelta( dt, month_delta = 1 ) )
 
                     self._timestamp_ranges[ predicate_type ][ '>' ] = previous_month_timestamp
                     self._timestamp_ranges[ predicate_type ][ '<' ] = next_month_timestamp
@@ -2337,7 +2338,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
             else:
 
-                base += ' {} {}'.format( operator, HydrusData.ConvertMillisecondsToPrettyTime( value ) )
+                base += ' {} {}'.format( operator, HydrusTime.MillisecondsToPrettyTime( value ) )
 
 
 
@@ -2427,6 +2428,22 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
             base += ' ' + operator + ' ' + str( ratio_width ) + ':' + str( ratio_height )
 
+            if ratio_width == 1 and ratio_height == 1:
+
+                if operator == 'wider than':
+
+                    base = 'ratio is landscape'
+
+                elif operator == 'taller than':
+
+                    base = 'ratio is portrait'
+
+                elif operator == '=':
+
+                    base = 'ratio is square'
+
+
+
 
 
         elif self._predicate_type == PREDICATE_TYPE_SYSTEM_SIZE:
 
@@ -2505,7 +2522,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
                     pretty_operator = 'unknown operator '
 
-                base += ': ' + pretty_operator + HydrusData.TimeDeltaToPrettyTimeDelta( time_delta ) + ' ago'
+                base += ': ' + pretty_operator + HydrusTime.TimeDeltaToPrettyTimeDelta( time_delta ) + ' ago'
 
             elif age_type == 'date':
 
@@ -2513,7 +2530,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
                 dt = datetime.datetime( year, month, day, hour, minute )
 
-                timestamp = ClientTime.CalendarToTimestamp( dt )
+                timestamp = HydrusTime.DateTimeToTimestamp( dt )
 
                 if operator == '<':
 
@@ -2534,8 +2551,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
                 include_24h_time = operator != '=' and ( hour > 0 or minute > 0 )
 
-                # convert this GMT TIMESTAMP to a pretty local string
-                base += ': ' + pretty_operator + HydrusData.ConvertTimestampToPrettyTime( timestamp, include_24h_time = include_24h_time )
+                base += ': ' + pretty_operator + HydrusTime.TimestampToPrettyTime( timestamp, include_24h_time = include_24h_time )
 
 
 
@@ -2865,7 +2881,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
             elif view_type == 'viewtime':
 
-                value_string = HydrusData.TimeDeltaToPrettyTimeDelta( viewing_value )
+                value_string = HydrusTime.TimeDeltaToPrettyTimeDelta( viewing_value )
 
 
             base = '{} {} {} {}'.format( domain, view_type, operator, value_string )
 
@@ -5,6 +5,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientSearch
@@ -170,6 +171,7 @@ pred_generators = {
     SystemPredicateParser.Predicate.NUM_OF_FRAMES : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_FRAMES, ( o, v ) ),
     SystemPredicateParser.Predicate.NUM_PIXELS : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_NUM_PIXELS, ( o, v, HydrusData.ConvertPixelsToInt( u ) ) ),
     SystemPredicateParser.Predicate.RATIO : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( o, v[0], v[1] ) ),
+    SystemPredicateParser.Predicate.RATIO_SPECIAL : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_RATIO, ( o, v[0], v[1] ) ),
     SystemPredicateParser.Predicate.TAG_AS_NUMBER : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_TAG_AS_NUMBER, ( o[0], o[1], v ) ),
     SystemPredicateParser.Predicate.MEDIA_VIEWS : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_VIEWING_STATS, ( 'views', ( 'media', ), o, v ) ),
     SystemPredicateParser.Predicate.PREVIEW_VIEWS : lambda o, v, u: ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_SYSTEM_FILE_VIEWING_STATS, ( 'views', ( 'preview', ), o, v ) ),
@@ -16,6 +16,7 @@ from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTemp
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.gui import ClientGUIFunctions
@@ -16,15 +16,16 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 from hydrus.core.networking import HydrusNATPunch
 from hydrus.core.networking import HydrusNetwork
 from hydrus.core.networking import HydrusNetworkVariableHandling
 from hydrus.core.networking import HydrusNetworking
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientData
 from hydrus.client import ClientFiles
 from hydrus.client import ClientThreading
 from hydrus.client import ClientTime
 from hydrus.client.gui import QtPorting as QP
 from hydrus.client.importing import ClientImporting
 from hydrus.client.metadata import ClientRatings
@@ -857,12 +858,12 @@ class ServiceRemote( Service ):
 
         duration = self._GetErrorWaitPeriod()
 
-        next_no_requests_until = HydrusData.GetNow() + duration
+        next_no_requests_until = HydrusTime.GetNow() + duration
 
         if next_no_requests_until > self._no_requests_until:
 
             self._no_requests_reason = reason
-            self._no_requests_until = HydrusData.GetNow() + duration
+            self._no_requests_until = HydrusTime.GetNow() + duration
 
         self._SetDirty()
 
@@ -894,9 +895,9 @@ class ServiceRemote( Service ):
 
     def _CheckCanCommunicateExternally( self, including_bandwidth = True ):
 
-        if not HydrusData.TimeHasPassed( self._no_requests_until ):
+        if not HydrusTime.TimeHasPassed( self._no_requests_until ):
 
-            raise HydrusExceptions.InsufficientCredentialsException( self._no_requests_reason + ' - next request ' + ClientData.TimestampToPrettyTimeDelta( self._no_requests_until ) )
+            raise HydrusExceptions.InsufficientCredentialsException( self._no_requests_reason + ' - next request ' + ClientTime.TimestampToPrettyTimeDelta( self._no_requests_until ) )
 
         if including_bandwidth:
 
@@ -1002,7 +1003,7 @@ class ServiceRestricted( ServiceRemote ):
 
             HG.client_controller.pub( 'notify_account_sync_due' )
 
-            self._next_account_sync = HydrusData.GetNow()
+            self._next_account_sync = HydrusTime.GetNow()
 
             HG.client_controller.network_engine.session_manager.ClearSession( self.network_context )
 
@@ -1017,7 +1018,7 @@ class ServiceRestricted( ServiceRemote ):
 
             self._account = HydrusNetwork.Account.GenerateUnknownAccount( account_key )
 
-            self._next_account_sync = HydrusData.GetNow() + ACCOUNT_SYNC_PERIOD
+            self._next_account_sync = HydrusTime.GetNow() + ACCOUNT_SYNC_PERIOD
 
             self._SetDirty()
 
@@ -1081,7 +1082,7 @@ class ServiceRestricted( ServiceRemote ):
 
             self._account = HydrusNetwork.Account.GenerateUnknownAccount( account_key )
 
-            self._next_account_sync = HydrusData.GetNow()
+            self._next_account_sync = HydrusTime.GetNow()
 
             self._SetDirty()
 
@@ -1165,13 +1166,13 @@ class ServiceRestricted( ServiceRemote ):
 
     def GetNextAccountSyncStatus( self ):
 
-        if HydrusData.TimeHasPassed( self._next_account_sync ):
+        if HydrusTime.TimeHasPassed( self._next_account_sync ):
 
             s = 'imminently'
 
         else:
 
-            s = ClientData.TimestampToPrettyTimeDelta( self._next_account_sync )
+            s = ClientTime.TimestampToPrettyTimeDelta( self._next_account_sync )
 
         return 'next account sync ' + s
 
@@ -1409,7 +1410,7 @@ class ServiceRestricted( ServiceRemote ):
 
         with self._lock:
 
-            self._next_account_sync = HydrusData.GetNow() - 1
+            self._next_account_sync = HydrusTime.GetNow() - 1
 
             self._SetDirty()
 
@@ -1446,13 +1447,13 @@ class ServiceRestricted( ServiceRemote ):
 
                 do_it = False
 
-                self._next_account_sync = HydrusData.GetNow() + SHORT_DELAY_PERIOD
+                self._next_account_sync = HydrusTime.GetNow() + SHORT_DELAY_PERIOD
 
                 self._SetDirty()
 
            else:
 
-                do_it = HydrusData.TimeHasPassed( self._next_account_sync )
+                do_it = HydrusTime.TimeHasPassed( self._next_account_sync )
 
 
 
@@ -1469,7 +1470,7 @@ class ServiceRestricted( ServiceRemote ):
 
             ( message, message_created ) = self._account.GetMessageAndTimestamp()
 
-            if message != '' and message_created != original_message_created and not HydrusData.TimeHasPassed( message_created + ( 86400 * 5 ) ):
+            if message != '' and message_created != original_message_created and not HydrusTime.TimeHasPassed( message_created + ( 86400 * 5 ) ):
 
                 m = 'New message for your account on {}:'.format( self._name )
                 m += os.linesep * 2
 
@@ -1570,7 +1571,7 @@ class ServiceRestricted( ServiceRemote ):
 
         with self._lock:
 
-            self._next_account_sync = HydrusData.GetNow() + ACCOUNT_SYNC_PERIOD
+            self._next_account_sync = HydrusTime.GetNow() + ACCOUNT_SYNC_PERIOD
 
             self._SetDirty()
 
@@ -1704,7 +1705,7 @@ class ServiceRepository( ServiceRestricted ):
 
                 return
 
-            it_took = HydrusData.GetNowPrecise() - precise_timestamp
+            it_took = HydrusTime.GetNowPrecise() - precise_timestamp
 
             rows_s = HydrusData.ToHumanInt( int( total_rows / it_took ) )
 
@@ -1715,7 +1716,7 @@ class ServiceRepository( ServiceRestricted ):
 
     def _ReportOngoingRowSpeed( self, job_key, rows_done, total_rows, precise_timestamp, rows_done_in_last_packet, row_name ):
 
-        it_took = HydrusData.GetNowPrecise() - precise_timestamp
+        it_took = HydrusTime.GetNowPrecise() - precise_timestamp
 
         rows_s = HydrusData.ToHumanInt( int( rows_done_in_last_packet / it_took ) )
 
@@ -2029,7 +2030,7 @@ class ServiceRepository( ServiceRestricted ):
 
         did_definition_analyze = False
         did_content_analyze = False
 
-        definition_start_time = HydrusData.GetNowPrecise()
+        definition_start_time = HydrusTime.GetNowPrecise()
 
         try:
 
@@ -2090,7 +2091,7 @@ class ServiceRepository( ServiceRestricted ):
 
             while len( iterator_dict ) > 0:
 
-                this_work_start_time = HydrusData.GetNowPrecise()
+                this_work_start_time = HydrusTime.GetNowPrecise()
 
                 if HG.client_controller.CurrentlyVeryIdle():
 
@@ -2108,11 +2109,11 @@ class ServiceRepository( ServiceRestricted ):
 
                     break_percentage = 0.1
 
-                start_time = HydrusData.GetNowPrecise()
+                start_time = HydrusTime.GetNowPrecise()
 
                 num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_definitions', self._service_key, definition_hash, iterator_dict, content_types, job_key, work_time )
 
-                time_it_took = HydrusData.GetNowPrecise() - start_time
+                time_it_took = HydrusTime.GetNowPrecise() - start_time
 
                 rows_done_in_this_update += num_rows_done
                 total_definition_rows_completed += num_rows_done
 
@@ -2156,7 +2157,7 @@ class ServiceRepository( ServiceRestricted ):
 
                 return
 
-        content_start_time = HydrusData.GetNowPrecise()
+        content_start_time = HydrusTime.GetNowPrecise()
 
         try:
 
@@ -2238,7 +2239,7 @@ class ServiceRepository( ServiceRestricted ):
 
             while len( iterator_dict ) > 0:
 
-                this_work_start_time = HydrusData.GetNowPrecise()
+                this_work_start_time = HydrusTime.GetNowPrecise()
 
                 if HG.client_controller.CurrentlyVeryIdle():
 
@@ -2256,11 +2257,11 @@ class ServiceRepository( ServiceRestricted ):
 
                     break_percentage = 0.1
 
-                start_time = HydrusData.GetNowPrecise()
+                start_time = HydrusTime.GetNowPrecise()
 
                 num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_content', self._service_key, content_hash, iterator_dict, content_types, job_key, work_time )
 
-                time_it_took = HydrusData.GetNowPrecise() - start_time
+                time_it_took = HydrusTime.GetNowPrecise() - start_time
 
                 rows_done_in_this_update += num_rows_done
                 total_content_rows_completed += num_rows_done
 
@@ -2531,7 +2532,7 @@ class ServiceRepository( ServiceRestricted ):
 
         # if a user is more than two weeks behind, let's assume they aren't 'caught up'
         CAUGHT_UP_BUFFER = 14 * 86400
 
-        two_weeks_ago = HydrusData.GetNow() - CAUGHT_UP_BUFFER
+        two_weeks_ago = HydrusTime.GetNow() - CAUGHT_UP_BUFFER
 
         with self._lock:
 
@@ -1,5 +1,6 @@
 import base64
 import calendar
+import datetime
 import hashlib
 import html
 import re
@@ -12,6 +13,9 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 
+from hydrus.client import ClientTime
+
 STRING_CONVERSION_REMOVE_TEXT_FROM_BEGINNING = 0
 STRING_CONVERSION_REMOVE_TEXT_FROM_END = 1
@@ -245,34 +249,27 @@ class StringConverter( StringProcessingStep ):
 
                 ( phrase, timezone, timezone_offset ) = data
 
-                struct_time = time.strptime( s, phrase )
+                dt = datetime.datetime.strptime( s, phrase )
 
-                if timezone == HC.TIMEZONE_GMT:
+                if timezone in ( HC.TIMEZONE_UTC, HC.TIMEZONE_OFFSET ):
 
-                    # the given struct is in GMT, so calendar.timegm is appropriate here
+                    dt = datetime.datetime(
+                        dt.year,
+                        dt.month,
+                        dt.day,
+                        dt.hour,
+                        dt.minute,
+                        dt.second,
+                        tzinfo = datetime.timezone.utc
+                    )
 
-                    timestamp = int( calendar.timegm( struct_time ) )
-
-                elif timezone == HC.TIMEZONE_LOCAL:
-
-                    # the given struct is in local time, so time.mktime is correct
-
-                    try:
+                    if timezone == HC.TIMEZONE_OFFSET:
 
-                        timestamp = int( time.mktime( struct_time ) )
-
-                    except:
-
-                        timestamp = HydrusData.GetNow()
+                        dt = dt - datetime.timedelta( seconds = timezone_offset )
 
-                elif timezone == HC.TIMEZONE_OFFSET:
-
-                    # the given struct is in server time, which is the same as GMT minus an offset
-                    # if we are 7200 seconds ahead, the correct GMT timestamp needs to be 7200 smaller
-
-                    timestamp = int( calendar.timegm( struct_time ) ) - timezone_offset
+                timestamp = HydrusTime.DateTimeToTimestamp( dt )
 
                 s = str( timestamp )
 
@@ -289,20 +286,9 @@ class StringConverter( StringProcessingStep ):
 
                     raise Exception( '"{}" was not an integer!'.format( s ) )
 
-                if timezone == HC.TIMEZONE_GMT:
-
-                    # user wants a UTC string, so we need UTC struct
-
-                    struct_time = time.gmtime( timestamp )
-
-                elif timezone == HC.TIMEZONE_LOCAL:
-
-                    # user wants a local string, so we need localtime
-
-                    struct_time = time.localtime( timestamp )
+                dt = HydrusTime.TimestampToDateTime( timestamp, timezone )
 
-                s = time.strftime( phrase, struct_time )
+                s = dt.strftime( phrase )
 
             elif conversion_type == STRING_CONVERSION_INTEGER_ADDITION:
 
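The StringConverter hunk above moves from `time.strptime`/`calendar.timegm`/`time.mktime` to timezone-aware `datetime` objects, which is what lets pre-1970 dates survive the round trip. A standalone sketch of that parse-to-timestamp pattern (the function name and signature here are illustrative, not hydrus's own API):

```python
import datetime

def parse_to_timestamp( s: str, phrase: str, timezone_offset: int = 0 ) -> int:
    
    # parse with the user's format phrase
    dt = datetime.datetime.strptime( s, phrase )
    
    # treat the parsed time as UTC, then walk back any fixed server-side offset:
    # if the server is 7200 seconds ahead, the correct UTC timestamp is 7200 smaller
    dt = dt.replace( tzinfo = datetime.timezone.utc )
    dt = dt - datetime.timedelta( seconds = timezone_offset )
    
    # an aware datetime converts without mktime, so negative (pre-1970) results work
    return int( dt.timestamp() )

print( parse_to_timestamp( '1970-01-01 00:02:00', '%Y-%m-%d %H:%M:%S' ) )  # 120
print( parse_to_timestamp( '1969-12-31 23:59:00', '%Y-%m-%d %H:%M:%S' ) )  # -60
```

Because no call to `time.mktime` is involved, this path does not depend on the platform's local-time tables for old dates.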
@@ -7,6 +7,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime
 
 from hydrus.client.gui import QtPorting as QP
 
@@ -16,7 +17,7 @@ class JobKey( object ):
 
         self._key = HydrusData.GenerateKey()
 
-        self._creation_time = HydrusData.GetNowFloat()
+        self._creation_time = HydrusTime.GetNowFloat()
 
         self._pausable = pausable
         self._cancellable = cancellable
@@ -25,7 +26,7 @@ class JobKey( object ):
         self._stop_time = stop_time
         self._cancel_on_shutdown = cancel_on_shutdown and maintenance_mode != HC.MAINTENANCE_SHUTDOWN
 
-        self._start_time = HydrusData.GetNow()
+        self._start_time = HydrusTime.GetNow()
 
         self._deleted = threading.Event()
         self._deletion_time = None
@@ -34,16 +35,16 @@ class JobKey( object ):
         self._paused = threading.Event()
 
         self._ui_update_pause_period = 0.1
-        self._next_ui_update_pause = HydrusData.GetNowFloat() + self._ui_update_pause_period
+        self._next_ui_update_pause = HydrusTime.GetNowFloat() + self._ui_update_pause_period
 
         self._yield_pause_period = 10
-        self._next_yield_pause = HydrusData.GetNow() + self._yield_pause_period
+        self._next_yield_pause = HydrusTime.GetNow() + self._yield_pause_period
 
         self._bigger_pause_period = 100
-        self._next_bigger_pause = HydrusData.GetNow() + self._bigger_pause_period
+        self._next_bigger_pause = HydrusTime.GetNow() + self._bigger_pause_period
 
         self._longer_pause_period = 1000
-        self._next_longer_pause = HydrusData.GetNow() + self._longer_pause_period
+        self._next_longer_pause = HydrusTime.GetNow() + self._longer_pause_period
 
         self._exception = None
 
@@ -93,7 +94,7 @@ class JobKey( object ):
 
         if self._deletion_time is not None:
 
-            if HydrusData.TimeHasPassed( self._deletion_time ):
+            if HydrusTime.TimeHasPassed( self._deletion_time ):
 
                 self.Finish()
 
@@ -129,11 +130,11 @@ class JobKey( object ):
 
         if seconds is None:
 
-            self._deletion_time = HydrusData.GetNow()
+            self._deletion_time = HydrusTime.GetNow()
 
         else:
 
-            self._deletion_time = HydrusData.GetNow() + seconds
+            self._deletion_time = HydrusTime.GetNow() + seconds
 
 
 
@@ -162,11 +163,11 @@ class JobKey( object ):
 
 
-        if HydrusData.TimeHasPassedFloat( self._next_ui_update_pause ):
+        if HydrusTime.TimeHasPassedFloat( self._next_ui_update_pause ):
 
             time.sleep( 0.00001 )
 
-            self._next_ui_update_pause = HydrusData.GetNowFloat() + self._ui_update_pause_period
+            self._next_ui_update_pause = HydrusTime.GetNowFloat() + self._ui_update_pause_period
 
 
 
@@ -370,17 +371,17 @@ class JobKey( object ):
 
         with self._variable_lock: self._variables[ name ] = value
 
-        if HydrusData.TimeHasPassed( self._next_ui_update_pause ):
+        if HydrusTime.TimeHasPassed( self._next_ui_update_pause ):
 
             time.sleep( 0.00001 )
 
-            self._next_ui_update_pause = HydrusData.GetNow() + self._ui_update_pause_period
+            self._next_ui_update_pause = HydrusTime.GetNow() + self._ui_update_pause_period
 
 
     def TimeRunning( self ):
 
-        return HydrusData.GetNow() - self._start_time
+        return HydrusTime.GetNow() - self._start_time
 
 
     def ToString( self ):
 
@@ -429,23 +430,23 @@ class JobKey( object ):
 
     def WaitIfNeeded( self ):
 
-        if HydrusData.TimeHasPassed( self._next_yield_pause ):
+        if HydrusTime.TimeHasPassed( self._next_yield_pause ):
 
             time.sleep( 0.1 )
 
-            self._next_yield_pause = HydrusData.GetNow() + self._yield_pause_period
+            self._next_yield_pause = HydrusTime.GetNow() + self._yield_pause_period
 
-        if HydrusData.TimeHasPassed( self._next_bigger_pause ):
+        if HydrusTime.TimeHasPassed( self._next_bigger_pause ):
 
             time.sleep( 1 )
 
-            self._next_bigger_pause = HydrusData.GetNow() + self._bigger_pause_period
+            self._next_bigger_pause = HydrusTime.GetNow() + self._bigger_pause_period
 
-        if HydrusData.TimeHasPassed( self._longer_pause_period ):
+        if HydrusTime.TimeHasPassed( self._longer_pause_period ):
 
             time.sleep( 10 )
 
-            self._next_longer_pause = HydrusData.GetNow() + self._longer_pause_period
+            self._next_longer_pause = HydrusTime.GetNow() + self._longer_pause_period
 
 
 
@ -1,10 +1,14 @@
|
|||
import calendar
|
||||
import datetime
|
||||
import time
|
||||
import typing
|
||||
|
||||
from hydrus.core import HydrusConstants as HC
|
||||
from hydrus.core import HydrusGlobals as HG
|
||||
from hydrus.core import HydrusSerialisable
|
||||
|
||||
from hydrus.client import ClientConstants as CC
|
||||
|
||||
try:
|
||||
|
||||
from dateutil.relativedelta import relativedelta
|
||||
|
@ -17,21 +21,7 @@ except:
|
|||
|
||||
|
||||
from hydrus.core import HydrusData
|
||||
|
||||
def CalendarToTimestamp( dt: datetime.datetime ) -> int:
|
||||
|
||||
try:
|
||||
|
||||
# mktime is local calendar time to timestamp, so this is client specific
|
||||
timestamp = int( time.mktime( dt.timetuple() ) )
|
||||
|
||||
except:
|
||||
|
||||
timestamp = HydrusData.GetNow()
|
||||
|
||||
|
||||
return timestamp
|
||||
|
||||
from hydrus.core import HydrusTime
|
||||
|
||||
def CalendarDelta( dt: datetime.datetime, month_delta = 0, day_delta = 0 ) -> datetime.datetime:
|
||||
|
||||
|
@ -49,22 +39,8 @@ def CalendarDelta( dt: datetime.datetime, month_delta = 0, day_delta = 0 ) -> da
|
|||
|
||||
|
||||
|
||||
def GetDateTime( year: int, month: int, day: int, hour: int, minute: int ) -> datetime.datetime:
|
||||
|
||||
return datetime.datetime( year, month, day, hour, minute )
|
||||
|
||||
def MergeModifiedTimes( existing_timestamp: typing.Optional[ int ], new_timestamp: typing.Optional[ int ] ) -> typing.Optional[ int ]:
|
||||
|
||||
if not TimestampIsSensible( existing_timestamp ):
|
||||
|
||||
existing_timestamp = None
|
||||
|
||||
|
||||
if not TimestampIsSensible( new_timestamp ):
|
||||
|
||||
new_timestamp = None
|
||||
|
||||
|
||||
if ShouldUpdateModifiedTime( existing_timestamp, new_timestamp ):
|
||||
|
||||
return new_timestamp
|
||||
|
@ -77,12 +53,12 @@ def MergeModifiedTimes( existing_timestamp: typing.Optional[ int ], new_timestam
|
|||
|
||||
def ShouldUpdateModifiedTime( existing_timestamp: int, new_timestamp: typing.Optional[ int ] ) -> bool:
|
||||
|
||||
if not TimestampIsSensible( new_timestamp ):
|
||||
if new_timestamp is None:
|
||||
|
||||
return False
|
||||
|
||||
|
||||
if not TimestampIsSensible( existing_timestamp ):
|
||||
if existing_timestamp is None:
|
||||
|
||||
return True
|
||||
|
||||
|
@@ -112,6 +88,20 @@ def TimestampIsSensible( timestamp: typing.Optional[ int ] ) -> bool:
    
    return True
    

def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
    
    if HG.client_controller.new_options.GetBoolean( 'always_show_iso_time' ):
        
        return HydrusTime.TimestampToPrettyTime( timestamp )
        
    else:
        
        return HydrusTime.BaseTimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, history_suffix = history_suffix, show_seconds = show_seconds, no_prefix = no_prefix )
        
    

HydrusTime.TimestampToPrettyTimeDelta = TimestampToPrettyTimeDelta

REAL_SIMPLE_TIMESTAMP_TYPES = {
    HC.TIMESTAMP_TYPE_ARCHIVED,
    HC.TIMESTAMP_TYPE_MODIFIED_FILE
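The `HydrusTime.TimestampToPrettyTimeDelta = TimestampToPrettyTimeDelta` line is a module-attribute override: the core module ships a plain implementation, and the client layer swaps in an options-aware one at import time so every later caller picks up the richer version. A minimal sketch of the pattern, with hypothetical module and function names:

```python
import types

# stand-in for a core module that exposes a default implementation
core = types.ModuleType( 'core_time' )

def base_pretty_delta( timestamp ):
    
    return 'base:{}'.format( timestamp )

core.TimestampToPrettyTimeDelta = base_pretty_delta

def client_pretty_delta( timestamp ):
    
    # a client-aware wrapper could consult user options here before delegating
    return 'client:{}'.format( timestamp )

# the injection, mirroring 'HydrusTime.TimestampToPrettyTimeDelta = TimestampToPrettyTimeDelta'
core.TimestampToPrettyTimeDelta = client_pretty_delta
```

The trade-off of this approach is that core-level callers do not need to import the client layer, at the cost of the override being invisible at the call site.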
@@ -159,6 +149,11 @@ class TimestampData( HydrusSerialisable.SerialisableBase ):
        
        return ( self.timestamp_type, self.location, self.timestamp ).__hash__()
        
    
    def __repr__( self ):
        
        return self.ToString()
        
    
    def _GetSerialisableInfo( self ):
        
        if self.timestamp_type in FILE_SERVICE_TIMESTAMP_TYPES:
@@ -187,6 +182,53 @@ class TimestampData( HydrusSerialisable.SerialisableBase ):
    
    def ToString( self ) -> str:
        
        if self.timestamp_type in SIMPLE_TIMESTAMP_TYPES:
            
            type_base = HC.timestamp_type_str_lookup[ self.timestamp_type ]
            
        else:
            
            if self.timestamp_type in FILE_SERVICE_TIMESTAMP_TYPES:
                
                try:
                    
                    service_string = HG.client_controller.services_manager.GetName( self.location )
                    
                except:
                    
                    service_string = 'unknown service'
                    
                
                type_base = '"{}" {}'.format( service_string, HC.timestamp_type_str_lookup[ self.timestamp_type ] )
                
            elif self.timestamp_type == HC.TIMESTAMP_TYPE_LAST_VIEWED:
                
                type_base = '{} {}'.format( CC.canvas_type_str_lookup[ self.location ], HC.timestamp_type_str_lookup[ self.timestamp_type ] )
                
            elif self.timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN:
                
                type_base = '"{}" {}'.format( self.location, HC.timestamp_type_str_lookup[ self.timestamp_type ] )
                
            else:
                
                type_base = 'unknown timestamp type'
                
            
        
        if self.timestamp is None:
            
            # we are a stub, type summary is appropriate
            return type_base
            
        else:
            
            return '{}: {}'.format( type_base, HydrusTime.TimestampToPrettyTime( self.timestamp ) )
            
        
    
    @staticmethod
    def STATICArchivedTime( timestamp: int ) -> "TimestampData":
@@ -242,5 +284,4 @@ class TimestampData( HydrusSerialisable.SerialisableBase ):
    

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_TIMESTAMP_DATA ] = TimestampData
@@ -6,6 +6,7 @@ from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
+from hydrus.core import HydrusTime

if cv2.__version__.startswith( '2' ):
@@ -18,9 +18,11 @@ from hydrus.core import HydrusDB
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
from hydrus.core import HydrusPaths
from hydrus.core import HydrusSerialisable
from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
from hydrus.core.networking import HydrusNetwork

from hydrus.client import ClientAPI
@@ -174,7 +176,7 @@ def BlockingSafeShowMessage( message ):

def report_content_speed_to_job_key( job_key, rows_done, total_rows, precise_timestamp, num_rows, row_name ):
    
-    it_took = HydrusData.GetNowPrecise() - precise_timestamp
+    it_took = HydrusTime.GetNowPrecise() - precise_timestamp
    
    rows_s = HydrusData.ToHumanInt( int( num_rows / it_took ) )
    

@@ -185,7 +187,7 @@ def report_content_speed_to_job_key( job_key, rows_done, total_rows, precise_timestamp, num_rows, row_name ):

def report_speed_to_job_key( job_key, precise_timestamp, num_rows, row_name ):
    
-    it_took = HydrusData.GetNowPrecise() - precise_timestamp
+    it_took = HydrusTime.GetNowPrecise() - precise_timestamp
    
    rows_s = HydrusData.ToHumanInt( int( num_rows / it_took ) )
    

@@ -201,7 +203,7 @@ def report_speed_to_log( precise_timestamp, num_rows, row_name ):
        
        return
        
    
-    it_took = HydrusData.GetNowPrecise() - precise_timestamp
+    it_took = HydrusTime.GetNowPrecise() - precise_timestamp
    
    rows_s = HydrusData.ToHumanInt( int( num_rows / it_took ) )
@@ -579,7 +581,7 @@ class DB( HydrusDB.HydrusDB ):
        # the important change here as compared to the old system is that if you have a bunch of parents like 'character name' -> 'female', which might be a 10k-to-1 relationship, adding a new link to the chain does need much work
        # we compare the current structure, the ideal structure, and just make the needed changes
        
-        time_started = HydrusData.GetNowFloat()
+        time_started = HydrusTime.GetNowFloat()
        
        tag_service_id = self.modules_services.GetServiceId( service_key )
        

@@ -587,7 +589,7 @@ class DB( HydrusDB.HydrusDB ):
        
        ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self.modules_tag_display.GetApplicationStatus( tag_service_id )
        
-        while len( sibling_rows_to_add ) + len( sibling_rows_to_remove ) + len( parent_rows_to_add ) + len( parent_rows_to_remove ) > 0 and not HydrusData.TimeHasPassedFloat( time_started + work_time ):
+        while len( sibling_rows_to_add ) + len( sibling_rows_to_remove ) + len( parent_rows_to_add ) + len( parent_rows_to_remove ) > 0 and not HydrusTime.TimeHasPassedFloat( time_started + work_time ):
            
            # ok, so it turns out that migrating entire chains at once was sometimes laggy for certain large parent chains like 'azur lane'
            # imagine the instance where we simply want to parent a hundred As to a single B--we obviously don't have to do all that in one go
@@ -1484,7 +1486,7 @@ class DB( HydrusDB.HydrusDB ):
        
        # do delete outside, file repos and perhaps some other bananas situation can delete without ever having added
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        if service_type not in HC.FILE_SERVICES_WITH_NO_DELETE_RECORD:
            

@@ -1551,7 +1553,7 @@ class DB( HydrusDB.HydrusDB ):
        
        if service_type in HC.FILE_SERVICES_COVERED_BY_COMBINED_DELETED_FILE:
            
-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()
            
            rows = [ ( hash_id, now ) for hash_id in existing_hash_ids ]
            

@@ -1575,7 +1577,7 @@ class DB( HydrusDB.HydrusDB ):
        
        self._DeleteFiles( self.modules_services.combined_local_media_service_id, trashed_hash_ids )
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        delete_rows = [ ( hash_id, now ) for hash_id in trashed_hash_ids ]
@@ -3618,7 +3620,7 @@ class DB( HydrusDB.HydrusDB ):
        
        num_we_want = HG.client_controller.new_options.GetNoneableInteger( 'num_recent_tags' )
        
-        if num_we_want == None:
+        if num_we_want is None:
            
            num_we_want = 20
            

@@ -3689,7 +3691,7 @@ class DB( HydrusDB.HydrusDB ):
        
        def cancelled_hook():
            
-            return HydrusData.TimeHasPassedPrecise( stop_time_for_finding_results )
+            return HydrusTime.TimeHasPassedPrecise( stop_time_for_finding_results )
            

@@ -3760,7 +3762,7 @@ class DB( HydrusDB.HydrusDB ):
        num_tags_to_search = 0
        num_skipped = 0
        
-        stop_time_for_finding_results = HydrusData.GetNowPrecise() + ( max_time_to_take * 0.85 )
+        stop_time_for_finding_results = HydrusTime.GetNowPrecise() + ( max_time_to_take * 0.85 )
        
        search_tags = [ search_tag for search_tag in search_tags if get_weight_from_dict( search_tag, search_tag_slices_weight_dict ) != 0.0 ]
@@ -4309,7 +4311,7 @@ class DB( HydrusDB.HydrusDB ):
        
        else:
            
-            timestamp_cutoff = HydrusData.GetNow() - minimum_age
+            timestamp_cutoff = HydrusTime.GetNow() - minimum_age
            
            age_phrase = ' WHERE timestamp < ' + str( timestamp_cutoff )
            

@@ -4337,7 +4339,7 @@ class DB( HydrusDB.HydrusDB ):
        
        if minimum_age is not None:
            
-            message += ' with minimum age ' + ClientData.TimestampToPrettyTimeDelta( timestamp_cutoff, just_now_threshold = 0 ) + ','
+            message += ' with minimum age ' + ClientTime.TimestampToPrettyTimeDelta( timestamp_cutoff, just_now_threshold = 0 ) + ','
            
        
        message += ' I found ' + HydrusData.ToHumanInt( len( hash_ids ) ) + '.'
@@ -4441,7 +4443,7 @@ class DB( HydrusDB.HydrusDB ):
        
        file_info_manager = ClientMediaManagers.FileInfoManager( hash_id, hash, size, mime, width, height, duration, num_frames, has_audio, num_words )
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        for destination_file_service_key in destination_location_context.current_service_keys:
            

@@ -4531,7 +4533,7 @@ class DB( HydrusDB.HydrusDB ):
        
        self.modules_files_metadata_basic.AddFilesInfo( [ ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) ], overwrite = True )
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        self._AddFiles( self.modules_services.local_update_service_id, [ ( hash_id, now ) ] )
@@ -4833,7 +4835,7 @@ class DB( HydrusDB.HydrusDB ):
    
    def _MigrationGetMappings( self, database_temp_job_name, file_service_key, tag_service_key, hash_type, tag_filter, content_statuses ):
        
-        time_started_precise = HydrusData.GetNowPrecise()
+        time_started_precise = HydrusTime.GetNowPrecise()
        
        data = []
        

@@ -4905,7 +4907,7 @@ class DB( HydrusDB.HydrusDB ):
                data.append( ( desired_hash, tags ) )
                
            
-            we_should_stop = len( data ) >= 256 or ( len( data ) > 0 and HydrusData.TimeHasPassedPrecise( time_started_precise + 1.0 ) )
+            we_should_stop = len( data ) >= 256 or ( len( data ) > 0 and HydrusTime.TimeHasPassedPrecise( time_started_precise + 1.0 ) )
            
        
        return data
        

@@ -4913,7 +4915,7 @@ class DB( HydrusDB.HydrusDB ):
    
    def _MigrationGetPairs( self, database_temp_job_name, left_tag_filter, right_tag_filter ):
        
-        time_started_precise = HydrusData.GetNowPrecise()
+        time_started_precise = HydrusTime.GetNowPrecise()
        
        data = []
        

@@ -4948,7 +4950,7 @@ class DB( HydrusDB.HydrusDB ):
                
                data.append( ( left_tag, right_tag ) )
                
            
-            we_should_stop = len( data ) >= 256 or ( len( data ) > 0 and HydrusData.TimeHasPassedPrecise( time_started_precise + 1.0 ) )
+            we_should_stop = len( data ) >= 256 or ( len( data ) > 0 and HydrusTime.TimeHasPassedPrecise( time_started_precise + 1.0 ) )
            
        
        return data
@@ -5085,7 +5087,7 @@ class DB( HydrusDB.HydrusDB ):
    
    def _PerceptualHashesSearchForPotentialDuplicates( self, search_distance, maintenance_mode = HC.MAINTENANCE_FORCED, job_key = None, stop_time = None, work_time_float = None ):
        
-        time_started_float = HydrusData.GetNowFloat()
+        time_started_float = HydrusTime.GetNowFloat()
        
        num_done = 0
        still_work_to_do = True
        

@@ -5100,7 +5102,7 @@ class DB( HydrusDB.HydrusDB ):
        
        for ( i, hash_id ) in enumerate( group_of_hash_ids ):
            
-            if work_time_float is not None and HydrusData.TimeHasPassedFloat( time_started_float + work_time_float ):
+            if work_time_float is not None and HydrusTime.TimeHasPassedFloat( time_started_float + work_time_float ):
                
                return ( still_work_to_do, num_done )
                

@@ -5222,7 +5224,7 @@ class DB( HydrusDB.HydrusDB ):
        
        self.modules_service_paths.SetServiceFilename( service_id, hash_id, multihash )
        
-        timestamp = HydrusData.GetNow()
+        timestamp = HydrusTime.GetNow()
        
        self._AddFiles( service_id, [ ( hash_id, timestamp ) ] )
@@ -5929,7 +5931,7 @@ class DB( HydrusDB.HydrusDB ):
        
        service_id = self.modules_services.GetServiceId( service_key )
        
-        precise_time_to_stop = HydrusData.GetNowPrecise() + work_time
+        precise_time_to_stop = HydrusTime.GetNowPrecise() + work_time
        
        num_rows_processed = 0
        

@@ -5941,7 +5943,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'new_files' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                files_info_rows = []
                files_rows = []
                

@@ -5961,7 +5963,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += len( files_rows )
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
                    

@@ -5976,7 +5978,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'deleted_files' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                service_hash_ids = chunk
                

@@ -5986,7 +5988,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += len( hash_ids )
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
@@ -6004,7 +6006,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'new_mappings' ]
            
-            for chunk in HydrusData.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                mappings_ids = []
                

@@ -6026,7 +6028,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += num_rows
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
                    

@@ -6041,7 +6043,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'deleted_mappings' ]
            
-            for chunk in HydrusData.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                deleted_mappings_ids = []
                

@@ -6061,7 +6063,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += num_rows
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
@@ -6083,7 +6085,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'new_parents' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                parent_ids = []
                tag_ids = set()
                

@@ -6107,7 +6109,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += len( parent_ids )
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
                    

@@ -6122,7 +6124,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'deleted_parents' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                parent_ids = []
                tag_ids = set()
                

@@ -6148,7 +6150,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += num_rows
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
@@ -6166,7 +6168,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'new_siblings' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                sibling_ids = []
                tag_ids = set()
                

@@ -6192,7 +6194,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += num_rows
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
                    

@@ -6207,7 +6209,7 @@ class DB( HydrusDB.HydrusDB ):
            
            i = content_iterator_dict[ 'deleted_siblings' ]
            
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
                
                sibling_ids = []
                tag_ids = set()
                

@@ -6231,7 +6233,7 @@ class DB( HydrusDB.HydrusDB ):
                
                num_rows_processed += len( sibling_ids )
                
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
                    
                    return num_rows_processed
@@ -6264,7 +6266,7 @@ class DB( HydrusDB.HydrusDB ):
        
        else:
            
-            now = HydrusData.GetNow()
+            now = HydrusTime.GetNow()
            
            tag_ids = [ self.modules_tags.GetTagId( tag ) for tag in tags ]
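The repeated rename of `SplitIteratorIntoAutothrottledChunks` from `HydrusData` to `HydrusLists` above is the same chunked-processing idea each time: pull items off an iterator in blocks so the loop can check its stop-time and cancel state between blocks. A simplified, hypothetical version of such a helper, without the real implementation's adaptive ("autothrottled") chunk sizing:

```python
import itertools
import time

def split_iterator_into_chunks( iterator, chunk_size, precise_time_to_stop ):
    
    # yield fixed-size blocks until the iterator is exhausted or time runs out
    iterator = iter( iterator )
    
    while time.perf_counter() < precise_time_to_stop:
        
        chunk = list( itertools.islice( iterator, chunk_size ) )
        
        if len( chunk ) == 0:
            
            return
            
        yield chunk

chunks = list( split_iterator_into_chunks( range( 10 ), 4, time.perf_counter() + 5.0 ) )
```

Yielding between chunks is what lets the caller bail out mid-stream with `return num_rows_processed`, as the hunks above do after every block.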
@@ -9289,6 +9291,36 @@ class DB( HydrusDB.HydrusDB ):
        
        if version == 523:
            
            try:
                
                new_options = self.modules_serialisable.GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_CLIENT_OPTIONS )
                
                duplicate_content_merge_options = new_options.GetDuplicateContentMergeOptions( HC.DUPLICATE_BETTER )
                
                duplicate_content_merge_options.SetSyncFileModifiedDateAction( HC.CONTENT_MERGE_ACTION_COPY )
                
                new_options.SetDuplicateContentMergeOptions( HC.DUPLICATE_BETTER, duplicate_content_merge_options )
                
                duplicate_content_merge_options = new_options.GetDuplicateContentMergeOptions( HC.DUPLICATE_SAME_QUALITY )
                
                duplicate_content_merge_options.SetSyncFileModifiedDateAction( HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
                
                new_options.SetDuplicateContentMergeOptions( HC.DUPLICATE_SAME_QUALITY, duplicate_content_merge_options )
                
                self.modules_serialisable.SetJSONDump( new_options )
                
            except Exception as e:
                
                HydrusData.PrintException( e )
                
                message = 'Updating some duplicate merge options failed! This is not super important, but hydev would be interested in seeing the error that was printed to the log.'
                
                self.pub_initial_message( message )
                
            
        
        self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
        
        self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
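The `if version == 523:` hunk above sits inside hydrus's versioned-update loop: each version block does its one-off migration best-effort (errors are reported, not fatal), and the stored schema version is bumped afterwards regardless. A minimal sketch of that pattern; the single-column `version` table mirrors the `UPDATE version SET version = ?;` statement shown, but the rest is illustrative, not the real hydrus schema:

```python
import sqlite3

db = sqlite3.connect( ':memory:' )

# a one-row, one-column table holding the current schema version
db.execute( 'CREATE TABLE version ( version INTEGER );' )
db.execute( 'INSERT INTO version ( version ) VALUES ( 523 );' )

( version, ) = db.execute( 'SELECT version FROM version;' ).fetchone()

if version == 523:
    
    try:
        
        pass # one-off migration work for this version would go here
        
    except Exception as e:
        
        # non-critical migrations report the failure and carry on
        print( 'migration failed, but the update continues: {}'.format( e ) )
    

# the version bump happens whether or not the best-effort work succeeded
db.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )

( new_version, ) = db.execute( 'SELECT version FROM version;' ).fetchone()
```

The design choice worth noting is that the try/except scopes only the optional options tweak, so a failure there cannot strand the database between versions.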
@@ -9717,13 +9749,13 @@ class DB( HydrusDB.HydrusDB ):
                    self._controller.frame_splash_status.SetText( 'vacuuming ' + name )
                    job_key.SetStatusText( 'vacuuming ' + name )
                    
-                    started = HydrusData.GetNowPrecise()
+                    started = HydrusTime.GetNowPrecise()
                    
                    HydrusDB.VacuumDB( db_path )
                    
-                    time_took = HydrusData.GetNowPrecise() - started
+                    time_took = HydrusTime.GetNowPrecise() - started
                    
-                    HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
+                    HydrusData.Print( 'Vacuumed ' + db_path + ' in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( time_took ) )
                    
                except Exception as e:
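The vacuum hunk above is a time-the-operation wrapper: take a precise timestamp, run the vacuum, and report the delta. An illustrative stand-alone version (the function name and use of `time.perf_counter` are assumptions; the real code delegates to `HydrusDB.VacuumDB` and `HydrusTime.GetNowPrecise`):

```python
import sqlite3
import time

def vacuum_and_time( db_path ):
    
    # precise monotonic clock, in the spirit of GetNowPrecise
    started = time.perf_counter()
    
    db = sqlite3.connect( db_path )
    db.execute( 'VACUUM;' )
    db.close()
    
    return time.perf_counter() - started

time_took = vacuum_and_time( ':memory:' )
```

A monotonic clock matters here: a wall-clock subtraction could go negative if the system time were adjusted mid-vacuum.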
@@ -7,7 +7,9 @@ from hydrus.core import HydrusDB
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime

from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBMappingsCounts

@@ -224,7 +226,7 @@ class ClientDBCacheLocalHashes( ClientDBModule.ClientDBModule ):
        BLOCK_SIZE = 10000
        num_to_do = len( local_hash_ids )
        
-        for ( i, block_of_hash_ids ) in enumerate( HydrusData.SplitListIntoChunks( local_hash_ids, BLOCK_SIZE ) ):
+        for ( i, block_of_hash_ids ) in enumerate( HydrusLists.SplitListIntoChunks( local_hash_ids, BLOCK_SIZE ) ):
            
            HG.client_controller.frame_splash_status.SetSubtext( 'caching local file data {}'.format( HydrusData.ConvertValueRangeToPrettyString( i * BLOCK_SIZE, num_to_do ) ) )
@@ -3,6 +3,7 @@ import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation

@@ -66,7 +67,7 @@ class ClientDBFilesInbox( ClientDBModule.ClientDBModule ):
        
        self.inbox_hash_ids.difference_update( archiveable_hash_ids )
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        self.modules_files_metadata_timestamps.SetSimpleTimestamps( HC.TIMESTAMP_TYPE_ARCHIVED, [ ( hash_id, now ) for hash_id in archiveable_hash_ids ] )
@@ -4,6 +4,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientFiles
from hydrus.client import ClientTime

@@ -3,6 +3,7 @@ import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
+from hydrus.core import HydrusTime

from hydrus.client import ClientFiles
from hydrus.client.db import ClientDBDefinitionsCache
@@ -72,7 +73,7 @@ class ClientDBFilesMaintenanceQueue( ClientDBModule.ClientDBModule ):
        
        for job_type in possible_job_types:
            
-            hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusData.GetNow(), 256 ) ) )
+            hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusTime.GetNow(), 256 ) ) )
            
            if len( hash_ids ) > 0:
                

@@ -87,7 +88,7 @@ class ClientDBFilesMaintenanceQueue( ClientDBModule.ClientDBModule ):
    
    def GetJobCounts( self ):
        
-        result = self._Execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusData.GetNow(), ) ).fetchall()
+        result = self._Execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusTime.GetNow(), ) ).fetchall()
        
        job_types_to_count = dict( result )
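The two maintenance-queue hunks above use the same "jobs whose start time has passed" predicate, once to fetch a batch and once to count per type. A self-contained sketch of those queries against a simplified stand-in for the `file_maintenance_jobs` table (the column set here is reduced to what the two statements actually touch):

```python
import sqlite3
import time

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER );' )

now = int( time.time() )

# two jobs of type 0 already due, one of type 1 scheduled for the future
db.executemany(
    'INSERT INTO file_maintenance_jobs ( hash_id, job_type, time_can_start ) VALUES ( ?, ?, ? );',
    [ ( 1, 0, now - 100 ), ( 2, 0, now - 50 ), ( 3, 1, now + 10000 ) ]
)

# jobs of a given type that may start now, capped at a batch size
due_hash_ids = [ hash_id for ( hash_id, ) in db.execute(
    'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;',
    ( 0, now, 256 )
) ]

# per-type counts of everything currently due
job_types_to_count = dict( db.execute(
    'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;',
    ( now, )
).fetchall() )
```

`dict()` over the two-column `GROUP BY` result is the same trick the diff's `job_types_to_count = dict( result )` line uses.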
@@ -4,6 +4,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation

@@ -253,7 +254,7 @@ class ClientDBFilesMetadataRich( ClientDBModule.ClientDBModule ):
            
            else:
                
-                note = 'Deleted from the client {} ({}), which was {} before this check.'.format( HydrusData.ConvertTimestampToPrettyTime( timestamp ), file_deletion_reason, HydrusData.BaseTimestampToPrettyTimeDelta( timestamp ) )
+                note = 'Deleted from the client {} ({}), which was {} before this check.'.format( HydrusTime.TimestampToPrettyTime( timestamp ), file_deletion_reason, HydrusTime.BaseTimestampToPrettyTimeDelta( timestamp ) )
                
            
            return ClientImportFiles.FileImportStatus( CC.STATUS_DELETED, hash, note = prefix + note )
            

@@ -265,7 +266,7 @@ class ClientDBFilesMetadataRich( ClientDBModule.ClientDBModule ):
            
            timestamp = result
            
-            note = 'Currently in trash ({}). Sent there at {}, which was {} before this check.'.format( file_deletion_reason, HydrusData.ConvertTimestampToPrettyTime( timestamp ), HydrusData.BaseTimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) )
+            note = 'Currently in trash ({}). Sent there at {}, which was {} before this check.'.format( file_deletion_reason, HydrusTime.TimestampToPrettyTime( timestamp ), HydrusTime.BaseTimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) )
            
            return ClientImportFiles.FileImportStatus( CC.STATUS_DELETED, hash, note = prefix + note )
            

@@ -278,7 +279,7 @@ class ClientDBFilesMetadataRich( ClientDBModule.ClientDBModule ):
            
            mime = self.modules_files_metadata_basic.GetMime( hash_id )
            
-            note = 'Imported at {}, which was {} before this check.'.format( HydrusData.ConvertTimestampToPrettyTime( timestamp ), HydrusData.BaseTimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) )
+            note = 'Imported at {}, which was {} before this check.'.format( HydrusTime.TimestampToPrettyTime( timestamp ), HydrusTime.BaseTimestampToPrettyTimeDelta( timestamp, just_now_threshold = 0 ) )
            
            return ClientImportFiles.FileImportStatus( CC.STATUS_SUCCESSFUL_BUT_REDUNDANT, hash, mime = mime, note = prefix + note )
@@ -4,6 +4,7 @@ import typing

from hydrus.core import HydrusData
from hydrus.core import HydrusPaths
+from hydrus.core import HydrusTime

from hydrus.client.db import ClientDBModule

@@ -8,6 +8,7 @@ from hydrus.core import HydrusDB
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation

@@ -7,6 +7,7 @@ from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBBase
from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation
@@ -1252,7 +1253,7 @@ class ClientDBFilesStorage( ClientDBModule.ClientDBModule ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
-        now = HydrusData.GetNow()
+        now = HydrusTime.GetNow()
        
        self._ExecuteMany(
            'INSERT OR IGNORE INTO {} ( hash_id, timestamp, original_timestamp ) VALUES ( ?, ?, ? );'.format( deleted_files_table_name ),

@@ -4,6 +4,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusTime

from hydrus.client import ClientTime
from hydrus.client.db import ClientDBModule
@@ -4,6 +4,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientTime

@@ -7,6 +7,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientThreading
from hydrus.client.db import ClientDBModule
@@ -77,15 +78,15 @@ class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
            
            time.sleep( 0.02 )
            
-            started = HydrusData.GetNowPrecise()
+            started = HydrusTime.GetNowPrecise()
            
            self.AnalyzeTable( name )
            
-            time_took = HydrusData.GetNowPrecise() - started
+            time_took = HydrusTime.GetNowPrecise() - started
            
            if time_took > 1:
                
-                HydrusData.Print( 'Analyzed ' + name + ' in ' + HydrusData.TimeDeltaToPrettyTimeDelta( time_took ) )
+                HydrusData.Print( 'Analyzed ' + name + ' in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( time_took ) )
                
            
            p1 = HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time )
            

@@ -139,7 +140,7 @@ class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
        
        self._Execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
        
-        self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusData.GetNow() ) )
+        self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusTime.GetNow() ) )
        
    
    def GetLastShutdownWorkTime( self ):
        

@@ -203,7 +204,7 @@ class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
                
                continue
                
            
-            if not HydrusData.TimeHasPassed( timestamp + period ):
+            if not HydrusTime.TimeHasPassed( timestamp + period ):
                
                continue
                

@@ -284,14 +285,14 @@ class ClientDBMaintenance( ClientDBModule.ClientDBModule ):
        
        self._Execute( 'DELETE FROM last_shutdown_work_time;' )
        
-        self._Execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? );', ( HydrusData.GetNow(), ) )
+        self._Execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? );', ( HydrusTime.GetNow(), ) )
        
    
    def RegisterSuccessfulVacuum( self, name: str ):
        
        self._Execute( 'DELETE FROM vacuum_timestamps WHERE name = ?;', ( name, ) )
        
-        self._Execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
+        self._Execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusTime.GetNow() ) )
        
    
    def TouchAnalyzeNewTables( self ):
@@ -5,6 +5,7 @@ import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
+from hydrus.core import HydrusTime

from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBMappingsCounts

@@ -4,6 +4,7 @@ import typing

from hydrus.core import HydrusData
from hydrus.core import HydrusDB
+from hydrus.core import HydrusTime

from hydrus.client.db import ClientDBMappingsCacheCombinedFilesDisplay
from hydrus.client.db import ClientDBMappingsCounts

@@ -6,6 +6,7 @@ import typing
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBBase
+from hydrus.core import HydrusTime

from hydrus.client.db import ClientDBMappingsCounts
from hydrus.client.db import ClientDBMappingsCountsUpdate
@ -6,6 +6,8 @@ import typing
|
|||
from hydrus.core import HydrusConstants as HC
|
||||
from hydrus.core import HydrusData
|
||||
from hydrus.core import HydrusDBBase
|
||||
from hydrus.core import HydrusLists
|
||||
from hydrus.core import HydrusTime
|
||||
|
||||
from hydrus.client.db import ClientDBFilesStorage
|
||||
from hydrus.client.db import ClientDBMaintenance
|
||||
|
@ -430,7 +432,7 @@ class ClientDBMappingsCacheSpecificStorage( ClientDBModule.ClientDBModule ):
|
|||
|
||||
BLOCK_SIZE = 10000
|
||||
|
||||
for ( i, block_of_hash_ids ) in enumerate( HydrusData.SplitListIntoChunks( hash_ids, BLOCK_SIZE ) ):
|
||||
for ( i, block_of_hash_ids ) in enumerate( HydrusLists.SplitListIntoChunks( hash_ids, BLOCK_SIZE ) ):
|
||||
|
||||
with self._MakeTemporaryIntegerTable( block_of_hash_ids, 'hash_id' ) as temp_hash_id_table_name:
|
||||
|
||||
|
|
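The hunk above (and several below) swaps `HydrusData.SplitListIntoChunks` for `HydrusLists.SplitListIntoChunks`. A plausible pure-Python equivalent of such a fixed-size chunking helper, written here as an illustrative sketch rather than the hydrus implementation:

```python
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar( 'T' )

def SplitListIntoChunks( xs: Sequence[ T ], chunk_size: int ) -> Iterator[ List[ T ] ]:
    # yield successive fixed-size slices; the final chunk may be short
    for i in range( 0, len( xs ), chunk_size ):
        yield list( xs[ i : i + chunk_size ] )

chunks = list( SplitListIntoChunks( list( range( 10 ) ), 4 ) )
# three chunks, of sizes 4, 4, and 2
```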
@@ -7,6 +7,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 
 from hydrus.client.db import ClientDBModule
 from hydrus.client.networking import ClientNetworkingFunctions

@@ -4,6 +4,7 @@ import typing
 from hydrus.core import HydrusData
 from hydrus.core import HydrusDBModule
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 def BlockingSafeShowMessage( message ):

@@ -4,6 +4,7 @@ import typing
 
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
+from hydrus.core import HydrusTime
 
 from hydrus.client.db import ClientDBMaster
 from hydrus.client.db import ClientDBModule

@@ -8,6 +8,8 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusLists
+from hydrus.core import HydrusTime
 from hydrus.core.networking import HydrusNetwork
 
 from hydrus.client import ClientFiles

@@ -629,7 +631,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):
 
         service_id = self.modules_services.GetServiceId( service_key )
 
-        precise_time_to_stop = HydrusData.GetNowPrecise() + work_time
+        precise_time_to_stop = HydrusTime.GetNowPrecise() + work_time
 
         ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )

@@ -639,7 +641,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):
 
             i = definition_iterator_dict[ 'service_hash_ids_to_hashes' ]
 
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
 
                 inserts = []

@@ -654,7 +656,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):
 
                 num_rows_processed += len( inserts )
 
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
 
                     return num_rows_processed

@@ -667,7 +669,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):
 
             i = definition_iterator_dict[ 'service_tag_ids_to_tags' ]
 
-            for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
+            for chunk in HydrusLists.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
 
                 inserts = []

@@ -691,7 +693,7 @@ class ClientDBRepositories( ClientDBModule.ClientDBModule ):
 
                 num_rows_processed += len( inserts )
 
-                if HydrusData.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
+                if HydrusTime.TimeHasPassedPrecise( precise_time_to_stop ) or job_key.IsCancelled():
 
                     return num_rows_processed
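`SplitIteratorIntoAutothrottledChunks` (moved here from HydrusData to HydrusLists) pulls batches from an iterator and adapts the batch size to how long the consumer spent on each batch, so work stays responsive near a deadline. A simplified sketch of that idea; the throttle thresholds and doubling/halving rule are assumptions for illustration, not the hydrus implementation:

```python
import time
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar( 'T' )

def SplitIteratorIntoAutothrottledChunks( iterator: Iterable[ T ], starting_size: int, precise_time_to_stop: float ) -> Iterator[ List[ T ] ]:
    # illustrative throttle: halve the chunk if the consumer ran long, double it if it ran fast
    chunk_size = starting_size
    it = iter( iterator )
    while True:
        chunk = []
        for _ in range( chunk_size ):
            try:
                chunk.append( next( it ) )
            except StopIteration:
                break
        if len( chunk ) == 0:
            return
        started = time.perf_counter()
        yield chunk  # consumer processes the chunk here; its time counts toward the throttle
        elapsed = time.perf_counter() - started
        if elapsed > 0.5:
            chunk_size = max( 1, chunk_size // 2 )
        elif elapsed < 0.05:
            chunk_size *= 2
        if time.perf_counter() > precise_time_to_stop:
            return
```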
@@ -10,6 +10,7 @@ from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.db import ClientDBModule

@@ -130,12 +131,12 @@ class MaintenanceTracker( object ):
 
     def HashedSerialisableMaintenanceDue( self ):
 
-        return HydrusData.TimeHasPassed( self._last_hashed_serialisable_maintenance + 86400 ) or self._total_new_hashed_serialisable_bytes > 512 * 1048576
+        return HydrusTime.TimeHasPassed( self._last_hashed_serialisable_maintenance + 86400 ) or self._total_new_hashed_serialisable_bytes > 512 * 1048576
 
     def NotifyHashedSerialisableMaintenanceDone( self ):
 
-        self._last_hashed_serialisable_maintenance = HydrusData.GetNow()
+        self._last_hashed_serialisable_maintenance = HydrusTime.GetNow()
 
         self._total_new_hashed_serialisable_bytes = 0

@@ -705,7 +706,7 @@ class ClientDBSerialisable( ClientDBModule.ClientDBModule ):
 
         if force_timestamp is None:
 
-            object_timestamp = HydrusData.GetNow()
+            object_timestamp = HydrusTime.GetNow()
 
             if store_backups:

@@ -5,6 +5,7 @@ import typing
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusTime
 
 from hydrus.client.db import ClientDBDefinitionsCache
 from hydrus.client.db import ClientDBMaster

@@ -6,6 +6,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientLocation

@@ -7,6 +7,8 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientThreading
 from hydrus.client.db import ClientDBFilesStorage

@@ -503,7 +505,7 @@ class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):
 
     def MaintainTree( self, maintenance_mode = HC.MAINTENANCE_FORCED, job_key = None, stop_time = None ):
 
-        time_started = HydrusData.GetNow()
+        time_started = HydrusTime.GetNow()
 
         pub_job_key = False
         job_key_pubbed = False

@@ -524,7 +526,7 @@ class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):
 
         while len( rebalance_perceptual_hash_ids ) > 0:
 
-            if pub_job_key and not job_key_pubbed and HydrusData.TimeHasPassed( time_started + 5 ):
+            if pub_job_key and not job_key_pubbed and HydrusTime.TimeHasPassed( time_started + 5 ):
 
                 HG.client_controller.pub( 'modal_message', job_key )

@@ -705,7 +707,7 @@ class ClientDBSimilarFiles( ClientDBModule.ClientDBModule ):
 
             num_cycles += 1
             total_nodes_searched += len( current_potentials )
 
-            for group_of_current_potentials in HydrusData.SplitListIntoChunks( current_potentials, 10000 ):
+            for group_of_current_potentials in HydrusLists.SplitListIntoChunks( current_potentials, 10000 ):
 
                 # this is split into fixed lists of results of subgroups because as an iterable it was causing crashes on linux!!
                 # after investigation, it seemed to be SQLite having a problem with part of Get64BitHammingDistance touching phashes it presumably was still hanging on to
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientSearch

@@ -321,9 +322,9 @@ class ClientDBTagDisplay( ClientDBModule.ClientDBModule ):
 
         if HG.autocomplete_delay_mode:
 
-            time_to_stop = HydrusData.GetNowFloat() + 3.0
+            time_to_stop = HydrusTime.GetNowFloat() + 3.0
 
-            while not HydrusData.TimeHasPassedFloat( time_to_stop ):
+            while not HydrusTime.TimeHasPassedFloat( time_to_stop ):
 
                 time.sleep( 0.1 )

@@ -355,7 +356,7 @@ class ClientDBTagDisplay( ClientDBModule.ClientDBModule ):
 
                     showed_bad_tag_error = True
 
-                    HydrusData.ShowText( 'Hey, you seem to have an invalid tag in view right now! Please run the \'repair invalid tags\' routine under the \'database\' menu asap!' )
+                    HydrusData.ShowText( 'Hey, you seem to have an invalid tag in view right now! Please run the \'fix invalid tags\' routine under the \'database\' menu asap!' )
 
                 continue

@@ -6,6 +6,7 @@ import typing
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusDBBase
+from hydrus.core import HydrusTime
 
 from hydrus.client.db import ClientDBDefinitionsCache
 from hydrus.client.db import ClientDBModule

@@ -8,7 +8,9 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusDB
 from hydrus.core import HydrusDBBase
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientSearch

@@ -479,9 +481,9 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
         if HG.autocomplete_delay_mode and not exact_match:
 
-            time_to_stop = HydrusData.GetNowFloat() + 3.0
+            time_to_stop = HydrusTime.GetNowFloat() + 3.0
 
-            while not HydrusData.TimeHasPassedFloat( time_to_stop ):
+            while not HydrusTime.TimeHasPassedFloat( time_to_stop ):
 
                 time.sleep( 0.1 )

@@ -646,7 +648,7 @@ class ClientDBTagSearch( ClientDBModule.ClientDBModule ):
 
         tag_ids_without_siblings = list( tag_ids )
 
-        for batch_of_tag_ids in HydrusData.SplitListIntoChunks( tag_ids_without_siblings, 10240 ):
+        for batch_of_tag_ids in HydrusLists.SplitListIntoChunks( tag_ids_without_siblings, 10240 ):
 
            with self._MakeTemporaryIntegerTable( batch_of_tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
@@ -7,14 +7,17 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientPaths
 from hydrus.client import ClientSearch
+from hydrus.client import ClientThreading
 from hydrus.client.metadata import ClientMetadataMigration
 from hydrus.client.metadata import ClientTags

@@ -278,7 +281,7 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
     SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_EXPORT_FOLDER
     SERIALISABLE_NAME = 'Export Folder'
-    SERIALISABLE_VERSION = 7
+    SERIALISABLE_VERSION = 8
 
     def __init__(
         self,

@@ -293,9 +296,9 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
         period = 3600,
         phrase = None,
         last_checked = 0,
         paused = False,
         run_now = False,
-        last_error = ''
+        last_error = '',
+        show_working_popup = True
     ):
 
         HydrusSerialisable.SerialisableBaseNamed.__init__( self, name )

@@ -332,9 +335,9 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
         self._period = period
         self._phrase = phrase
         self._last_checked = last_checked
         self._paused = paused and not run_now
         self._run_now = run_now
         self._last_error = last_error
+        self._show_working_popup = show_working_popup
 
     def _GetSerialisableInfo( self ):

@@ -342,12 +345,40 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
         serialisable_file_search_context = self._file_search_context.GetSerialisableTuple()
         serialisable_metadata_routers = self._metadata_routers.GetSerialisableTuple()
 
-        return ( self._path, self._export_type, self._delete_from_client_after_export, self._export_symlinks, serialisable_file_search_context, serialisable_metadata_routers, self._run_regularly, self._period, self._phrase, self._last_checked, self._paused, self._run_now, self._last_error )
+        return (
+            self._path,
+            self._export_type,
+            self._delete_from_client_after_export,
+            self._export_symlinks,
+            serialisable_file_search_context,
+            serialisable_metadata_routers,
+            self._run_regularly,
+            self._period,
+            self._phrase,
+            self._last_checked,
+            self._run_now,
+            self._last_error,
+            self._show_working_popup
+        )
 
     def _InitialiseFromSerialisableInfo( self, serialisable_info ):
 
-        ( self._path, self._export_type, self._delete_from_client_after_export, self._export_symlinks, serialisable_file_search_context, serialisable_metadata_routers, self._run_regularly, self._period, self._phrase, self._last_checked, self._paused, self._run_now, self._last_error ) = serialisable_info
+        (
+            self._path,
+            self._export_type,
+            self._delete_from_client_after_export,
+            self._export_symlinks,
+            serialisable_file_search_context,
+            serialisable_metadata_routers,
+            self._run_regularly,
+            self._period,
+            self._phrase,
+            self._last_checked,
+            self._run_now,
+            self._last_error,
+            self._show_working_popup
+        ) = serialisable_info
 
         if self._export_type == HC.EXPORT_FOLDER_TYPE_SYNCHRONISE:

@@ -418,11 +449,11 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             return ( 6, new_serialisable_info )
 
         if version == 6:
 
             ( path, export_type, delete_from_client_after_export, serialisable_file_search_context, serialisable_metadata_routers, run_regularly, period, phrase, last_checked, paused, run_now, last_error ) = old_serialisable_info
 
             export_symlinks = False
 
             new_serialisable_info = ( path, export_type, delete_from_client_after_export, export_symlinks, serialisable_file_search_context, serialisable_metadata_routers, run_regularly, period, phrase, last_checked, paused, run_now, last_error )

@@ -430,30 +461,60 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             return ( 7, new_serialisable_info )
 
+        if version == 7:
+
+            (
+                path,
+                export_type,
+                delete_from_client_after_export,
+                export_symlinks,
+                serialisable_file_search_context,
+                serialisable_metadata_routers,
+                run_regularly,
+                period,
+                phrase,
+                last_checked,
+                paused,
+                run_now,
+                last_error
+            ) = old_serialisable_info
+
+            show_working_popup = True
+
+            if paused:
+
+                run_regularly = False
+
+            new_serialisable_info = ( path, export_type, delete_from_client_after_export, export_symlinks, serialisable_file_search_context, serialisable_metadata_routers, run_regularly, period, phrase, last_checked, run_now, last_error, show_working_popup )
+
+            return ( 8, new_serialisable_info )
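The version 7 → 8 updater above follows the usual serialisable pattern: unpack the old tuple, derive the new fields (here folding the separate paused flag into run_regularly and defaulting show_working_popup), and return the bumped version with the new tuple. A stripped-down sketch of that pattern, using a hypothetical three-field tuple rather than the real ExportFolder fields:

```python
def update_serialisable_info( version, old_serialisable_info ):
    # chain of one-step updaters; each step returns ( new_version, new_info )
    if version == 7:
        ( run_regularly, paused, last_error ) = old_serialisable_info
        show_working_popup = True  # a brand-new field gets a sensible default
        if paused:
            run_regularly = False  # the retired paused flag folds into run_regularly
        new_serialisable_info = ( run_regularly, last_error, show_working_popup )
        return ( 8, new_serialisable_info )
    return ( version, old_serialisable_info )

assert update_serialisable_info( 7, ( True, True, '' ) ) == ( 8, ( False, '', True ) )
```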
-    def _DoExport( self ):
+    def _DoExport( self, job_key: ClientThreading.JobKey ):
 
         query_hash_ids = HG.client_controller.Read( 'file_query_ids', self._file_search_context, apply_implicit_limit = False )
 
         media_results = []
 
-        i = 0
+        CHUNK_SIZE = 256
 
-        base = 256
-
-        while i < len( query_hash_ids ):
+        for ( i, block_of_hash_ids ) in enumerate( HydrusLists.SplitListIntoChunks( query_hash_ids, 256 ) ):
+
+            job_key.SetStatusText( 'searching: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i * CHUNK_SIZE, len( query_hash_ids ) ) ) )
+
+            if job_key.IsCancelled():
+
+                return
 
             if HG.client_controller.new_options.GetBoolean( 'pause_export_folders_sync' ) or HydrusThreading.IsThreadShuttingDown():
 
                 return
 
-            if i == 0: ( last_i, i ) = ( 0, base )
-            else: ( last_i, i ) = ( i, i + base )
-
-            sub_query_hash_ids = query_hash_ids[ last_i : i ]
-
-            more_media_results = HG.client_controller.Read( 'media_results_from_ids', sub_query_hash_ids )
+            more_media_results = HG.client_controller.Read( 'media_results_from_ids', block_of_hash_ids )
 
             media_results.extend( more_media_results )
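The rewritten loop threads a job_key through the work so a popup can report progress and the user can cancel between chunks. A generic sketch of that cooperative-cancellation pattern, with a hypothetical minimal JobKey standing in for ClientThreading.JobKey:

```python
class JobKey:
    # hypothetical stand-in for ClientThreading.JobKey: status text plus a cancel flag
    def __init__( self ):
        self._cancelled = False
        self.status_text = ''
    def Cancel( self ):
        self._cancelled = True
    def IsCancelled( self ):
        return self._cancelled
    def SetStatusText( self, text ):
        self.status_text = text

def do_chunked_work( items, job_key, chunk_size = 256 ):
    done = []
    for i in range( 0, len( items ), chunk_size ):
        job_key.SetStatusText( 'working: {}/{}'.format( i, len( items ) ) )
        if job_key.IsCancelled():
            return done  # bail between chunks, never mid-chunk
        done.extend( items[ i : i + chunk_size ] )
    return done
```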
@@ -479,6 +540,13 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
         for ( i, media_result ) in enumerate( media_results ):
 
+            job_key.SetStatusText( 'exporting: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, len( media_results ) ) ) )
+
+            if job_key.IsCancelled():
+
+                return
+
             if HG.client_controller.new_options.GetBoolean( 'pause_export_folders_sync' ) or HydrusThreading.IsThreadShuttingDown():
 
                 return

@@ -486,7 +554,6 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             hash = media_result.GetHash()
             mime = media_result.GetMime()
-            size = media_result.GetSize()
 
             try:

@@ -569,7 +636,14 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             deletee_paths = previous_paths.difference( sync_paths )
 
-            for deletee_path in deletee_paths:
+            for ( i, deletee_path ) in enumerate( deletee_paths ):
+
+                if job_key.IsCancelled():
+
+                    return
+
+                job_key.SetStatusText( 'delete-synchronising: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, len( deletee_paths ) ) ) )
 
                 ClientPaths.DeletePath( deletee_path )

@@ -578,6 +652,11 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             for ( root, dirnames, filenames ) in os.walk( self._path, topdown = False ):
 
+                if job_key.IsCancelled():
+
+                    return
+
                 if root == self._path:
 
                     continue

@@ -615,7 +694,14 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             service_keys_to_deletee_hashes = collections.defaultdict( list )
 
-            for media_result in media_results:
+            for ( i, media_result ) in enumerate( media_results ):
+
+                if job_key.IsCancelled():
+
+                    return
+
+                job_key.SetStatusText( 'delete-prepping: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, len( media_results ) ) ) )
 
                 if media_result.IsDeleteLocked():

@@ -636,9 +722,18 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
             for ( service_key, deletee_hashes ) in service_keys_to_deletee_hashes.items():
 
-                chunks_of_hashes = HydrusData.SplitListIntoChunks( deletee_hashes, 64 )
+                CHUNK_SIZE = 64
 
-                for chunk_of_hashes in chunks_of_hashes:
+                chunks_of_hashes = HydrusLists.SplitListIntoChunks( deletee_hashes, CHUNK_SIZE )
+
+                for ( i, chunk_of_hashes ) in enumerate( chunks_of_hashes ):
+
+                    if job_key.IsCancelled():
+
+                        return
+
+                    job_key.SetStatusText( 'deleting: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i * CHUNK_SIZE, len( deletee_hashes ) ) ) )
 
                     content_update = HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, chunk_of_hashes, reason = reason )

@@ -647,18 +742,24 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
+        job_key.SetStatusText( 'Done!' )
 
     def DoWork( self ):
 
-        regular_run_due = self._run_regularly and HydrusData.TimeHasPassed( self._last_checked + self._period )
+        regular_run_due = self._run_regularly and HydrusTime.TimeHasPassed( self._last_checked + self._period )
 
-        good_to_go = ( regular_run_due or self._run_now ) and not self._paused
+        good_to_go = regular_run_due or self._run_now
 
         if not good_to_go:
 
             return
 
+        job_key = ClientThreading.JobKey( pausable = False, cancellable = True )
+
+        job_key.SetStatusTitle( 'export folder - ' + self._name )
+
         try:
 
             if self._path == '':

@@ -676,15 +777,33 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
                 raise Exception( 'The path, "{}", is not a directory!'.format( self._path ) )
 
-            self._DoExport()
+            popup_desired = self._show_working_popup or self._run_now
+
+            if popup_desired:
+
+                HG.client_controller.pub( 'message', job_key )
+
+            self._DoExport( job_key )
 
             self._last_error = ''
 
         except Exception as e:
 
-            self._paused = True
+            if self._run_regularly:
+
+                self._run_regularly = False
+
+                pause_str = 'It has been set to not run regularly.'
+
+            else:
+
+                pause_str = ''
+
-            HydrusData.ShowText( 'The export folder "' + self._name + '" encountered an error! It has now been paused. Please check the folder\'s settings and maybe report to hydrus dev if the error is complicated! The error follows:' )
+            message = f'The export folder "{self._name}" encountered an error! {pause_str}Please check the folder\'s settings and maybe report to hydrus dev if the error is complicated! The error follows:'
+
+            HydrusData.ShowText( message )
 
             HydrusData.ShowException( e )

@@ -692,11 +811,13 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
         finally:
 
-            self._last_checked = HydrusData.GetNow()
+            self._last_checked = HydrusTime.GetNow()
 
             self._run_now = False
 
             HG.client_controller.WriteSynchronous( 'serialisable', self )
 
+            job_key.Delete()
 
     def GetLastError( self ) -> str:

@@ -711,13 +832,17 @@ class ExportFolder( HydrusSerialisable.SerialisableBaseNamed ):
 
     def RunNow( self ):
 
         self._paused = False
         self._run_now = True
 
+    def ShowWorkingPopup( self ) -> bool:
+
+        return self._show_working_popup
 
     def ToTuple( self ):
 
-        return ( self._name, self._path, self._export_type, self._delete_from_client_after_export, self._export_symlinks, self._file_search_context, self._run_regularly, self._period, self._phrase, self._last_checked, self._paused, self._run_now )
+        return ( self._name, self._path, self._export_type, self._delete_from_client_after_export, self._export_symlinks, self._file_search_context, self._run_regularly, self._period, self._phrase, self._last_checked, self._run_now )
 
 HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_EXPORT_FOLDER ] = ExportFolder
@@ -31,10 +31,12 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusMemory
 from hydrus.core import HydrusPaths
+from hydrus.core import HydrusProfiling
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusTemp
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 from hydrus.core import HydrusVideoHandling
 from hydrus.core.networking import HydrusNetwork
 from hydrus.core.networking import HydrusNetworking

@@ -257,7 +259,7 @@ def THREADUploadPending( service_key ):
 
     while result is not None:
 
-        time_started_this_loop = HydrusData.GetNowPrecise()
+        time_started_this_loop = HydrusTime.GetNowPrecise()
 
         nums_pending = HG.client_controller.Read( 'nums_pending' )

@@ -314,7 +316,7 @@ def THREADUploadPending( service_key ):
 
             file_info_manager = media_result.GetFileInfoManager()
 
-            timestamp = HydrusData.GetNow()
+            timestamp = HydrusTime.GetNow()
 
             content_update_row = ( file_info_manager, timestamp )

@@ -378,7 +380,7 @@ def THREADUploadPending( service_key ):
 
         HG.client_controller.WaitUntilViewFree()
 
-        total_time_this_loop_took = HydrusData.GetNowPrecise() - time_started_this_loop
+        total_time_this_loop_took = HydrusTime.GetNowPrecise() - time_started_this_loop
 
        if total_time_this_loop_took > 1.5:

@@ -392,7 +394,7 @@ def THREADUploadPending( service_key ):
 
        result = HG.client_controller.Read( 'pending', service_key, content_types_to_request, ideal_weight = current_ideal_weight )
 
-    finished_all_uploads = result == None
+    finished_all_uploads = result is None
 
    if initial_num_pending > 0 and no_results_found and service_type == HC.TAG_REPOSITORY:
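One hunk here is a small lint fix rather than a module move: `result == None` becomes `result is None`. Comparing against None with `is` checks identity and cannot be fooled by a class that overrides `__eq__`; a tiny illustration with a deliberately pathological class (invented for this example, not from hydrus):

```python
class AlwaysEqual:
    # pathological __eq__ that claims equality with everything, including None
    def __eq__( self, other ):
        return True

x = AlwaysEqual()

print( x == None )   # True: __eq__ lies
print( x is None )   # False: identity cannot be faked
```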
|
@ -825,7 +827,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
library_version_lines.append( 'db cache size per file: {}MB'.format( HG.db_cache_size ) )
|
||||
library_version_lines.append( 'db journal mode: {}'.format( HG.db_journal_mode ) )
|
||||
library_version_lines.append( 'db synchronous mode: {}'.format( HG.db_synchronous ) )
|
||||
library_version_lines.append( 'db transaction commit period: {}'.format( HydrusData.TimeDeltaToPrettyTimeDelta( HG.db_cache_size ) ) )
|
||||
library_version_lines.append( 'db transaction commit period: {}'.format( HydrusTime.TimeDeltaToPrettyTimeDelta( HG.db_cache_size ) ) )
|
||||
library_version_lines.append( 'db using memory for temp?: {}'.format( HG.no_db_temp_files ) )
|
||||
|
||||
import locale
|
||||
|
@ -881,7 +883,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
|
||||
if result == QW.QDialog.Accepted:
|
||||
|
||||
stop_time = HydrusData.GetNow() + 120
|
||||
stop_time = HydrusTime.GetNow() + 120
|
||||
|
||||
self._controller.Write( 'analyze', maintenance_mode = HC.MAINTENANCE_FORCED, stop_time = stop_time )
|
||||
|
||||
|
@ -1015,7 +1017,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
|
||||
self._controller.Write( 'backup', path )
|
||||
|
||||
HG.client_controller.new_options.SetNoneableInteger( 'last_backup_time', HydrusData.GetNow() )
|
||||
HG.client_controller.new_options.SetNoneableInteger( 'last_backup_time', HydrusTime.GetNow() )
|
||||
|
||||
self._did_a_backup_this_session = True
|
||||
|
||||
|
@ -1027,7 +1029,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
|
||||
def do_it( service ):
|
||||
|
||||
started = HydrusData.GetNow()
|
||||
started = HydrusTime.GetNow()
|
||||
|
||||
service.Request( HC.POST, 'backup' )
|
||||
|
||||
|
@ -1049,9 +1051,9 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
result_bytes = service.Request( HC.GET, 'busy' )
|
||||
|
||||
|
||||
it_took = HydrusData.GetNow() - started
|
||||
it_took = HydrusTime.GetNow() - started
|
||||
|
||||
HydrusData.ShowText( 'Server backup done in ' + HydrusData.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )
|
||||
HydrusData.ShowText( 'Server backup done in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )
|
||||
|
||||
|
||||
message = 'This will tell the server to lock and copy its database files. It will probably take a few minutes to complete, during which time it will not be able to serve any requests.'
|
||||
|
@ -1386,7 +1388,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
break
|
||||
|
||||
|
||||
job_key.SetStatusText( 'Will auto-dismiss in ' + HydrusData.TimeDeltaToPrettyTimeDelta( 10 - i ) + '.' )
|
||||
job_key.SetStatusText( 'Will auto-dismiss in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( 10 - i ) + '.' )
|
||||
job_key.SetVariable( 'popup_gauge_1', ( i, 10 ) )
|
||||
|
||||
time.sleep( 1 )
|
||||
|
@ -1780,13 +1782,13 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M
|
|||
ip = response[ 'ip' ]
|
||||
timestamp = response[ 'timestamp' ]
|
||||
|
||||
gmt_time = HydrusData.ConvertTimestampToPrettyTime( timestamp, in_utc = True )
|
||||
local_time = HydrusData.ConvertTimestampToPrettyTime( timestamp )
|
||||
utc_time = HydrusTime.TimestampToPrettyTime( timestamp, in_utc = True )
|
||||
local_time = HydrusTime.TimestampToPrettyTime( timestamp )
|
||||
|
||||
text = 'File Hash: ' + hash.hex()
|
||||
text += os.linesep
|
||||
text += 'Uploader\'s IP: ' + ip
|
||||
text += 'Upload Time (GMT): ' + gmt_time
|
||||
text += 'Upload Time (UTC): ' + utc_time
|
||||
text += 'Upload Time (Your time): ' + local_time
|
||||
|
||||
HydrusData.Print( text )
|
||||
|
@@ -2379,13 +2381,13 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

if last_backup_time is not None:
    
-    if not HydrusData.TimeHasPassed( last_backup_time + 1800 ):
+    if not HydrusTime.TimeHasPassed( last_backup_time + 1800 ):
        
        message += ' (did one recently)'
        
    else:
        
-        message += ' (last {})'.format( HydrusData.TimestampToPrettyTimeDelta( last_backup_time ) )
+        message += ' (last {})'.format( HydrusTime.TimestampToPrettyTimeDelta( last_backup_time ) )
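The `TimeHasPassed( last_backup_time + 1800 )` call above is the recurring "has the wall clock reached this unix timestamp yet?" idiom. A one-line sketch of it (illustrative, not hydrus's real code):

```python
import time

# "Has this unix timestamp already gone by?" - the guard used above to skip
# the backup note if one ran within the last 30 minutes (1800 seconds).
def time_has_passed( timestamp: int ) -> bool:
    
    return time.time() > timestamp
    
```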
@@ -2572,7 +2574,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

for timestamp in timestamps:
    
-    ClientGUIMenus.AppendMenuItem( submenu, HydrusData.ConvertTimestampToPrettyTime( timestamp ), 'Append this backup session to whatever pages are already open.', self._notebook.AppendGUISessionBackup, name, timestamp )
+    ClientGUIMenus.AppendMenuItem( submenu, HydrusTime.TimestampToPrettyTime( timestamp ), 'Append this backup session to whatever pages are already open.', self._notebook.AppendGUISessionBackup, name, timestamp )
    

ClientGUIMenus.AppendMenu( append_backup, submenu, name )
@@ -4391,7 +4393,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

if nullification_period > HydrusNetwork.MAX_NULLIFICATION_PERIOD:
    
-    QW.QMessageBox.information( self, 'Information', 'Sorry, the value you entered was too high. The max is {}.'.format( HydrusData.TimeDeltaToPrettyTimeDelta( HydrusNetwork.MAX_NULLIFICATION_PERIOD ) ) )
+    QW.QMessageBox.information( self, 'Information', 'Sorry, the value you entered was too high. The max is {}.'.format( HydrusTime.TimeDeltaToPrettyTimeDelta( HydrusNetwork.MAX_NULLIFICATION_PERIOD ) ) )
    
    return
@@ -4520,7 +4522,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

if update_period > HydrusNetwork.MAX_UPDATE_PERIOD:
    
-    QW.QMessageBox.information( self, 'Information', 'Sorry, the value you entered was too high. The max is {}.'.format( HydrusData.TimeDeltaToPrettyTimeDelta( HydrusNetwork.MAX_UPDATE_PERIOD ) ) )
+    QW.QMessageBox.information( self, 'Information', 'Sorry, the value you entered was too high. The max is {}.'.format( HydrusTime.TimeDeltaToPrettyTimeDelta( HydrusNetwork.MAX_UPDATE_PERIOD ) ) )
    
    return
@@ -6286,7 +6288,7 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

def do_it( service ):
    
-    started = HydrusData.GetNow()
+    started = HydrusTime.GetNow()
    
    service.Request( HC.POST, 'maintenance_regen_service_info' )

@@ -6308,9 +6310,9 @@ class FrameGUI( CAC.ApplicationCommandProcessorMixin, ClientGUITopLevelWindows.M

result_bytes = service.Request( HC.GET, 'busy' )

-it_took = HydrusData.GetNow() - started
+it_took = HydrusTime.GetNow() - started

-HydrusData.ShowText( 'Server maintenance done in ' + HydrusData.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )
+HydrusData.ShowText( 'Server maintenance done in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )

message = 'This will tell the server to recalculate the cached numbers for number of files, mappings, actionable petitions and so on. It may take a little while to complete, during which time it will not be able to serve any requests.'
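The `do_it` pattern above times a long server job by taking `GetNow()` before and after and pretty-printing the difference. A minimal sketch of the same idiom (the `get_now` name is illustrative; hydrus's helper presumably wraps the same integer wall clock):

```python
import time

# GetNow()-style helper: current wall-clock time as whole unix seconds, so
# subtracting two readings gives an integer duration for the 'done in X'
# message above.
def get_now() -> int:
    
    return int( time.time() )
    

started = get_now()

# ... long-running job would go here ...

it_took = get_now() - started
```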
@@ -6865,7 +6867,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

def do_it( service ):
    
-    started = HydrusData.GetNow()
+    started = HydrusTime.GetNow()
    
    service.Request( HC.POST, 'vacuum' )

@@ -6887,9 +6889,9 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

result_bytes = service.Request( HC.GET, 'busy' )

-it_took = HydrusData.GetNow() - started
+it_took = HydrusTime.GetNow() - started

-HydrusData.ShowText( 'Server vacuum done in ' + HydrusData.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )
+HydrusData.ShowText( 'Server vacuum done in ' + HydrusTime.TimeDeltaToPrettyTimeDelta( it_took ) + '!' )

message = 'This will tell the server to lock and vacuum its database files. It may take some time to complete, during which time it will not be able to serve any requests.'
@@ -7045,7 +7047,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

new_closed_pages = []

-now = HydrusData.GetNow()
+now = HydrusTime.GetNow()

timeout = 60 * 60
@@ -7165,7 +7167,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

summary = 'Profiling animation timer: ' + repr( window )

-HydrusData.Profile( summary, 'window.TIMERAnimationUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
+HydrusProfiling.Profile( summary, 'window.TIMERAnimationUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )

else:
@@ -7478,7 +7480,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

self._clipboard_watcher_destination_page_watcher = None

-close_time = HydrusData.GetNow()
+close_time = HydrusTime.GetNow()

self._closed_pages.append( ( close_time, page ) )
@@ -7867,7 +7869,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

boot_time = self._controller.GetBootTime()

-time_since_boot = max( 1, HydrusData.GetNow() - boot_time )
+time_since_boot = max( 1, HydrusTime.GetNow() - boot_time )

usage_since_boot = global_tracker.GetUsage( HC.BANDWIDTH_TYPE_DATA, time_since_boot )
@@ -7965,7 +7967,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

summary = 'Profiling page timer: ' + repr( page )

-HydrusData.Profile( summary, 'page.REPEATINGPageUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
+HydrusProfiling.Profile( summary, 'page.REPEATINGPageUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )

else:
@@ -8008,7 +8010,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

summary = 'Profiling ui update timer: ' + repr( window )

-HydrusData.Profile( summary, 'window.TIMERUIUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
+HydrusProfiling.Profile( summary, 'window.TIMERUIUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )

else:
@@ -8295,7 +8297,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

shutdown_work_period = self._controller.new_options.GetInteger( 'shutdown_work_period' )

-shutdown_work_due = HydrusData.TimeHasPassed( last_shutdown_work_time + shutdown_work_period )
+shutdown_work_due = HydrusTime.TimeHasPassed( last_shutdown_work_time + shutdown_work_period )

if shutdown_work_due:
@@ -8307,7 +8309,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p

idle_shutdown_max_minutes = self._controller.options[ 'idle_shutdown_max_minutes' ]

-time_to_stop = HydrusData.GetNow() + ( idle_shutdown_max_minutes * 60 )
+time_to_stop = HydrusTime.GetNow() + ( idle_shutdown_max_minutes * 60 )

work_to_do = self._controller.GetIdleShutdownWorkDue( time_to_stop )
@@ -4,6 +4,7 @@ from qtpy import QtWidgets as QW

from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientAPI
from hydrus.client import ClientConstants as CC

@@ -19,7 +20,7 @@ class CaptureAPIAccessPermissionsRequestPanel( ClientGUIScrolledPanels.ReviewPan

ClientGUIScrolledPanels.ReviewPanel.__init__( self, parent )

-self._time_started = HydrusData.GetNow()
+self._time_started = HydrusTime.GetNow()

self._api_permissions = None
@@ -7,6 +7,7 @@ from qtpy import QtWidgets as QW

from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client.gui import QtPorting as QP
@@ -8,6 +8,7 @@ from qtpy import QtGui as QG

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client.gui import ClientGUIFrames

@@ -222,7 +223,7 @@ class DialogGenerateNewAccounts( Dialog ):

else:
    
-    expires = HydrusData.GetNow() + lifetime
+    expires = HydrusTime.GetNow() + lifetime
    

service = HG.client_controller.services_manager.GetService( self._service_key )

@@ -296,7 +297,7 @@ class DialogInputLocalBooruShare( Dialog ):

else:
    
-    time_left = HydrusData.GetTimeDeltaUntilTime( timeout )
+    time_left = HydrusTime.GetTimeDeltaUntilTime( timeout )
    
    if time_left < 60 * 60 * 12: time_value = 60
    elif time_left < 60 * 60 * 24 * 7: time_value = 60 * 60

@@ -406,7 +407,7 @@ class DialogInputLocalBooruShare( Dialog ):

timeout = self._timeout_number.GetValue()

-if timeout is not None: timeout = timeout * self._timeout_multiplier.GetValue() + HydrusData.GetNow()
+if timeout is not None: timeout = timeout * self._timeout_multiplier.GetValue() + HydrusTime.GetNow()

return ( self._share_key, name, text, timeout, self._hashes )
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core.networking import HydrusNATPunch
+from hydrus.core import HydrusTime

from hydrus.client import ClientApplicationCommand as CAC
from hydrus.client import ClientConstants as CC

@@ -714,7 +715,7 @@ class DialogManageUPnP( ClientGUIDialogs.Dialog ):

else:
    
-    pretty_duration = HydrusData.TimeDeltaToPrettyTimeDelta( duration )
+    pretty_duration = HydrusTime.TimeDeltaToPrettyTimeDelta( duration )
    

display_tuple = ( description, internal_ip, str( internal_port ), str( external_port ), protocol, pretty_duration )
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientDefaults
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusPaths
from hydrus.core import HydrusText
+from hydrus.core import HydrusTime

from hydrus.client.exporting import ClientExportingFiles
from hydrus.client.gui import ClientGUIFunctions
@@ -4,6 +4,7 @@ from qtpy import QtWidgets as QW

from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client.gui import ClientGUIDialogsQuick
@@ -10,12 +10,13 @@ from hydrus.core import HydrusExceptions

from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusPaths
from hydrus.core import HydrusText
from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client import ClientLocation
from hydrus.client import ClientPaths
from hydrus.client import ClientSerialisable
from hydrus.client import ClientTime
from hydrus.client.gui import ClientGUIDialogsQuick
from hydrus.client.gui import ClientGUIMenus
from hydrus.client.gui import ClientGUISerialisable

@@ -349,8 +350,8 @@ class EditFileSeedCachePanel( ClientGUIScrolledPanels.EditPanel ):

pretty_file_seed_data = str( file_seed_data )
pretty_status = CC.status_string_lookup[ status ]
-pretty_added = ClientData.TimestampToPrettyTimeDelta( added )
-pretty_modified = ClientData.TimestampToPrettyTimeDelta( modified )
+pretty_added = ClientTime.TimestampToPrettyTimeDelta( added )
+pretty_modified = ClientTime.TimestampToPrettyTimeDelta( modified )

if source_time is None:

@@ -358,7 +359,7 @@ class EditFileSeedCachePanel( ClientGUIScrolledPanels.EditPanel ):

else:
    
-    pretty_source_time = ClientData.TimestampToPrettyTimeDelta( source_time )
+    pretty_source_time = ClientTime.TimestampToPrettyTimeDelta( source_time )
    

sort_source_time = ClientGUIListCtrl.SafeNoneInt( source_time )
@@ -8,11 +8,13 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusText
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client import ClientPaths
from hydrus.client import ClientSerialisable
+from hydrus.client import ClientTime
from hydrus.client.gui import ClientGUIDialogsQuick
from hydrus.client.gui import ClientGUIMenus
from hydrus.client.gui import ClientGUISerialisable

@@ -298,8 +300,8 @@ class EditGallerySeedLogPanel( ClientGUIScrolledPanels.EditPanel ):

pretty_gallery_seed_index = HydrusData.ToHumanInt( gallery_seed_index )
pretty_url = url
pretty_status = CC.status_string_lookup[ status ]
-pretty_added = ClientData.TimestampToPrettyTimeDelta( added )
-pretty_modified = ClientData.TimestampToPrettyTimeDelta( modified )
+pretty_added = ClientTime.TimestampToPrettyTimeDelta( added )
+pretty_modified = ClientTime.TimestampToPrettyTimeDelta( modified )
pretty_note = note.split( os.linesep )[0]

display_tuple = ( pretty_gallery_seed_index, pretty_url, pretty_status, pretty_added, pretty_modified, pretty_note )
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientDefaults

@@ -555,7 +556,7 @@ class EditLoginsPanel( ClientGUIScrolledPanels.EditPanel ):

( login_domain, login_script_key_and_name, credentials_tuple, login_access_type, login_access_text, active, validity, validity_error_text, no_work_until, no_work_until_reason ) = domain_and_login_info

-if not HydrusData.TimeHasPassed( no_work_until ) or no_work_until_reason != '':
+if not HydrusTime.TimeHasPassed( no_work_until ) or no_work_until_reason != '':
    
    return True
    

@@ -659,7 +660,7 @@ class EditLoginsPanel( ClientGUIScrolledPanels.EditPanel ):

if login_expiry is None:
    
-    sort_login_expiry = HydrusData.GetNow() + 45 * 60
+    sort_login_expiry = HydrusTime.GetNow() + 45 * 60
    
else:

@@ -668,13 +669,13 @@ class EditLoginsPanel( ClientGUIScrolledPanels.EditPanel ):

sort_logged_in = ( logged_in, sort_login_expiry )

-if HydrusData.TimeHasPassed( no_work_until ):
+if HydrusTime.TimeHasPassed( no_work_until ):
    
    pretty_no_work_until = ''
    
else:
    
-    pretty_no_work_until = '{} - {}'.format( HydrusData.ConvertTimestampToPrettyExpires( no_work_until ), no_work_until_reason )
+    pretty_no_work_until = '{} - {}'.format( HydrusTime.TimestampToPrettyExpires( no_work_until ), no_work_until_reason )
    

pretty_login_domain = login_domain

@@ -699,7 +700,7 @@ class EditLoginsPanel( ClientGUIScrolledPanels.EditPanel ):

else:
    
-    pretty_login_expiry = HydrusData.ConvertTimestampToPrettyExpires( login_expiry )
+    pretty_login_expiry = HydrusTime.TimestampToPrettyExpires( login_expiry )
    

pretty_logged_in = 'yes - {}'.format( pretty_login_expiry )
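The `TimestampToPrettyExpires` calls above describe a future timestamp relative to now, for the login table's "logged in until" column. A rough sketch of such a helper (the wording and unit breakpoints are assumptions, not hydrus's real output):

```python
import time

# Describe when a timestamp "expires" relative to now. The now parameter is
# injectable so the behaviour is testable; the real helper presumably just
# reads the wall clock.
def timestamp_to_pretty_expires( timestamp: int, now: float = None ) -> str:
    
    if now is None:
        
        now = time.time()
        
    
    seconds = int( timestamp - now )
    
    if seconds <= 0:
        
        return 'expired'
        
    
    if seconds < 3600:
        
        return 'expires in {} minutes'.format( max( 1, seconds // 60 ) )
        
    
    return 'expires in {} hours'.format( seconds // 3600 )
    
```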
@@ -10,6 +10,7 @@ from hydrus.core import HydrusExceptions

from hydrus.core import HydrusPaths
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientLocation
@@ -9,6 +9,9 @@ from hydrus.core import HydrusData

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusImageHandling
+from hydrus.core import HydrusLists
+from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime

from hydrus.client import ClientApplicationCommand as CAC
from hydrus.client import ClientConstants as CC

@@ -598,13 +601,13 @@ def MoveOrDuplicateLocalFiles( win: QW.QWidget, dest_service_key: bytes, action:

HG.client_controller.pub( 'message', job_key )

-pauser = HydrusData.BigJobPauser()
+pauser = HydrusThreading.BigJobPauser()

num_to_do = len( applicable_media )

-now = HydrusData.GetNow()
+now = HydrusTime.GetNow()

-for ( i, block_of_media ) in enumerate( HydrusData.SplitListIntoChunks( applicable_media, BLOCK_SIZE ) ):
+for ( i, block_of_media ) in enumerate( HydrusLists.SplitListIntoChunks( applicable_media, BLOCK_SIZE ) ):
    
    if job_key.IsCancelled():
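The loop above walks the media in fixed-size blocks via `SplitListIntoChunks`. A minimal sketch of such a chunking generator (illustrative, not hydrus's real implementation):

```python
# Yield successive fixed-size blocks of a list, as the batched loop above
# consumes them. Being a generator, the result is single-use - which matches
# the 'iterator, so list it to use it more than once' comment seen later in
# this commit.
def split_list_into_chunks( xs, chunk_size ):
    
    for i in range( 0, len( xs ), chunk_size ):
        
        yield xs[ i : i + chunk_size ]
        
    
```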
@@ -8,6 +8,7 @@ from hydrus.core import HydrusConstants as HC

from hydrus.core import HydrusExceptions
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientApplicationCommand as CAC
from hydrus.client import ClientConstants as CC
@@ -5,7 +5,9 @@ from qtpy import QtGui as QG

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusProfiling
from hydrus.core import HydrusText
+from hydrus.core import HydrusTime

from hydrus.client.gui import QtPorting as QP

@@ -239,7 +241,7 @@ def GetEventCallable( callable, *args, **kwargs ):

summary = 'Profiling menu: ' + repr( callable )

-HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = HG.menu_profile_min_job_time_ms )
+HydrusProfiling.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = HG.menu_profile_min_job_time_ms )

else:
@@ -8,6 +8,7 @@ from qtpy import QtWidgets as QW

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData

@@ -476,10 +477,10 @@ class PopupMessage( PopupWindow ):

self._network_job_ctrl.ClearNetworkJob()

-self._time_network_job_disappeared = HydrusData.GetNow()
+self._time_network_job_disappeared = HydrusTime.GetNow()

-if self._network_job_ctrl.isVisible() and HydrusData.TimeHasPassed( self._time_network_job_disappeared + 10 ):
+if self._network_job_ctrl.isVisible() and HydrusTime.TimeHasPassed( self._time_network_job_disappeared + 10 ):
    
    self._network_job_ctrl.hide()
@@ -7,6 +7,7 @@ from qtpy import QtWidgets as QW

from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime

from hydrus.client.gui import QtPorting as QP
from hydrus.client.gui.widgets import ClientGUICommon
@@ -3,6 +3,7 @@ from qtpy import QtWidgets as QW

from qtpy import QtGui as QG

from hydrus.core import HydrusData
+from hydrus.core import HydrusTime

from hydrus.client import ClientApplicationCommand as CAC
from hydrus.client import ClientConstants as CC

@@ -128,7 +129,7 @@ class ResizingScrolledPanel( QW.QScrollArea ):

# ok this is a stupid sizing hack to stop the ever-growing window that has a sizeHint three pixels bigger than its current size causing a resize growth loop
# if we get a bunch of resizes real quick, we cut them off, hopefully breaking the cycle
-if not HydrusData.TimeHasPassedFloat( self._last_just_sized_cascade_start_time + 1.0 ):
+if not HydrusTime.TimeHasPassedFloat( self._last_just_sized_cascade_start_time + 1.0 ):
    
    self._number_of_just_sizedes += 1
    

@@ -139,7 +140,7 @@ class ResizingScrolledPanel( QW.QScrollArea ):

else:
    
-    self._last_just_sized_cascade_start_time = HydrusTime.GetNowFloat()
+    self._last_just_sized_cascade_start_time = HydrusTime.GetNowFloat()
    
    self._number_of_just_sizedes = 1
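The resize-cascade hack above counts resize events that arrive within one second of the first, so a runaway sizeHint loop can be cut off. The timing logic can be sketched outside any Qt widget (names here are illustrative):

```python
# Count events that land inside a one-second window opened by the first
# event, and start a fresh window otherwise - the float-timestamp pattern
# the TimeHasPassedFloat/GetNowFloat calls above implement.
class CascadeCounter:
    
    def __init__( self ):
        
        self._window_start = 0.0
        self._count = 0
        
    
    def event( self, now: float ) -> int:
        
        if now < self._window_start + 1.0:
            
            # still inside the window: this is part of the same cascade
            self._count += 1
            
        else:
            
            # window expired: open a new one
            self._window_start = now
            self._count = 1
            
        
        return self._count
        
    
```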
@@ -11,9 +11,12 @@ from hydrus.core import HydrusConstants as HC

from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
from hydrus.core import HydrusPaths
from hydrus.core import HydrusSerialisable
from hydrus.core import HydrusTags
from hydrus.core import HydrusText
+from hydrus.core import HydrusTime

from hydrus.client import ClientApplicationCommand as CAC
from hydrus.client import ClientConstants as CC

@@ -36,6 +39,7 @@ from hydrus.client.gui.widgets import ClientGUICommon

from hydrus.client.importing.options import NoteImportOptions
from hydrus.client.importing.options import TagImportOptions
from hydrus.client.media import ClientMedia
+from hydrus.client.media import ClientMediaFileFilter

class EditChooseMultiple( ClientGUIScrolledPanels.EditPanel ):
@@ -737,7 +741,7 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):

if service.GetServiceType() in HC.LOCAL_FILE_SERVICES:
    
-    media = ClientMedia.FilterAndReportDeleteLockFailures( media )
+    media = ClientMediaFileFilter.FilterAndReportDeleteLockFailures( media )
    

return media

@@ -874,7 +878,7 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):

text = template.format( file_desc, deletee_service.GetName() )

-chunks_of_hashes = HydrusData.SplitListIntoChunks( hashes, 64 )
+chunks_of_hashes = HydrusLists.SplitListIntoChunks( hashes, 64 )

content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, chunk_of_hashes ) for chunk_of_hashes in chunks_of_hashes ]

@@ -897,7 +901,7 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):

h = [ m.GetHash() for m in self._media if CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY in m.GetLocationsManager().GetCurrent() ]

-chunks_of_hashes = HydrusData.SplitListIntoChunks( h, 64 )
+chunks_of_hashes = HydrusLists.SplitListIntoChunks( h, 64 )

content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, chunk_of_hashes ) for chunk_of_hashes in chunks_of_hashes ]

@@ -973,7 +977,7 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):

text = 'Permanently delete {}?'.format( suffix )

-chunks_of_hashes = HydrusData.SplitListIntoChunks( hashes, 64 )
+chunks_of_hashes = HydrusLists.SplitListIntoChunks( hashes, 64 )

content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_DELETE, chunk_of_hashes ) for chunk_of_hashes in chunks_of_hashes ]

@@ -1010,7 +1014,7 @@ class EditDeleteFilesPanel( ClientGUIScrolledPanels.EditPanel ):

text = 'Permanently delete these ' + HydrusData.ToHumanInt( num_to_delete ) + ' files and do not save a deletion record?'

-chunks_of_hashes = list( HydrusData.SplitListIntoChunks( hashes, 64 ) ) # iterator, so list it to use it more than once, jej
+chunks_of_hashes = list( HydrusLists.SplitListIntoChunks( hashes, 64 ) ) # iterator, so list it to use it more than once, jej

list_of_service_keys_to_content_updates = []
@@ -1242,19 +1246,23 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

self._sync_archive_action.addItem( 'always archive both', ClientDuplicates.SYNC_ARCHIVE_DO_BOTH_REGARDLESS )

self._sync_urls_action = ClientGUICommon.BetterChoice( self )
+self._sync_file_modified_date_action = ClientGUICommon.BetterChoice( self )
self._sync_notes_action = ClientGUICommon.BetterChoice( self )

-self._sync_urls_action.addItem( 'make no change', HC.CONTENT_MERGE_ACTION_NONE )
-self._sync_notes_action.addItem( 'make no change', HC.CONTENT_MERGE_ACTION_NONE )
+self._sync_urls_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_NONE ], HC.CONTENT_MERGE_ACTION_NONE )
+self._sync_file_modified_date_action.addItem( HC.content_modified_date_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_NONE ], HC.CONTENT_MERGE_ACTION_NONE )
+self._sync_notes_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_NONE ], HC.CONTENT_MERGE_ACTION_NONE )

if self._duplicate_action == HC.DUPLICATE_BETTER:
    
    self._sync_urls_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_COPY ], HC.CONTENT_MERGE_ACTION_COPY )
+    self._sync_file_modified_date_action.addItem( HC.content_modified_date_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_COPY ], HC.CONTENT_MERGE_ACTION_COPY )
    self._sync_notes_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_COPY ], HC.CONTENT_MERGE_ACTION_COPY )
    self._sync_notes_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_MOVE ], HC.CONTENT_MERGE_ACTION_MOVE )
    

self._sync_urls_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE ], HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
+self._sync_file_modified_date_action.addItem( HC.content_modified_date_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE ], HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )
self._sync_notes_action.addItem( HC.content_merge_string_lookup[ HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE ], HC.CONTENT_MERGE_ACTION_TWO_WAY_MERGE )

self._sync_note_import_options_button = ClientGUICommon.BetterButton( self, 'note merge settings', self._EditNoteImportOptions )
@@ -1265,6 +1273,7 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

rating_service_options = duplicate_content_merge_options.GetRatingServiceActions()
sync_archive_action = duplicate_content_merge_options.GetSyncArchiveAction()
sync_urls_action = duplicate_content_merge_options.GetSyncURLsAction()
+sync_file_modified_date_action = duplicate_content_merge_options.GetSyncFileModifiedDateAction()
sync_notes_action = duplicate_content_merge_options.GetSyncNotesAction()
self._sync_note_import_options = duplicate_content_merge_options.GetSyncNoteImportOptions()
@@ -1289,14 +1298,17 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

if self._duplicate_action in ( HC.DUPLICATE_ALTERNATE, HC.DUPLICATE_FALSE_POSITIVE ) and not for_custom_action:
    
    self._sync_urls_action.setEnabled( False )
+    self._sync_file_modified_date_action.setEnabled( False )
    self._sync_notes_action.setEnabled( False )
    
    self._sync_urls_action.SetValue( HC.CONTENT_MERGE_ACTION_NONE )
+    self._sync_file_modified_date_action.SetValue( HC.CONTENT_MERGE_ACTION_NONE )
    self._sync_notes_action.SetValue( HC.CONTENT_MERGE_ACTION_NONE )
    
else:
    
    self._sync_urls_action.SetValue( sync_urls_action )
+    self._sync_file_modified_date_action.SetValue( sync_file_modified_date_action )
    self._sync_notes_action.SetValue( sync_notes_action )
@@ -1318,6 +1330,7 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

rows = []

rows.append( ( 'sync archived status?: ', self._sync_archive_action ) )
+rows.append( ( 'sync file modified date?: ', self._sync_file_modified_date_action ) )
rows.append( ( 'sync known urls?: ', self._sync_urls_action ) )
rows.append( ( 'sync notes?: ', self._sync_notes_action ) )
rows.append( ( '', self._sync_note_import_options_button ) )
@@ -1721,6 +1734,7 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

rating_service_actions = [ ( service_key, action ) for ( service_key, action ) in self._service_keys_to_rating_options.items() ]
sync_archive_action = self._sync_archive_action.GetValue()
sync_urls_action = self._sync_urls_action.GetValue()
+sync_file_modified_date_action = self._sync_file_modified_date_action.GetValue()
sync_notes_action = self._sync_notes_action.GetValue()

duplicate_content_merge_options = ClientDuplicates.DuplicateContentMergeOptions()
@@ -1729,6 +1743,7 @@ class EditDuplicateContentMergeOptionsPanel( ClientGUIScrolledPanels.EditPanel )

duplicate_content_merge_options.SetRatingServiceActions( rating_service_actions )
duplicate_content_merge_options.SetSyncArchiveAction( sync_archive_action )
duplicate_content_merge_options.SetSyncURLsAction( sync_urls_action )
+duplicate_content_merge_options.SetSyncFileModifiedDateAction( sync_file_modified_date_action )
duplicate_content_merge_options.SetSyncNotesAction( sync_notes_action )
duplicate_content_merge_options.SetSyncNoteImportOptions( self._sync_note_import_options )
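The hunks above wire the new 'sync file modified date?' duplicate merge action through the options panel. Per the v524 changelog, the default for "they are the same" is that both files end up with the earlier modified date. A sketch of that merge rule (function and return shape are illustrative, not hydrus's real API):

```python
# 'Sync file modified date' under the "they are the same" default: both
# duplicates get the earlier of the two modified dates. None means "no
# modified date known", in which case the other side wins outright.
def merge_modified_dates( timestamp_a, timestamp_b ):
    
    known = [ t for t in ( timestamp_a, timestamp_b ) if t is not None ]
    
    if len( known ) == 0:
        
        return ( None, None )
        
    
    earliest = min( known )
    
    return ( earliest, earliest )
    
```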
@@ -2117,7 +2132,7 @@ class EditFileTimestampsPanel( CAC.ApplicationCommandProcessorMixin, ClientGUISc

self._last_viewed_media_viewer_timestamp = ClientGUITime.DateTimeButton( self, seconds_allowed = True, only_past_dates = True )
self._last_viewed_preview_viewer_timestamp = ClientGUITime.DateTimeButton( self, seconds_allowed = True, only_past_dates = True )

-self._file_modified_timestamp_warning_st = ClientGUICommon.BetterStaticText( self, label = 'This will also change the modified date of the file on disk!' )
+self._file_modified_timestamp_warning_st = ClientGUICommon.BetterStaticText( self, label = 'initialising' )
self._file_modified_timestamp_warning_st.setObjectName( 'HydrusWarning' )
self._file_modified_timestamp_warning_st.setAlignment( QC.Qt.AlignCenter )
self._file_modified_timestamp_warning_st.setVisible( False )
@ -2263,7 +2278,7 @@ class EditFileTimestampsPanel( CAC.ApplicationCommandProcessorMixin, ClientGUISc
|
|||
|
||||
domain = timestamp_data.location
|
||||
|
||||
pretty_timestamp = HydrusData.ConvertTimestampToPrettyTime( timestamp_data.timestamp )
|
||||
pretty_timestamp = HydrusTime.TimestampToPrettyTime( timestamp_data.timestamp )
|
||||
|
||||
display_tuple = ( domain, pretty_timestamp )
|
||||
sort_tuple = ( domain, timestamp_data.timestamp )
|
||||
|
@ -2287,7 +2302,7 @@ class EditFileTimestampsPanel( CAC.ApplicationCommandProcessorMixin, ClientGUISc
|
|||
pretty_timestamp_type = HC.timestamp_type_str_lookup[ timestamp_data.timestamp_type ]
|
||||
sort_timestamp_type = pretty_timestamp_type
|
||||
|
||||
pretty_timestamp = HydrusData.ConvertTimestampToPrettyTime( timestamp_data.timestamp )
|
||||
pretty_timestamp = HydrusTime.TimestampToPrettyTime( timestamp_data.timestamp )
|
||||
|
||||
if timestamp_data.timestamp is None:
|
||||
|
||||
|
@ -2659,7 +2674,26 @@ class EditFileTimestampsPanel( CAC.ApplicationCommandProcessorMixin, ClientGUISc
|
|||
|
||||
def _ShowFileModifiedWarning( self ):
|
||||
|
||||
self._file_modified_timestamp_warning_st.setVisible( True )
|
||||
for timestamp_data in self._GetValidTimestampDatas( only_changes = True ):
|
||||
|
||||
if timestamp_data.timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_FILE and timestamp_data.timestamp is not None:
|
||||
|
||||
self._file_modified_timestamp_warning_st.setVisible( True )
|
||||
|
||||
if HydrusPaths.FileModifiedTimeIsOk( timestamp_data.timestamp ):
|
||||
|
||||
self._file_modified_timestamp_warning_st.setText( 'This will also change the modified date of the file on disk!' )
|
||||
|
||||
else:
|
||||
|
||||
self._file_modified_timestamp_warning_st.setText( 'File modified date on disk will not be changed--the timestamp is too early.' )
|
||||
|
||||
|
||||
return
|
||||
|
||||
|
||||
|
||||
self._file_modified_timestamp_warning_st.setVisible( False )
|
||||
|
||||
|
||||
def GetFileModifiedUpdate( self ) -> typing.Optional[ int ]:
|
||||
|
@ -2668,7 +2702,10 @@ class EditFileTimestampsPanel( CAC.ApplicationCommandProcessorMixin, ClientGUISc
|
|||
|
||||
if timestamp_data.timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_FILE and timestamp_data.timestamp is not None:
|
||||
|
||||
return timestamp_data.timestamp
|
||||
if HydrusPaths.FileModifiedTimeIsOk( timestamp_data.timestamp ):
|
||||
|
||||
return timestamp_data.timestamp
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@@ -16,6 +16,7 @@ from hydrus.core import HydrusPaths
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC
@@ -25,6 +25,7 @@ from hydrus.core import HydrusTags
 from hydrus.core import HydrusTagArchive
 from hydrus.core import HydrusText
 from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientData

@@ -761,7 +762,7 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
        else:
            
-           stop_time = HydrusData.GetNow() + result
+           stop_time = HydrusTime.GetNow() + result
            
        
        job_key = ClientThreading.JobKey( cancellable = True, stop_time = stop_time )

@@ -3080,7 +3081,7 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
        eit = boned_stats[ 'earliest_import_time' ]
        
-       eit_label = 'Earliest file import: {} ({})'.format( HydrusData.ConvertTimestampToPrettyTime( eit ), HydrusData.TimestampToPrettyTimeDelta( eit ) )
+       eit_label = 'Earliest file import: {} ({})'.format( HydrusTime.TimestampToPrettyTime( eit ), HydrusTime.TimestampToPrettyTimeDelta( eit ) )
        
        eit_st = ClientGUICommon.BetterStaticText( panel, label = eit_label )

@@ -3101,11 +3102,11 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ):
        ( media_views, media_viewtime, preview_views, preview_viewtime ) = total_viewtime
        
-       media_label = 'Total media views: ' + HydrusData.ToHumanInt( media_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( media_viewtime )
+       media_label = 'Total media views: ' + HydrusData.ToHumanInt( media_views ) + ', totalling ' + HydrusTime.TimeDeltaToPrettyTimeDelta( media_viewtime )
        
        media_st = ClientGUICommon.BetterStaticText( panel, label = media_label )
        
-       preview_label = 'Total preview views: ' + HydrusData.ToHumanInt( preview_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( preview_viewtime )
+       preview_label = 'Total preview views: ' + HydrusData.ToHumanInt( preview_views ) + ', totalling ' + HydrusTime.TimeDeltaToPrettyTimeDelta( preview_viewtime )
        
        preview_st = ClientGUICommon.BetterStaticText( panel, label = preview_label )

@@ -4021,7 +4022,7 @@ Vacuuming is an expensive operation. It requires lots of free space on your driv
        else:
            
-           pretty_last_vacuumed = HydrusData.TimestampToPrettyTimeDelta( sort_last_vacuumed )
+           pretty_last_vacuumed = HydrusTime.TimestampToPrettyTimeDelta( sort_last_vacuumed )
            
        
        ( result, info ) = self._CanVacuumName( name )

@@ -4043,7 +4044,7 @@ Vacuuming is an expensive operation. It requires lots of free space on your driv
        vacuum_time_estimate = HydrusDB.GetApproxVacuumDuration( db_size )
        
-       pretty_vacuum_time_estimate = '{} to {}'.format( HydrusData.TimeDeltaToPrettyTimeDelta( vacuum_time_estimate / 40 ), HydrusData.TimeDeltaToPrettyTimeDelta( vacuum_time_estimate ) )
+       pretty_vacuum_time_estimate = '{} to {}'.format( HydrusTime.TimeDeltaToPrettyTimeDelta( vacuum_time_estimate / 40 ), HydrusTime.TimeDeltaToPrettyTimeDelta( vacuum_time_estimate ) )
        
        return ( vacuum_time_estimate, pretty_vacuum_time_estimate )
@@ -6,6 +6,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientSerialisable
 from hydrus.client.gui import ClientGUIFunctions
@@ -8,6 +8,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientData

@@ -1282,7 +1283,7 @@ class ShortcutsHandler( QC.QObject ):
        CUMULATIVE_MOUSEWARP_MANHATTAN_LENGTH = 0
        
-       #if event.type() != QC.QEvent.Wheel and self._ignore_activating_mouse_click and not HydrusData.TimeHasPassedPrecise( self._frame_activated_time + 0.017 ):
+       #if event.type() != QC.QEvent.Wheel and self._ignore_activating_mouse_click and not HydrusTime.TimeHasPassedPrecise( self._frame_activated_time + 0.017 ):
        if event.type() != QC.QEvent.Wheel and self._ignore_activating_mouse_click and not self._parent_currently_activated:
            
            if event.type() == QC.QEvent.MouseButtonRelease and self._activating_wait_job is not None:
@@ -8,6 +8,7 @@ from qtpy import QtGui as QG
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.gui import ClientGUIAsync
@@ -9,6 +9,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientParsing

@@ -748,11 +749,11 @@ class EditStringConverterPanel( ClientGUIScrolledPanels.EditPanel ):
            self._data_decoding.addItem( e, e )
            
        
-       self._data_timezone_decode.addItem( 'UTC', HC.TIMEZONE_GMT )
+       self._data_timezone_decode.addItem( 'UTC', HC.TIMEZONE_UTC )
        self._data_timezone_decode.addItem( 'Local', HC.TIMEZONE_LOCAL )
        self._data_timezone_decode.addItem( 'Offset', HC.TIMEZONE_OFFSET )
        
-       self._data_timezone_encode.addItem( 'UTC', HC.TIMEZONE_GMT )
+       self._data_timezone_encode.addItem( 'UTC', HC.TIMEZONE_UTC )
        self._data_timezone_encode.addItem( 'Local', HC.TIMEZONE_LOCAL )
        
        for e in ( 'md5', 'sha1', 'sha256', 'sha512' ):
@@ -5,6 +5,7 @@ from qtpy import QtWidgets as QW
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
+from hydrus.core import HydrusTime
 
 STYLESHEET_DIR = os.path.join( HC.STATIC_DIR, 'qss' )
@@ -12,10 +12,11 @@ from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientData
 from hydrus.client import ClientPaths
 from hydrus.client import ClientTime
 from hydrus.client.gui import ClientGUIAsync
 from hydrus.client.gui import ClientGUIDialogs
 from hydrus.client.gui import ClientGUIDialogsQuick

@@ -455,7 +456,7 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_latest_new_file_time = ClientData.TimestampToPrettyTimeDelta( latest_new_file_time )
+           pretty_latest_new_file_time = ClientTime.TimestampToPrettyTimeDelta( latest_new_file_time )
            
        
        if last_check_time is None or last_check_time == 0:

@@ -464,7 +465,7 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_last_check_time = ClientData.TimestampToPrettyTimeDelta( last_check_time )
+           pretty_last_check_time = ClientTime.TimestampToPrettyTimeDelta( last_check_time )
            
        
        pretty_next_check_time = query_header.GetNextCheckStatusString()

@@ -484,7 +485,7 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_delay = 'bandwidth: ' + HydrusData.TimeDeltaToPrettyTimeDelta( estimate )
+           pretty_delay = 'bandwidth: ' + HydrusTime.TimeDeltaToPrettyTimeDelta( estimate )
            delay = estimate
            
        

@@ -1107,13 +1108,13 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
    def _UpdateDelayText( self ):
        
-       if HydrusData.TimeHasPassed( self._no_work_until ):
+       if HydrusTime.TimeHasPassed( self._no_work_until ):
            
            status = 'no recent errors'
            
        else:
            
-           status = 'delayed--retrying ' + ClientData.TimestampToPrettyTimeDelta( self._no_work_until, just_now_threshold = 0 ) + ' because: ' + self._no_work_until_reason
+           status = 'delayed--retrying ' + ClientTime.TimestampToPrettyTimeDelta( self._no_work_until, just_now_threshold = 0 ) + ' because: ' + self._no_work_until_reason
            
        
        self._delay_st.setText( status )

@@ -1570,7 +1571,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_latest_new_file_time = ClientData.TimestampToPrettyTimeDelta( latest_new_file_time )
+           pretty_latest_new_file_time = ClientTime.TimestampToPrettyTimeDelta( latest_new_file_time )
            
        
        if last_checked is None or last_checked == 0:

@@ -1579,7 +1580,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_last_checked = ClientData.TimestampToPrettyTimeDelta( last_checked )
+           pretty_last_checked = ClientTime.TimestampToPrettyTimeDelta( last_checked )
            
        
        #

@@ -1627,7 +1628,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
        #
        
-       if HydrusData.TimeHasPassed( no_work_until ):
+       if HydrusTime.TimeHasPassed( no_work_until ):
            
            try:

@@ -1640,19 +1641,19 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
            elif min_estimate == 0: # some are good to go, but there are delays
                
-               pretty_delay = 'bandwidth: some ok, some up to ' + HydrusData.TimeDeltaToPrettyTimeDelta( max_estimate )
+               pretty_delay = 'bandwidth: some ok, some up to ' + HydrusTime.TimeDeltaToPrettyTimeDelta( max_estimate )
                delay = max_estimate
                
            else:
                
                if min_estimate == max_estimate: # probably just one query, and it is delayed
                    
-                   pretty_delay = 'bandwidth: up to ' + HydrusData.TimeDeltaToPrettyTimeDelta( max_estimate )
+                   pretty_delay = 'bandwidth: up to ' + HydrusTime.TimeDeltaToPrettyTimeDelta( max_estimate )
                    delay = max_estimate
                    
                else:
                    
-                   pretty_delay = 'bandwidth: from ' + HydrusData.TimeDeltaToPrettyTimeDelta( min_estimate ) + ' to ' + HydrusData.TimeDeltaToPrettyTimeDelta( max_estimate )
+                   pretty_delay = 'bandwidth: from ' + HydrusTime.TimeDeltaToPrettyTimeDelta( min_estimate ) + ' to ' + HydrusTime.TimeDeltaToPrettyTimeDelta( max_estimate )
                    delay = max_estimate
                    

@@ -1665,8 +1666,8 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
        else:
            
-           pretty_delay = 'delayed--retrying ' + ClientData.TimestampToPrettyTimeDelta( no_work_until, just_now_threshold = 0 ) + ' - because: ' + no_work_until_reason
-           delay = HydrusData.GetTimeDeltaUntilTime( no_work_until )
+           pretty_delay = 'delayed--retrying ' + ClientTime.TimestampToPrettyTimeDelta( no_work_until, just_now_threshold = 0 ) + ' - because: ' + no_work_until_reason
+           delay = HydrusTime.GetTimeDeltaUntilTime( no_work_until )
            
        
        file_seed_cache_status = ClientImportSubscriptionQuery.GenerateQueryHeadersStatus( query_headers )
@@ -8,6 +8,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC

@@ -420,7 +421,7 @@ class RelatedTagsPanel( QW.QWidget ):
        num_done_s = '{} {} searched fully in '.format( HydrusData.ConvertValueRangeToPrettyString( num_done, num_to_do ), tags_s )
        
-       label = '{}{}.'.format( num_done_s, HydrusData.TimeDeltaToPrettyTimeDelta( total_time_took ) )
+       label = '{}{}.'.format( num_done_s, HydrusTime.TimeDeltaToPrettyTimeDelta( total_time_took ) )
        
        self._status_label.setText( label )

@@ -431,7 +432,7 @@ class RelatedTagsPanel( QW.QWidget ):
        self._have_done_search_with_this_media = True
        
-       start_time = HydrusData.GetNowPrecise()
+       start_time = HydrusTime.GetNowPrecise()
        
        concurrence_threshold = HG.client_controller.new_options.GetInteger( 'related_tags_concurrence_threshold_percent' ) / 100
        search_tag_slices_weight_dict = { ':' : 1.0, '' : 1.0 }

@@ -477,7 +478,7 @@ class RelatedTagsPanel( QW.QWidget ):
            other_tags_to_exclude = other_tags_to_exclude
        )
        
-       total_time_took = HydrusData.GetNowPrecise() - start_time
+       total_time_took = HydrusTime.GetNowPrecise() - start_time
        
        predicates = ClientSearch.SortPredicates( predicates )

@@ -720,7 +721,7 @@ class FileLookupScriptTagsPanel( QW.QWidget ):
        file_identifier = script.ConvertMediaToFileIdentifier( m )
        
-       stop_time = HydrusData.GetNow() + 30
+       stop_time = HydrusTime.GetNow() + 30
        
        job_key = ClientThreading.JobKey( cancellable = True, stop_time = stop_time )
@@ -18,6 +18,7 @@ from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 from hydrus.core.networking import HydrusNetwork
 
 from hydrus.client import ClientApplicationCommand as CAC
@@ -5,11 +5,14 @@ import typing
 from qtpy import QtCore as QC
 from qtpy import QtWidgets as QW
 
+from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
+from hydrus.client import ClientTime
 from hydrus.client.gui import ClientGUIFunctions
 from hydrus.client.gui import ClientGUIScrolledPanels
 from hydrus.client.gui import ClientGUITopLevelWindowsPanels

@@ -561,12 +564,6 @@ class DateTimeCtrl( QW.QWidget ):
        qt_datetime = QC.QDateTime( qt_date, qt_time )
        
        now = QC.QDateTime.currentDateTime()
-       epoch = QC.QDateTime.fromSecsSinceEpoch( 0 )
-       
-       if qt_datetime < epoch:
-           
-           qt_datetime = epoch
-           
        
        if self._only_past_dates and qt_datetime > now:

@@ -638,7 +635,7 @@ class TimeDeltaButton( QW.QPushButton ):
        else:
            
-           text = HydrusData.TimeDeltaToPrettyTimeDelta( value )
+           text = HydrusTime.TimeDeltaToPrettyTimeDelta( value )
            
        
        self.setText( text )

@@ -896,6 +893,202 @@ class TimeDeltaCtrl( QW.QWidget ):
 
 
+class TimestampDataStubCtrl( QW.QWidget ):
+    
+    valueChanged = QC.Signal()
+    
+    def __init__( self, parent, timestamp_data_stub = None ):
+        
+        QW.QWidget.__init__( self, parent )
+        
+        if timestamp_data_stub is None:
+            
+            timestamp_data_stub = ClientTime.TimestampData.STATICSimpleStub( HC.TIMESTAMP_TYPE_ARCHIVED )
+            
+        
+        #
+        
+        self._timestamp_type = ClientGUICommon.BetterChoice( self )
+        
+        for timestamp_type in [ HC.TIMESTAMP_TYPE_ARCHIVED, HC.TIMESTAMP_TYPE_MODIFIED_FILE, HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN, HC.TIMESTAMP_TYPE_IMPORTED, HC.TIMESTAMP_TYPE_DELETED, HC.TIMESTAMP_TYPE_PREVIOUSLY_IMPORTED, HC.TIMESTAMP_TYPE_LAST_VIEWED ]:
+            
+            label = HC.timestamp_type_str_lookup[ timestamp_type ]
+            
+            self._timestamp_type.addItem( label, timestamp_type )
+            
+        
+        #
+        
+        self._current_file_service = ClientGUICommon.BetterChoice( self )
+        
+        for service in HG.client_controller.services_manager.GetServices( HC.FILE_SERVICES ):
+            
+            self._current_file_service.addItem( service.GetName(), service.GetServiceKey() )
+            
+        
+        #
+        
+        self._deleted_file_service = ClientGUICommon.BetterChoice( self )
+        
+        for service in HG.client_controller.services_manager.GetServices( HC.FILE_SERVICES_WITH_DELETE_RECORD ):
+            
+            self._deleted_file_service.addItem( service.GetName(), service.GetServiceKey() )
+            
+        
+        #
+        
+        self._canvas_type = ClientGUICommon.BetterChoice( self )
+        
+        for canvas_type in [ CC.CANVAS_MEDIA_VIEWER, CC.CANVAS_PREVIEW ]:
+            
+            self._canvas_type.addItem( CC.canvas_type_str_lookup[ canvas_type ], canvas_type )
+            
+        
+        #
+        
+        self._domain_panel = QW.QWidget( self )
+        
+        self._domain = QW.QLineEdit( self._domain_panel )
+        
+        rows = []
+        
+        rows.append( ( 'domain: ', self._domain ) )
+        
+        gridbox = ClientGUICommon.WrapInGrid( self._domain_panel, rows )
+        
+        self._domain_panel.setLayout( gridbox )
+        
+        #
+        
+        vbox = QP.VBoxLayout()
+        
+        QP.AddToLayout( vbox, self._timestamp_type, CC.FLAGS_EXPAND_PERPENDICULAR )
+        QP.AddToLayout( vbox, self._current_file_service, CC.FLAGS_EXPAND_PERPENDICULAR )
+        QP.AddToLayout( vbox, self._deleted_file_service, CC.FLAGS_EXPAND_PERPENDICULAR )
+        QP.AddToLayout( vbox, self._canvas_type, CC.FLAGS_EXPAND_PERPENDICULAR )
+        QP.AddToLayout( vbox, self._domain_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
+        
+        vbox.addStretch( 1 )
+        
+        self.setLayout( vbox )
+        
+        self._SetValue( timestamp_data_stub )
+        
+        self._TimestampTypeChanged()
+        
+        self._timestamp_type.currentIndexChanged.connect( self._TimestampTypeChanged )
+        
+    
+    def _SetValue( self, timestamp_data_stub: ClientTime.TimestampData ):
+        
+        timestamp_type = timestamp_data_stub.timestamp_type
+        
+        self._timestamp_type.SetValue( timestamp_type )
+        
+        location = timestamp_data_stub.location
+        
+        if timestamp_type in ClientTime.FILE_SERVICE_TIMESTAMP_TYPES:
+            
+            if timestamp_type == HC.TIMESTAMP_TYPE_IMPORTED:
+                
+                self._current_file_service.SetValue( location )
+                
+            else:
+                
+                self._deleted_file_service.SetValue( location )
+                
+            
+        elif timestamp_type == HC.TIMESTAMP_TYPE_LAST_VIEWED:
+            
+            self._canvas_type.SetValue( location )
+            
+        elif timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN:
+            
+            self._domain.setText( location )
+            
+        
+    
+    def _TimestampTypeChanged( self ):
+        
+        self._current_file_service.setVisible( False )
+        self._deleted_file_service.setVisible( False )
+        self._canvas_type.setVisible( False )
+        self._domain_panel.setVisible( False )
+        
+        timestamp_type = self._timestamp_type.GetValue()
+        
+        if timestamp_type in ClientTime.FILE_SERVICE_TIMESTAMP_TYPES:
+            
+            if timestamp_type == HC.TIMESTAMP_TYPE_IMPORTED:
+                
+                self._current_file_service.setVisible( True )
+                
+            else:
+                
+                self._deleted_file_service.setVisible( True )
+                
+            
+        elif timestamp_type == HC.TIMESTAMP_TYPE_LAST_VIEWED:
+            
+            self._canvas_type.setVisible( True )
+            
+        elif timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN:
+            
+            self._domain_panel.setVisible( True )
+            
+        
+    
+    def GetValue( self ) -> ClientTime.TimestampData:
+        
+        timestamp_type = self._timestamp_type.GetValue()
+        
+        if timestamp_type in ClientTime.SIMPLE_TIMESTAMP_TYPES:
+            
+            timestamp_data_stub = ClientTime.TimestampData.STATICSimpleStub( timestamp_type )
+            
+        else:
+            
+            if timestamp_type in ClientTime.FILE_SERVICE_TIMESTAMP_TYPES:
+                
+                if timestamp_type == HC.TIMESTAMP_TYPE_IMPORTED:
+                    
+                    location = self._current_file_service.GetValue()
+                    
+                else:
+                    
+                    location = self._deleted_file_service.GetValue()
+                    
+                
+            elif timestamp_type == HC.TIMESTAMP_TYPE_LAST_VIEWED:
+                
+                location = self._canvas_type.GetValue()
+                
+            elif timestamp_type == HC.TIMESTAMP_TYPE_MODIFIED_DOMAIN:
+                
+                location = self._domain.text()
+                
+                if location == '':
+                    
+                    raise HydrusExceptions.VetoException( 'You have to enter a domain!' )
+                    
+                
+            else:
+                
+                raise HydrusExceptions.VetoException( 'Unknown timestamp type!' )
+                
+            
+            timestamp_data_stub = ClientTime.TimestampData( timestamp_type = timestamp_type, location = location )
+            
+        
+        return timestamp_data_stub
+        
+    
+    def SetValue( self, timestamp_data_stub: ClientTime.TimestampData ):
+        
+        self._SetValue( timestamp_data_stub )
+        
+    
+
 class VelocityCtrl( QW.QWidget ):
     
     velocityChanged = QC.Signal()
@@ -6,6 +6,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client.gui import ClientGUIFunctions
 from hydrus.client.gui import ClientGUIShortcuts

@@ -604,9 +605,9 @@ class Frame( QW.QWidget ):
 
 
 class MainFrame( QW.QMainWindow ):
     
     def __init__( self, parent, title ):
         
         QW.QMainWindow.__init__( self, parent )
         
         self.setWindowTitle( title )
@@ -15,6 +15,8 @@ from collections import defaultdict
 from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusProfiling
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.gui import QtInit

@@ -1291,7 +1293,7 @@ class CallAfterEventCatcher( QC.QObject ):
            summary = 'Profiling CallAfter Event: {}'.format( event._fn )
            
-           HydrusData.Profile( summary, 'event.Execute()', globals(), locals(), min_duration_ms = HG.callto_profile_min_job_time_ms )
+           HydrusProfiling.Profile( summary, 'event.Execute()', globals(), locals(), min_duration_ms = HG.callto_profile_min_job_time_ms )
            
        else:
@@ -8,8 +8,10 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
 from hydrus.core import HydrusPaths
 from hydrus.core import HydrusTags
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC

@@ -39,6 +41,7 @@ from hydrus.client.gui import QtPorting as QP
 from hydrus.client.gui.canvas import ClientGUICanvasHoverFrames
 from hydrus.client.gui.canvas import ClientGUICanvasMedia
 from hydrus.client.media import ClientMedia
+from hydrus.client.media import ClientMediaFileFilter
 from hydrus.client.metadata import ClientRatings
 from hydrus.client.metadata import ClientTags
 from hydrus.client.metadata import ClientTagSorting

@@ -311,7 +314,7 @@ class Canvas( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
        self._background_colour_generator = CanvasBackgroundColourGenerator()
        
-       self._current_media_start_time = HydrusData.GetNow()
+       self._current_media_start_time = HydrusTime.GetNow()
        
        self._new_options = HG.client_controller.new_options

@@ -685,7 +688,7 @@ class Canvas( CAC.ApplicationCommandProcessorMixin, QW.QWidget ):
    def _SaveCurrentMediaViewTime( self ):
        
-       now = HydrusData.GetNow()
+       now = HydrusTime.GetNow()
        
        view_timestamp = self._current_media_start_time

@@ -2077,7 +2080,7 @@ class CanvasWithHovers( CanvasWithDetails ):
        #
        
        self._timer_cursor_hide_job = None
-       self._last_cursor_autohide_touch_time = HydrusData.GetNowFloat()
+       self._last_cursor_autohide_touch_time = HydrusTime.GetNowFloat()
        
        # need this as we need un-button-pressed move events for cursor hide
        self.setMouseTracking( True )

@@ -2106,7 +2109,7 @@ class CanvasWithHovers( CanvasWithDetails ):
        hide_time = hide_time_ms / 1000
        
-       can_hide = HydrusData.TimeHasPassedFloat( self._last_cursor_autohide_touch_time + hide_time )
+       can_hide = HydrusTime.TimeHasPassedFloat( self._last_cursor_autohide_touch_time + hide_time )
        
        can_check_again = ClientGUIFunctions.MouseIsOverWidget( self )

@@ -2139,7 +2142,7 @@ class CanvasWithHovers( CanvasWithDetails ):
    def _RestartCursorHideWait( self ):
        
-       self._last_cursor_autohide_touch_time = HydrusData.GetNowFloat()
+       self._last_cursor_autohide_touch_time = HydrusTime.GetNowFloat()
        
        self._RestartCursorHideCheckJob()

@@ -3646,7 +3649,7 @@ def CommitArchiveDelete( page_key: bytes, location_context: ClientLocation.Locat
    deletee_file_service_keys = [ CC.COMBINED_LOCAL_MEDIA_SERVICE_KEY ]
    
-   for block_of_deleted in HydrusData.SplitListIntoChunks( deleted, 64 ):
+   for block_of_deleted in HydrusLists.SplitListIntoChunks( deleted, 64 ):
        
        service_keys_to_content_updates = {}

@@ -3673,7 +3676,7 @@ def CommitArchiveDelete( page_key: bytes, location_context: ClientLocation.Locat
    HG.client_controller.WaitUntilViewFree()
    
-   for block_of_kept_hashes in HydrusData.SplitListIntoChunks( kept_hashes, 64 ):
+   for block_of_kept_hashes in HydrusLists.SplitListIntoChunks( kept_hashes, 64 ):
        
        service_keys_to_content_updates = {}

@@ -3732,7 +3735,7 @@ class CanvasMediaListFilterArchiveDelete( CanvasMediaList ):
        kept = list( self._kept )
        
-       deleted = ClientMedia.FilterAndReportDeleteLockFailures( self._deleted )
+       deleted = ClientMediaFileFilter.FilterAndReportDeleteLockFailures( self._deleted )
        
        if len( kept ) > 0 or len( deleted ) > 0:
@@ -8,6 +8,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC
@@ -11,6 +11,7 @@ from hydrus.core import HydrusFileHandling
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusPaths
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientApplicationCommand as CAC
 from hydrus.client import ClientConstants as CC

@@ -378,7 +379,7 @@ class Animation( QW.QWidget ):
        self._current_frame_index = 0
        self._current_frame_drawn = False
        self._current_timestamp_ms = None
-       self._next_frame_due_at = HydrusData.GetNowPrecise()
+       self._next_frame_due_at = HydrusTime.GetNowPrecise()
        self._slow_frame_score = 1.0
        
        self._paused = True

@@ -511,9 +512,9 @@ class Animation( QW.QWidget ):
        next_frame_ideally_due = self._next_frame_due_at + next_frame_time_s
        
-       if HydrusData.TimeHasPassedPrecise( next_frame_ideally_due ):
+       if HydrusTime.TimeHasPassedPrecise( next_frame_ideally_due ):
            
-           self._next_frame_due_at = HydrusData.GetNowPrecise() + next_frame_time_s
+           self._next_frame_due_at = HydrusTime.GetNowPrecise() + next_frame_time_s
            
        else:

@@ -623,7 +624,7 @@ class Animation( QW.QWidget ):
        self._current_frame_index = frame_index
        self._current_timestamp_ms = None
        
-       self._next_frame_due_at = HydrusData.GetNowPrecise()
+       self._next_frame_due_at = HydrusTime.GetNowPrecise()
        
        self._video_container.GetReadyForFrame( self._current_frame_index )

@@ -815,7 +816,7 @@ class Animation( QW.QWidget ):
        self._current_frame_index = int( ( self._num_frames - 1 ) * HC.options[ 'animation_start_position' ] )
        self._current_frame_drawn = False
        self._current_timestamp_ms = None
-       self._next_frame_due_at = HydrusData.GetNowPrecise()
+       self._next_frame_due_at = HydrusTime.GetNowPrecise()
        self._slow_frame_score = 1.0
        
        self._paused = start_paused

@@ -856,7 +857,7 @@ class Animation( QW.QWidget ):
        if self._current_frame_drawn:
            
-           if not self._paused and HydrusData.TimeHasPassedPrecise( self._next_frame_due_at ):
+           if not self._paused and HydrusTime.TimeHasPassedPrecise( self._next_frame_due_at ):
                
                num_frames = self._media.GetNumFrames()

@@ -1113,7 +1114,7 @@ class AnimationBar( QW.QWidget ):
        if current_timestamp_ms is not None:
            
-           progress_strings.append( HydrusData.ConvertValueRangeToScanbarTimestampsMS( current_timestamp_ms, self._duration_ms ) )
+           progress_strings.append( HydrusTime.ValueRangeToScanbarTimestampsMS( current_timestamp_ms, self._duration_ms ) )
            
        
        s = ' - '.join( progress_strings )
@@ -11,6 +11,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusImageHandling
 from hydrus.core import HydrusPaths
+from hydrus.core import HydrusTime
 from hydrus.core import HydrusVideoHandling
 
 from hydrus.client import ClientApplicationCommand as CAC
@@ -10,7 +10,10 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusLists
 from hydrus.core import HydrusPaths
+from hydrus.core import HydrusThreading
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientLocation
@@ -30,6 +33,7 @@ from hydrus.client.gui.metadata import ClientGUIMetadataMigration
 from hydrus.client.gui.search import ClientGUIACDropdown
 from hydrus.client.gui.widgets import ClientGUICommon
 from hydrus.client.media import ClientMedia
+from hydrus.client.media import ClientMediaFileFilter
 from hydrus.client.metadata import ClientMetadataMigrationExporters
 from hydrus.client.metadata import ClientMetadataMigrationImporters
 from hydrus.client.metadata import ClientTags
@@ -136,7 +140,7 @@ class EditExportFoldersPanel( ClientGUIScrolledPanels.EditPanel ):
     
     def _ConvertExportFolderToListCtrlTuples( self, export_folder: ClientExportingFiles.ExportFolder ):
         
-        ( name, path, export_type, delete_from_client_after_export, export_symlinks, file_search_context, run_regularly, period, phrase, last_checked, paused, run_now ) = export_folder.ToTuple()
+        ( name, path, export_type, delete_from_client_after_export, export_symlinks, file_search_context, run_regularly, period, phrase, last_checked, run_now ) = export_folder.ToTuple()
         
         pretty_export_type = 'regular'
         
@@ -154,7 +158,7 @@ class EditExportFoldersPanel( ClientGUIScrolledPanels.EditPanel ):
         
         if run_regularly:
             
-            pretty_period = HydrusData.TimeDeltaToPrettyTimeDelta( period )
+            pretty_period = HydrusTime.TimeDeltaToPrettyTimeDelta( period )
             
         else:
             
@@ -166,22 +170,13 @@ class EditExportFoldersPanel( ClientGUIScrolledPanels.EditPanel ):
             pretty_period += ' (running after dialog ok)'
             
         
-        if paused:
-            
-            pretty_paused = 'yes'
-            
-        else:
-            
-            pretty_paused = ''
-            
-        
         pretty_phrase = phrase
         
         last_error = export_folder.GetLastError()
         
-        display_tuple = ( name, path, pretty_export_type, pretty_file_search_context, pretty_paused, pretty_period, pretty_phrase, last_error )
+        display_tuple = ( name, path, pretty_export_type, pretty_file_search_context, pretty_period, pretty_phrase, last_error )
         
-        sort_tuple = ( name, path, pretty_export_type, pretty_file_search_context, paused, period, phrase, last_error )
+        sort_tuple = ( name, path, pretty_export_type, pretty_file_search_context, period, phrase, last_error )
         
         return ( display_tuple, sort_tuple )
         
@@ -244,7 +239,7 @@ class EditExportFolderPanel( ClientGUIScrolledPanels.EditPanel ):
         
         self._export_folder = export_folder
         
-        ( name, path, export_type, delete_from_client_after_export, export_symlinks, file_search_context, run_regularly, period, phrase, self._last_checked, paused, run_now ) = self._export_folder.ToTuple()
+        ( name, path, export_type, delete_from_client_after_export, export_symlinks, file_search_context, run_regularly, period, phrase, self._last_checked, run_now ) = self._export_folder.ToTuple()
         
         self._path_box = ClientGUICommon.StaticBox( self, 'name and location' )
         
@@ -279,10 +274,10 @@ class EditExportFolderPanel( ClientGUIScrolledPanels.EditPanel ):
         
         self._run_regularly = QW.QCheckBox( self._period_box )
         
-        self._paused = QW.QCheckBox( self._period_box )
-        
         self._run_now = QW.QCheckBox( self._period_box )
         
+        self._show_working_popup = QW.QCheckBox( self._period_box )
+        
         #
         
         self._query_box = ClientGUICommon.StaticBox( self, 'query to export' )
@@ -304,7 +299,7 @@ class EditExportFolderPanel( ClientGUIScrolledPanels.EditPanel ):
         self._metadata_routers_box = ClientGUICommon.StaticBox( self, 'sidecar exporting' )
         
         metadata_routers = export_folder.GetMetadataRouters()
-        allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTags, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaNotes, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaURLs ]
+        allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTags, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaNotes, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaURLs, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTimestamps ]
         allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterTXT, ClientMetadataMigrationExporters.SingleFileMetadataExporterJSON ]
         
         self._metadata_routers_button = ClientGUIMetadataMigration.SingleFileMetadataRoutersButton( self._metadata_routers_box, metadata_routers, allowed_importer_classes, allowed_exporter_classes )
@@ -325,10 +320,10 @@ class EditExportFolderPanel( ClientGUIScrolledPanels.EditPanel ):
         
         self._run_regularly.setChecked( run_regularly )
         
-        self._paused.setChecked( paused )
-        
         self._run_now.setChecked( run_now )
         
+        self._show_working_popup.setChecked( export_folder.ShowWorkingPopup() )
+        
         self._pattern.setText( phrase )
         
         #
@@ -367,12 +362,11 @@ If you select synchronise, be careful!'''
         
         self._query_box.Add( self._tag_autocomplete )
         
-        self._period_box.Add( self._period, CC.FLAGS_EXPAND_PERPENDICULAR )
-        
         rows = []
         
         rows.append( ( 'run regularly?: ', self._run_regularly ) )
-        rows.append( ( 'paused: ', self._paused ) )
+        rows.append( self._period )
+        rows.append( ( 'show popup when working regularly?: ', self._show_working_popup ) )
         rows.append( ( 'run on dialog ok: ', self._run_now ) )
         
         gridbox = ClientGUICommon.WrapInGrid( self._period_box, rows )
@@ -400,9 +394,19 @@ If you select synchronise, be careful!'''
         self.widget().setLayout( vbox )
         
         self._UpdateTypeDeleteUI()
+        self._UpdateRunRegularly()
         
         self._type.currentIndexChanged.connect( self._UpdateTypeDeleteUI )
         self._delete_from_client_after_export.clicked.connect( self.EventDeleteFilesAfterExport )
+        self._run_regularly.clicked.connect( self._UpdateRunRegularly )
         
     
+    def _UpdateRunRegularly( self ):
+        
+        run_regularly = self._run_regularly.isChecked()
+        
+        self._period.setEnabled( run_regularly )
+        self._show_working_popup.setEnabled( run_regularly )
+        
+    
     def _UpdateTypeDeleteUI( self ):
         
@@ -485,10 +489,10 @@ If you select synchronise, be careful!'''
         
         run_now = self._run_now.isChecked()
         
-        paused = self._paused.isChecked()
-        
         last_error = self._export_folder.GetLastError()
         
+        show_working_popup = self._show_working_popup.isChecked()
+        
         export_folder = ClientExportingFiles.ExportFolder(
             name,
             path = path,
@@ -501,9 +505,9 @@ If you select synchronise, be careful!'''
             period = period,
             phrase = phrase,
             last_checked = self._last_checked,
-            paused = paused,
             run_now = run_now,
-            last_error = last_error
+            last_error = last_error,
+            show_working_popup = show_working_popup
         )
         
         return export_folder
@@ -566,7 +570,7 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
         
         
         metadata_routers = new_options.GetDefaultExportFilesMetadataRouters()
-        allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTags, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaNotes, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaURLs ]
+        allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTags, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaNotes, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaURLs, ClientMetadataMigrationImporters.SingleFileMetadataImporterMediaTimestamps ]
         allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterTXT, ClientMetadataMigrationExporters.SingleFileMetadataExporterJSON ]
         
         self._metadata_routers_button = ClientGUIMetadataMigration.SingleFileMetadataRoutersButton( self, metadata_routers, allowed_importer_classes, allowed_exporter_classes )
@@ -807,7 +811,7 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
         
         HG.client_controller.pub( 'message', job_key )
         
-        pauser = HydrusData.BigJobPauser()
+        pauser = HydrusThreading.BigJobPauser()
         
         for ( index, ( media, dest_path ) ) in enumerate( to_do ):
             
@@ -888,11 +892,11 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ):
         
         possible_deletee_medias = { media for ( media, path ) in to_do }
        
-        deletee_medias = ClientMedia.FilterAndReportDeleteLockFailures( possible_deletee_medias )
+        deletee_medias = ClientMediaFileFilter.FilterAndReportDeleteLockFailures( possible_deletee_medias )
        
         local_file_service_keys = HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_FILE_DOMAIN, ) )
        
-        chunks_of_deletee_medias = HydrusData.SplitListIntoChunks( list( deletee_medias ), 64 )
+        chunks_of_deletee_medias = HydrusLists.SplitListIntoChunks( list( deletee_medias ), 64 )
        
         for chunk_of_deletee_medias in chunks_of_deletee_medias:
            
@@ -11,9 +11,11 @@ from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusTags
 from hydrus.core import HydrusText
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client import ClientData
+from hydrus.client import ClientTime
 from hydrus.client.gui import ClientGUIDialogs
 from hydrus.client.gui import ClientGUIDialogsQuick
 from hydrus.client.gui import ClientGUIFileSeedCache
@@ -1023,7 +1025,7 @@ class EditLocalImportFilenameTaggingPanel( ClientGUIScrolledPanels.EditPanel ):
         self._paths_list = ClientGUIListCtrl.BetterListCtrl( self, CGLC.COLUMN_LIST_PATHS_TO_TAGS.ID, 10, self._ConvertDataToListCtrlTuples )
         
         allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterTXT, ClientMetadataMigrationImporters.SingleFileMetadataImporterJSON ]
-        allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaNotes, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs ]
+        allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaNotes, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTimestamps ]
         
         self._metadata_routers_panel = ClientGUIMetadataMigration.SingleFileMetadataRoutersControl( self, metadata_routers, allowed_importer_classes, allowed_exporter_classes )
         
@@ -1889,13 +1891,13 @@ class WatcherReviewPanel( ClientGUICommon.StaticBox ):
             
             if watcher_status == '' and next_check_time is not None:
                 
-                if HydrusData.TimeHasPassed( next_check_time ):
+                if HydrusTime.TimeHasPassed( next_check_time ):
                     
                     watcher_status = 'checking imminently'
                     
                 else:
                     
-                    watcher_status = 'next check ' + ClientData.TimestampToPrettyTimeDelta( next_check_time, just_now_threshold = 0 )
+                    watcher_status = 'next check ' + ClientTime.TimestampToPrettyTimeDelta( next_check_time, just_now_threshold = 0 )
                     
                 
            
@@ -6,6 +6,7 @@ from hydrus.core import HydrusConstants as HC
 from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.gui import ClientGUIDialogsQuick
@@ -109,7 +110,7 @@ class EditImportFoldersPanel( ClientGUIScrolledPanels.EditPanel ):
         
         else:
             
-            pretty_check_period = HydrusData.TimeDeltaToPrettyTimeDelta( check_period )
+            pretty_check_period = HydrusTime.TimeDeltaToPrettyTimeDelta( check_period )
             
         
         sort_tuple = ( name, path, paused, check_period )
@@ -258,7 +259,7 @@ class EditImportFolderPanel( ClientGUIScrolledPanels.EditPanel ):
         
         metadata_routers = self._import_folder.GetMetadataRouters()
         allowed_importer_classes = [ ClientMetadataMigrationImporters.SingleFileMetadataImporterTXT, ClientMetadataMigrationImporters.SingleFileMetadataImporterJSON ]
-        allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaNotes, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs ]
+        allowed_exporter_classes = [ ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaNotes, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTimestamps ]
         
         self._metadata_routers_button = ClientGUIMetadataMigration.SingleFileMetadataRoutersButton( self, metadata_routers, allowed_importer_classes, allowed_exporter_classes )
        
@@ -9,6 +9,7 @@ from hydrus.core import HydrusData
 from hydrus.core import HydrusExceptions
 from hydrus.core import HydrusGlobals as HG
 from hydrus.core import HydrusSerialisable
+from hydrus.core import HydrusTime
 
 from hydrus.client import ClientConstants as CC
 from hydrus.client.gui import ClientGUICore as CGC
Some files were not shown because too many files have changed in this diff.