diff --git a/docs/Fixing_Hydrus_Random_Crashes_Under_Linux.md b/docs/Fixing_Hydrus_Random_Crashes_Under_Linux.md index bf291157..f28c1d63 100644 --- a/docs/Fixing_Hydrus_Random_Crashes_Under_Linux.md +++ b/docs/Fixing_Hydrus_Random_Crashes_Under_Linux.md @@ -72,7 +72,11 @@ Swapiness is a setting you might have seen, but it only determines Linux's desir ## Why does my Linux system studder or become unresponsive when hydrus has been running a while? -You are running out of pages because Linux releases I/O buffer pages only when a file is closed. Thus the OS is waiting for you to hit the watermark(as described in "why is hydrus crashing") to start freeing pages, which causes the chug. When contents is written from memory to disk the page is retained so that if you reread that part of the disk the OS does not need to access disk it just pulls it from the much faster memory. This is usually a good thing, but Hydrus does not close database files so it eats up pages over time. This is really good for hydrus but sucks for the responsiveness of other apps, and will cause hydrus to consume pages after doing a lengthy operation in anticipation of needing them again, even when it is thereafter idle. You need to set `vm.dirtytime_expire_seconds` to a lower value. +You are running out of pages because Linux releases I/O buffer pages only when a file is closed, OR memory fragmentation in Hydrus is high because you have a big session weight or had a big I/O spike. Thus the OS is waiting for you to hit the watermark (as described in "why is hydrus crashing") to start freeing pages, which causes the chug. + +When content is written from memory to disk, the page is retained so that if you reread that part of the disk, the OS does not need to access the disk; it just pulls it from the much faster memory. This is usually a good thing, but Hydrus makes many small writes to files you probably won't be asking for again soon, so it eats up pages over time. + +Hydrus also holds the database open and reads/writes new areas of it often, even if it will not access those parts again for ages. It tends to accumulate lots of I/O cache for these small pages it will not be interested in. This is really good for hydrus (because it will over time have the root of the most important indexes in memory) but sucks for the responsiveness of other apps, and will cause hydrus to consume pages after doing a lengthy operation in anticipation of needing them again, even when it is thereafter idle. You need to set `vm.dirtytime_expire_seconds` to a lower value. > `vm.dirtytime_expire_seconds` > When a lazytime inode is constantly having its pages dirtied, the inode with @@ -89,24 +93,30 @@ https://www.kernel.org/doc/Documentation/sysctl/vm.txt -## Why does everything become clunky for a bit if I have tuned all of the above settings? +## Why does everything become clunky for a bit if I have tuned all of the above settings? (especially if I try to do something on the system that isn't hydrus) -The kernel launches a process called `kswapd` to swap and reclaim memory pages, its behaviour is goverened by the following two values +The kernel launches a process called `kswapd` to swap and reclaim memory pages. After hydrus has used pages, they need to be returned to the OS (unless fragmentation is preventing this). The OS needs to scan for pages allocated to programs which are not in use; it doesn't do this all the time, because holding the required locks would have a serious performance impact.
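Before you tune anything, it is worth checking whether reclaim is actually your bottleneck. A minimal sketch for doing so (counter names vary a little between kernel versions, and the `1800` below is an illustrative value, not a recommendation):

```sh
# Rapidly climbing allocstall* counters while hydrus runs mean processes are
# being pushed into direct reclaim, i.e. free pages are running out faster
# than kswapd can replenish them.
grep -E 'kswapd|allocstall' /proc/vmstat

# Trial a lower dirtytime expiry at runtime before persisting it in
# /etc/sysctl.conf.
sudo sysctl -w vm.dirtytime_expire_seconds=1800
```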
The behaviour of `kswapd` is governed by several important values. If you are using a classic system with a reasonably sized amount of memory and a swapfile, you should tune these. If you are using memory compression (or should be using memory compression because you have a cheap system), read this whole document for info specific to that configuration. -- `vm.vfs_cache_pressure` The tendancy for the kernel to reclaim I/O cache for files and directories. Default=100, set to 110 to bias the kernel into reclaiming I/O pages over keeping them at a "fair rate" compared to other pages. Hydrus tends to write a lot of files and then ignore them for a long time, so its a good idea to prefer freeing pages for infrequent I/O. -**Note**: Increasing `vfs_cache_pressure` significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With `vfs_cache_pressure=1000`, it will look for ten times more freeable objects than there are. - -- `watermark_scale_factor` +- `vm.watermark_scale_factor` This factor controls the aggressiveness of kswapd. It defines the amount of memory left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep. The unit is in fractions of 10,000. The default value of 10 means the distances between watermarks are 0.1% of the available memory in the node/system. The maximum value is 1000, or 10% of memory. A high rate of threads entering direct reclaim (allocstall) or kswapd going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that the number of free pages kswapd maintains for latency reasons is too small for the allocation bursts occurring in the system. This knob can then be used to tune kswapd aggressiveness accordingly. +- `vm.watermark_boost_factor`: If memory fragmentation is high, raise the scale factor to look for reclaimable/swappable pages more aggressively. + I like to keep `watermark_scale_factor` at 70 (70/10,000)=0.7%, so kswapd will run until at least 0.7% of system memory has been reclaimed, i.e. with 32GiB of memory (real and virtual), it will try to keep at least 0.224GiB immediately available. +- `vm.dirty_ratio`: The absolute maximum amount of un-synced memory (as a percentage of available memory) that the system will buffer before blocking writing processes. This **protects you against OOM, but does not keep your system responsive**. + - **Note**: A default installation of Ubuntu sets this way too high (60%), as it does not expect your workload to just be hammering possibly slow disks with written pages. **Even with memory overcommitting this can make you OOM**, because you will run out of real memory before the system pauses the program that is writing so hard. A more reasonable value is 10 (10%). + +- `vm.dirty_background_ratio`: The number of unsynced pages that can exist before the system starts committing them in the background. If this is set too low, the system will constantly spend cycles trying to write out dirty pages. If it is set too high, it will be way too lazy. I like to set it to 8. + +- `vm.vfs_cache_pressure` The tendency for the kernel to reclaim I/O cache for files and directories. This is less important than the other values, but hydrus opens and closes lots of file handles, so you may want to boost it a bit higher than default. Default=100, set to 110 to bias the kernel into reclaiming I/O pages over keeping them at a "fair rate" compared to other pages.
Hydrus tends to write a lot of files and then ignore them for a long time, so it's a good idea to prefer freeing pages for infrequent I/O. +**Note**: Increasing `vfs_cache_pressure` significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With `vfs_cache_pressure=1000`, it will look for ten times more freeable objects than there are. ### Virtual Memory Under Linux 4: Unleash the memory -An example /etc/sysctl.conf section for virtual memory settings. +An example `/etc/sysctl.conf` section for virtual memory settings. ```ini ######## @@ -134,3 +144,63 @@ vm.watermark_scale_factor=70 #Don't set this value much over 100 or the kernel will spend all its time reclaiming I/O pages vm.vfs_cache_pressure=110 ``` + +## Virtual Memory Under Linux 5: Phenomenal Cosmic Power; Itty bitty living space +Are you trying to __run hydrus on a 200 dollar miniPC__? This is surprisingly doable, but you will need to really understand what you are tuning. + +To start, let's explain memory tiers. As memory moves further away from the CPU, it becomes slower. Memory close to the CPU is volatile, which means if you remove power from it, it disappears forever. Conversely, disk is called non-volatile memory, or persistent storage. We want to get files written to non-volatile storage, and we don't want to have to compete to read non-volatile storage; we would also prefer not to have to compete for writing, but this is harder. + +The most straightforward way of doing this is to separate where hydrus writes its SQLite database (index) files from where it writes the imported files. But we can make a more flexible setup that will also keep our system responsive; we just need to make sure that the system writes to the fastest possible place first. So let's illustrate the options. + +```mermaid + graph + direction LR + CPU-->RAM; + RAM-->ZRAM; + ZRAM-->SSD; + SSD-->HDD; + subgraph Non-Volatile + SSD; + HDD; + end +``` + +1. **RAM**: Information must be in RAM for it to be operated on. +2. **ZRAM**: A compressed area of RAM that cannot be directly accessed. Like a zip file, but in memory; or, for the more technical, like a compressed ramdisk. +3. **SSD**: Fast non-volatile storage, good for random access, about 100-1000x slower than RAM. +4. **HDD**: Slow non-volatile storage, poor for random access. About 10000x slower than RAM. +5. **Tape** (not shown): Slow archival or backup storage. Surprisingly fast actually, but it can only be accessed sequentially. + +The objective is to make the most of our limited hardware, so we definitely want to go through ZRAM first. Depending on your configuration you might have bulk storage (a NAS) downstream that you can write the files to; if all of your storage is in the same tower you are running hydrus on, then make sure the SQLite .db files are on an SSD volume. + +Next you should enable [ZRAM](https://wiki.archlinux.org/title/Zram) devices (__**not to be confused with zswap**__). A ZRAM device is a compressed swapfile that lives in RAM. + +ZRAM can drastically improve performance and effective RAM capacity. Experimentally, a 1.7GB partition usually shrinks to around 740MiB. Depending on your system, ZRAM may generate several partitions. The author asked for 4x2GB=8GiB partitions, hence the cited ratio. + +**ZRAM must be created on every boot, as RAM-disks are lost when power is removed.** Install a zram generator as part of your startup process; a minimal example config is sketched below. If you still do not have enough swap, you can still create a swapfile.
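For example, a minimal sketch assuming the systemd `zram-generator` package; other generators (e.g. zram-tools) use different config paths, option names, and units:

```sh
# Write a zram-generator config; sizes here are in MB, so 8192 matches the
# 4x2GB=8GiB example above (as a single device rather than four).
sudo tee /etc/systemd/zram-generator.conf >/dev/null <<'EOF'
[zram0]
zram-size = 8192
compression-algorithm = zstd
# higher priority than any disk swap, so ZRAM fills first
swap-priority = 100
EOF

# Apply without rebooting.
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service
```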
ZRAM can be configured to use a partition as fallback, but not a file. However you can enable a standard swapfile as described in the prior section. ZRAM generators usually create ZRAM partitions with the highest priority (lowest priority number), so ZRAM will fill up first, before normal disk swapping. + +To check your swap configuration: +```sh +swapon # no argument; lists active swap devices and their priorities +cat /proc/swaps +``` + +To make maximum use of your swap, make sure to **__SET THE FOLLOWING VM SETTINGS__** + +```sh +#disable I/O clustering; we are writing to memory, which is super fast +#IF YOU DO NOT DO THIS YOUR SYSTEM WILL HITCH as it tries to lock multiple RAM pages. This would be desirable on non-volatile storage but is actually bad on RAM. +vm.page-cluster=0 + +#Tell the system that it costs almost as much to swap as to write out dirty pages. But bias it very slightly to completing writes. This is ideal since hydrus tends to hammer the system with writes, and we want to use ZRAM to eat spikes, but also want the system to slightly prefer writing dirty pages. +vm.swappiness=99 +``` + +---- + +The above is good for most users. If, however, you also need to speed up your storage because a high number of applications on your network are using it, you may wish to install a cache, provided you have at least one or two available SSD slots and the writing pattern is many small random writes. + +You should never create a write cache without knowing what you are doing. You need two SSDs to crosscheck each other, and ideally resilient server SSDs with large capacitors that ensure all content is always written. If you go with a commercial storage solution they will probably check this already, and give you a nice interface for just inserting and assigning SSD cache. + +You can also create a cache manually with the **Logical Volume Manager** (LVM). If you do this, you can group together storage volumes. In particular, you can put a [read or write cache](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation) with an SSD in front of a slower HDD. diff --git a/docs/changelog.md b/docs/changelog.md index 7873b4fa..9d881af7 100644 --- a/docs/changelog.md +++ b/docs/changelog.md @@ -7,6 +7,50 @@ title: Changelog !!! note This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html). +## [Version 564](https://github.com/hydrusnetwork/hydrus/releases/tag/v564) + +### more macOS work + +### thanks to a user, we have more macOS features + +* macOS users get a new shortcut action, default Space, that uses Quick Look to preview a thumbnail like you can in Finder. **all existing users will get the new shortcut!** +* the hydrus .app now has the version number in Get Info +* **macOS users who run from source should rebuild their venvs this week!** if you don't, then trying this new Quick Look feature will just give you an error notification + +### new fuzzy operator math in system predicates + +* the system predicates for width, height, num_notes, num_words, num_urls, num_frames, duration, and framerate now support two different kinds of approximate equals, ≈: absolute (±x), and percentage (±x%). previously, the ≈ secretly just did ±15% in all cases (issue #1468) +* all `system:framerate=x` searches are now converted to `±5%`, which is what they were behind the scenes.
`!=` framerate stuff is no longer supported, so if you happened to use it, it is now converted to `<` just as a valid fallback +* `system:duration` gets the same thing, `±5%`. it wasn't doing this behind the scenes before, but it should have been! +* `system:duration` also now allows hours and minutes input, if you need longer! +* for now, the parsing system is not updated to specify the % or absolute ± values. it will remain the same as the old system, with ±15% as the default for a `~=` input +* there's still a little borked logic in these combined types. if you search `< 3 URLs`, that will return files with 0 URLs, and same for `num_notes`, but if you search `< 200px width` or any of the others I changed this week, that won't return a PDF that has no width (although it will return a damaged file that reports 0 width specifically). I am going to think about this, since there isn't an easy one-size-fits-all solution to marry what is technically correct with what is actually convenient. I'll probably add a checkbox that says whether to include 'Null' values or not and default that True/False depending on the situation; let me know what you think! + +### misc + +* I have taken out Space as the default for archive/delete filter 'keep' and duplicate filter 'this is better, delete other'. Space is now exclusively, by default, media pause/play. **I am going to set this to existing users too, deleting/overwriting what Space does for you, if you are still set to the defaults** +* integer percentages are now rendered without the trailing `.0`. `15%`, not `15.0%` +* when you 'open externally', 'open in web browser', or 'open path' from a thumbnail, the preview viewer now pauses rather than clears completely +* fixed the edit shortcut panel ALWAYS showing the new (home/end/left/right/to focus) dropdown for thumbnail dropdown, arrgh +* I fixed a stupid typo that was breaking file repository file deletes +* `help->about` now shows the Qt platformName +* added a note about bad Wayland support to the Linux 'installing' help document +* the guy who wrote the `Fixing_Hydrus_Random_Crashes_Under_Linux` document has updated it with new information, particularly related to running hydrus fast using virtual memory on small, underpowered computers + +### client api + +* thanks to a user, the undocumented API call that returns info on importer pages now includes the sha256 file hash in each import object +* although it is a tiny change, let's nonetheless update the Client API version to 61 + +### boring predicate overhaul work + +* updated the `NumberTest` object to hold specific percentage and absolute ± values +* updated the `NumberTest` object to render itself to any number format, for instance pixels vs kilobytes vs a time delta +* updated the `Predicate` object for system preds width, height, num_notes, num_words, num_urls, num_frames, duration, and framerate to store their operator and value as a `NumberTest`, and updated predicate string rendering, parsing, editing, database-level predicate handling +* wrote new widgets to edit `NumberTest`s of various sorts and spammed them to these (operator, value) system predicate UI panels.
we are finally clearing out some 8+-year-old jank here +* rewrote the `num_notes` database search logic to use `NumberTest`s +* the system preds for height, width, and framerate now say 'has x' and 'no x' when set to `>0` or `=0`, although what these really mean is not perfectly defined + ## [Version 563](https://github.com/hydrusnetwork/hydrus/releases/tag/v563) ### macOS improvements @@ -418,68 +462,3 @@ title: Changelog * slimmed down some of the watcher/subscription fixed-checking-time code * misc formatting cleanup and surplus import clearout * fixed the discord link in the PTR help document - -## [Version 554](https://github.com/hydrusnetwork/hydrus/releases/tag/v554) - -### checker options fixes - -* **sorry for any jank 'static check interval' watcher or subscription timings you saw last week! I screwed something up and it slipped through testing** -* the 'static check interval' logic is much much simpler. rather than try to always keep to the same check period, even if the actual check is delayed, it just works off 'last check time + period', every time. the clever stuff was generally confusing and failing in a variety of ways -* fixed a bug in the new static check time code that was stopping certain in-limbo watchers from calculating their correct next check time on program load -* fixed a bug in the new static check time code that was causing too many checks in long-paused-and-now-unpaused downloaders -* some new unit tests will make sure these errors do not happen again -* in the checker options UI, if you uncheck 'just check at a static, regular interval', and leave the faster/slower values as the same when you OK, then the dialog now asks you if that is what you want -* in the checker options UI, the 'slower than' value will now automatically update itself to be no smaller than the 'faster than' value - -### job status fixes and cleanup (mostly boring) - -* **sorry for any 'Cancel/IsCancellable' related errors you saw last week! I screwed something else up** -* fixed a dumb infinite recursion error in the new job status cancellable 'safety' checks that was happening when it was time to auto-dismiss a cancellable job due to program/thread shutdown or a maintenance mode change. this also fixes some non-dismissing popup messages (usually subscriptions) that weren't setting their cancel status correctly -* this happened because the code here was ancient and ugly. I have renamed, simplified, and reworked the logical dangerzone variables and methods in the job status object so we don't run into this problem again. 'Cancel' and 'Finish' no longer take a seconds parameter, 'Delete' is now 'FinishAndDismiss', 'IsDeleted' is now 'IsDismissed', 'IsDeletable' is now merged into a cleaner 'IsDone', 'IsWorking' is removed, 'SetCancellable' and 'SetPausable' are removed (these will always be in the init, and will determine what type of job we have), and the various new Client API calls and help are updated for this -* also, the job status methods now check their backstop 'cancel' tests far less often, and there's a throttle to make sure they can only run once a second anyway -* also ditched the needless threading events for simple bools -* also cleared up about 40 pointless Finish/FinishAndDismiss duplicate calls across the program -* also fixed up the job status object to do its various yield pauses more sanely - -### cbz and ugoira detection and thumbnails - -* CBZ files are now detected! 
there is no very strict standard of what is or isn't a CBZ (it is basically just a zip of images and maybe some metadata files), but I threw together a 'yeah that looks like a cbz' test that now runs on every zip. there will probably be several false positives, but with luck fewer false negatives, which I think is the way we want to lean here. if you have just some zip of similarly named images, it'll now be counted as a CBZ, but I think we'll nonetheless want to give those all the upcoming CBZ tech anyway, even if they aren't technically intended to be 'CBZ', whatever that actually means here other than the different file extension -* the client looks for the cover image in your CBZ and uses that for the thumbnail! it also uses this file's resolution as the CBZ resolution -* Ugoira files are now detected! there is a firmer standard of what an Ugoira is, but it is still tricky as we are just talking about a different list of zipped image files here. I expect zero false negatives and some false positives (unfortunately, it'll be CBZs with zero-padded numerical-only filenames). as all ugoiras are valid CBZs but few CBZs are valid ugoiras, the Ugoira test runs first -* the client now gets a thumbnail for Ugoiras. It'll also use the x%-in setting that other animations and videos use! it also fetches resolution and 'num frames'. duration can't be inferred just yet, but we hope to have some options (and actual rendering) happening in the medium-term future -* this is all an experiment. let me know how it goes, and send in any examples of it failing awfully. there is lots more to do. if things don't explode with this much, I'll see about .cbr and cb7, which seems totally doable, and then I can seriously plan out UI for actual view and internal navigation. I can't promise proper reader features like bookmarks or anything, but I'll keep on pushing -* all your existing zips will be scheduled for a filetype re-scan on update - -### animations - -* the native FFMPEG renderer pipeline is now capable of transparency. APNGs rendered in the native viewer now have correct transparency and can pass 'has transparency' checks -* all your apngs will be scheduled for the 'has transparency' check, just like pngs and gifs and stuff a couple weeks ago. thanks to the user who submitted some transparency-having apngs to test with! -* the thumbnails for animated gifs are now taken using the FFMPEG renderer, which puts them x% in, just like APNG and other video. transparency in these thumbnails also seems to be good! am not going to regen everyone's animated gif thumbs yet--I'll do some more IRL testing--but this will probably come in a few weeks. let me know if you see a bevy of newly imported gifs with crazy thumbs -* I also overhauled the native GIF renderer. what used to be a cobbled-together RGB OpenCV solution with a fallback to bad PIL code is now a proper only-PIL RGBA solution, and the transparency seems to be great now (the OpenCV code had no transparency, and the PIL fallback tried but generally drew the last frame on top of the previous, giving a noclip effect). 
the new renderer also skips to an unrendered area faster -* given the file maintenance I/O Error problems we had the past couple weeks, I also made this cleaner GIF renderer much more robust; it will generally just rewind itself or give a black frame if it runs into truncation problems, no worries, and for gifs that just have one weird frame that doesn't break seek, it should be able to skip past those now, repeating the last good frame until it hits something valid -* as a side thing, the FFMPEG GIF renderer seems capable of doing almost everything the PIL renderer can now. I can flip the code to using the FFMPEG pipeline and gifs come through fine, transparency included. I prefer the PIL for now, but depending on how things go, I may add options to use the FFMPEG bridge as a testbed/fallback in future -* added some PIL animated gif rendering tech to handle a gif that out of nowhere produces a giga 85171x53524 frame, eating up multiple GB of memory and taking twenty seconds to failrender -* fixed yet another potential source of the false positive I/O Errors caused by the recent 'has transparency' checking, this time not just in malformed animated gif frames, but some busted static images too -* improved the PIL loading code a little more, converting more possible I/O Errors and other weird damaged file states to the correct hydrus-internal exception types with nicer error texts -* the 'disable CV for gifs' option is removed - -### file pre-import checks - -* the 'is this file free to work on' test that runs before files are added to the manual or import folder file list now has an additional file-open check. this improves reliability over NAS connections, where the file may be used by a remote process, and also improves detection for files where the current user only has read permissions -* import folders now have a 'recent modified time skip period' setting, defaulting to 60 seconds. any file that has a modified date newer than that many seconds ago will not be imported on the current check. this helps to avoid importing files that are currently being downloaded/copied into the folder when the import folder runs (when that folder/download process is otherwise immune to the existing 'already in use' checks) -* import folders now repeat-check folders that have many previously-seen files much faster - -### misc - -* the 'max gif size' setting in the quiet and loud file import options now defaults to 'no limit'. it used to be 32MB, to catch various trash webm re-encodes, but these days it catches more false positives than it is worth, and 32MB is less of a deal these days too -* the test on boot to see if the given database location is writeable-to should now give an error when that location is on a non--existing location (e.g. a removable usb drive that is not currently plugged in). previously, it could, depending on the situation, either proceed and go crazy later or wait indefinitely on a CPU-heavy busy-wait for the drive to be plugged back in. unfortunately, because at this stage there is no logfile location and no UI, if your custom db dir does not and cannot exist, the program terminates instantly and silently writes a crash log to your desktop. I have made a plan to improve this in future -* also cleaned up all the db_dir boot code generally. the various validity tests should now only happen once per potential location -* the function that converts an export phrase into a filename will now elide long unicode filenames correctly. 
filenames with complex unicode characters will take more than one byte per character (and most OSes have ~255 byte filename limit), which requires a trickier check. also, on Windows, where there is a 260-character total path limit, the combined directory+filename length is checked better, and just checked on Windows. all errors raised here are better -* added some unit tests to check the new path eliding tech -* brushed up the 'getting started with ratings' help a little - -### client api - -* thanks to a user, the Client API now has the ability to see and interact with the current popup messages in the popup toaster! -* fixed a stupid typo that I made in the new Client API options call. added a unit test to catch this in future, too -* the client api version is now 57 diff --git a/docs/duplicates.md b/docs/duplicates.md index 07794d8e..a7895c0b 100644 --- a/docs/duplicates.md +++ b/docs/duplicates.md @@ -70,7 +70,7 @@ There are three ways a file can be related to another in the current duplicates You can customise the shortcuts under _file->shortcuts->duplicate_filter_. The defaults are: -* Left-click or space: **this is better, delete the other**. +* Left-click: **this is better, delete the other**. * Right-click: **they are related alternates**. diff --git a/docs/getting_started_files.md b/docs/getting_started_files.md index b056ae0e..c236c225 100644 --- a/docs/getting_started_files.md +++ b/docs/getting_started_files.md @@ -97,7 +97,7 @@ Lets say you just downloaded a good thread, or perhaps you just imported an old Select some thumbnails, and either choose _filter->archive/delete_ from the right-click menu or hit F12. You will see them in a special version of the media viewer, with the following default controls: -* ++left-button++, ++space++, or ++f7++: **keep and archive the file, move on** +* ++left-button++ or ++f7++: **keep and archive the file, move on** * ++right-button++ or ++delete++: **delete the file, move on** * ++up++: **Skip this file, move on** * ++middle-button++ or ++backspace++: **I didn't mean that, go back one** diff --git a/docs/getting_started_installing.md b/docs/getting_started_installing.md index f67b6783..9c8409e3 100644 --- a/docs/getting_started_installing.md +++ b/docs/getting_started_installing.md @@ -37,6 +37,9 @@ I try to release a new version every Wednesday by 8pm EST and write an accompany * _This release has always been a little buggy. Many macOS users are having better success [running from source](running_from_source.md)._ === "Linux" + + !!! warning "Wayland" + Unfortunately, hydrus has several bad bugs in Wayland. The mpv window will often not embed properly into the media viewer, menus and windows may position on the wrong screen, and the taskbar icon may not work at all. [Running from source](running_from_source.md) may improve the situation, but some of these issues seem to be intractable for now. X11 is much happier with hydrus. * Get the .tag.gz. Extract it somewhere useful and create shortcuts to 'client' and 'server' as you like. The build is made on Ubuntu, so if you run something else, compatibility is hit and miss. * If you have problems running the Ubuntu build, users with some python experience generally find running from source works well. diff --git a/docs/old_changelog.html b/docs/old_changelog.html index a8fe893a..097c2219 100644 --- a/docs/old_changelog.html +++ b/docs/old_changelog.html @@ -34,6 +34,42 @@

changelog