----------------------
HAProxy how-to
----------------------
version 1.8
willy tarreau
2017/11/26
1) How to build it
------------------
This version is a stable version, which means that it belongs to a branch which
will get some fixes for bugs as they are discovered. Versions which include the
suffix "-dev" are development versions and should be avoided in production. If
you are not used to building from sources or to following updates, it is
recommended that you instead use the packages provided by your software vendor
or Linux distribution. Most of them take this task seriously and do a good job
of backporting important fixes. If for any reason you'd prefer a different
version than the one packaged for your system, if you want to be certain to
have all the fixes, or if you need commercial support, other choices are
available at :
http://www.haproxy.com/
To build haproxy, you will need :
- GNU make. Neither Solaris nor OpenBSD's make work with the GNU Makefile.
If you get many syntax errors when running "make", you may want to retry
with "gmake" which is the name commonly used for GNU make on BSD systems.
- GCC between 2.95 and 4.8. Other versions may work, but have not been tested.
- GNU ld
Also, you might want to build with libpcre support, which will provide a very
efficient regex implementation and will also work around some deficiencies in
Solaris' own implementation.
To build haproxy, you have to choose your target OS amongst the following ones
and assign it to the TARGET variable :
- linux22 for Linux 2.2
- linux24 for Linux 2.4 and above (default)
- linux24e for Linux 2.4 with support for a working epoll (> 0.21)
- linux26 for Linux 2.6 and above
- linux2628 for Linux 2.6.28, 3.x, and above (enables splice and tproxy)
- solaris for Solaris 8 or 10 (others untested)
- freebsd for FreeBSD 5 to 10 (others untested)
- netbsd for NetBSD
- osx for Mac OS/X
- openbsd for OpenBSD 5.7 and above
- aix51 for AIX 5.1
- aix52 for AIX 5.2
- cygwin for Cygwin
- haiku for Haiku
- generic for any other OS or version.
- custom to manually adjust every setting
You may also choose your CPU to benefit from some optimizations. This is
particularly important on UltraSparc machines. For this, you can assign
one of the following choices to the CPU variable :
- i686 for intel PentiumPro, Pentium 2 and above, AMD Athlon
- i586 for intel Pentium, AMD K6, VIA C3.
- ultrasparc : Sun UltraSparc I/II/III/IV processor
- native : use the build machine's specific processor optimizations. Use with
extreme care, and never in virtualized environments (known to break).
- generic : any other processor or no CPU-specific optimization. (default)
Alternatively, you may just set the CPU_CFLAGS value to the optimal GCC options
for your platform.
You may want to build specific target binaries which do not match your native
compiler's target. This is particularly true on 64-bit systems when you want
to build a 32-bit binary. Use the ARCH variable for this purpose. Right now
it only knows about a few x86 variants (i386,i486,i586,i686,x86_64), two
generic ones (32,64) and sets -m32/-m64 as well as -march=<arch> accordingly.
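As an illustration, putting these variables together for a 32-bit build on a
64-bit Linux machine could look like this (the target name is only an example,
pick the one matching your system) :
$ make TARGET=linux2628 ARCH=i386 CPU=generic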
If your system supports PCRE (Perl Compatible Regular Expressions), then you
really should build with libpcre which is between 2 and 10 times faster than
other libc implementations. Regexes are used for header processing (deletion,
rewriting, allow, deny). The only drawback of libpcre is that it is not yet
widely deployed, so if you build for other systems, you might get into trouble
if they don't have the dynamic library. In this situation, you should
statically link libpcre into haproxy so that it will not be necessary to
install it on target systems. Available build options for PCRE are :
- USE_PCRE=1 to use libpcre, in whatever form is available on your system
(shared or static)
- USE_STATIC_PCRE=1 to use a static version of libpcre even if the dynamic
one is available. This will enhance portability.
- with no option, use your OS libc's standard regex implementation (default).
Warning! group references on Solaris seem broken. Use static-pcre whenever
possible.
If your system doesn't provide PCRE, you are encouraged to download it from
http://www.pcre.org/ and build it yourself, it's fast and easy.
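For example, a build carrying its own statically linked libpcre, so that the
target system does not need the shared library, could be made like this
(assuming a recent Linux target) :
$ make TARGET=linux2628 USE_STATIC_PCRE=1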
Recent systems can resolve IPv6 host names using getaddrinfo(). This primitive
is not present in all libcs and does not work in all of them either. Support in
glibc was broken before 2.3. Some embedded libs may not properly work either,
thus, support is disabled by default, meaning that some host names which only
resolve as IPv6 addresses will not resolve and configs might emit an error
during parsing. If you know that your OS libc has reliable support for
getaddrinfo(), you can add USE_GETADDRINFO=1 on the make command line to enable
it. This is the recommended option for most Linux distro packagers since it's
working fine on all recent mainstream distros. It is automatically enabled on
Solaris 8 and above, as it's known to work.
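For instance, a package built for a recent glibc-based Linux distribution
might simply add the flag to its usual command line, for example :
$ make TARGET=linux2628 USE_GETADDRINFO=1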
It is possible to add native support for SSL using the GNU makefile, by passing
"USE_OPENSSL=1" on the make command line. The libssl and libcrypto will
automatically be linked with haproxy. Some systems also require libz, so if the
build fails due to missing symbols such as deflateInit(), then try again with
"ADDLIB=-lz".
You are strongly encouraged to always use an up-to-date version of OpenSSL, as
found on https://www.openssl.org/ as vulnerabilities are occasionally found and
you don't want them on your systems. HAProxy is known to build correctly on all
currently supported branches (0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0 at the time
of writing). Branch 1.0.2 is currently recommended for the best combination of
features and stability. Asynchronous engines require OpenSSL 1.1.0 though. It's
worth mentioning that some OpenSSL derivatives are also reported to work but
may occasionally break. Patches to fix them are welcome but please read the
CONTRIBUTING file first.
To link OpenSSL statically against haproxy, build OpenSSL with the no-shared
keyword and install it to a local directory, so your system is not affected :
$ export STATICLIBSSL=/tmp/staticlibssl
$ ./config --prefix=$STATICLIBSSL no-shared
$ make && make install_sw
When building haproxy, pass that path via SSL_INC and SSL_LIB to make and
include additional libs with ADDLIB if needed (in this case for example libdl):
$ make TARGET=linux26 USE_OPENSSL=1 SSL_INC=$STATICLIBSSL/include SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl
It is also possible to include native support for zlib to benefit from HTTP
compression. For this, pass "USE_ZLIB=1" on the "make" command line and ensure
that zlib is present on the system. Alternatively it is possible to use libslz
for a faster, memory-less, but slightly less efficient compression, by passing
"USE_SLZ=1".
Zlib is commonly found on most systems, otherwise updates can be retrieved from
http://www.zlib.net/. It is easy and fast to build. Libslz can be downloaded
from http://1wt.eu/projects/libslz/ and is even easier to build.
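As an example, either of the following command lines enables HTTP compression,
with zlib and with libslz respectively (target name given as an example only) :
$ make TARGET=linux2628 USE_ZLIB=1
$ make TARGET=linux2628 USE_SLZ=1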
By default, the DEBUG variable is set to '-g' to enable debug symbols. It is
not wise to disable it on uncommon systems, because it's often the only way to
get a complete core when you need one. Otherwise, you can set DEBUG to '-s' to
strip the binary.
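If you do decide to strip the binary despite the advice above, the command
could look like this (again with an example target) :
$ make TARGET=linux2628 DEBUG=-s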
For example, I use this to build for Solaris 8 :
$ make TARGET=solaris CPU=ultrasparc USE_STATIC_PCRE=1
And I build it this way on OpenBSD or FreeBSD :
$ gmake TARGET=freebsd USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
And on a classic Linux with SSL and ZLIB support (eg: Red Hat 5.x) :
$ make TARGET=linux26 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
And on a recent Linux >= 2.6.28 with SSL and ZLIB support :
$ make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
In order to build a 32-bit binary on an x86_64 Linux system with SSL support
without support for compression but when OpenSSL requires ZLIB anyway :
$ make TARGET=linux26 ARCH=i386 USE_OPENSSL=1 ADDLIB=-lz
The SSL stack supports session cache synchronization between all running
processes. This involves some atomic operations and synchronization operations
which come in multiple flavors depending on the system and architecture :
Atomic operations :
- internal assembler versions for x86/x86_64 architectures
- gcc builtins for other architectures. Some architectures might not
be fully supported or might require a more recent version of gcc.
If your architecture is not supported, you will have to either use
pthread if supported, or to disable the shared cache.
- pthread (posix threads). Pthreads are very common but inter-process
support is not that common, and some older operating systems did not
report an error when enabling multi-process mode, so they used to
silently fail, possibly causing crashes. Linux's implementation is
fine. OpenBSD doesn't support them and doesn't build. FreeBSD 9 builds
and reports an error at runtime, while certain older versions might
silently fail. Pthreads are enabled using USE_PTHREAD_PSHARED=1.
Synchronization operations :
- internal spinlock : this mode is OS-independent, light but will not
scale well to many processes. However, accesses to the session cache
are rare enough that this mode could certainly always be used. This
is the default mode.
- Futexes, which are Linux-specific highly scalable light weight mutexes
implemented in user-space with some limited assistance from the kernel.
This is the default on Linux 2.6 and above and is enabled by passing
USE_FUTEX=1
- pthread (posix threads). See above.
If none of these mechanisms is supported by your platform, you may need to
build with USE_PRIVATE_CACHE=1 to totally disable SSL cache sharing. Then
it is better not to run SSL on multiple processes.
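For instance, on a platform lacking suitable atomic operations or futexes, an
SSL-enabled build with cache sharing disabled, or relying on process-shared
pthreads instead, could be requested like this :
$ make TARGET=generic USE_OPENSSL=1 USE_PRIVATE_CACHE=1
$ make TARGET=generic USE_OPENSSL=1 USE_PTHREAD_PSHARED=1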
If you need to pass other defines, includes, libraries, etc... then please
check the Makefile to see which ones will be available in your case, and
use the USE_* variables in the Makefile.
AIX 5.3 is known to work with the generic target. However, for the binary to
also run on 5.2 or earlier, you need to build with DEFINE="-D_MSGQSUPPORT",
otherwise __fd_select() will be used while not being present in the libc, but
this is easily addressed using the "aix52" target. If you get build errors
because of strange symbols or section mismatches, simply remove -g from
DEBUG_CFLAGS.
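Following the above, an AIX build meant to also run on 5.2 or earlier could be
invoked roughly as shown below (use "gmake" if that is how GNU make is named on
your system), or alternatively just use the "aix52" target :
$ make TARGET=generic DEFINE="-D_MSGQSUPPORT"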
You can easily define your own target with the GNU Makefile. Unknown targets
are processed with no default option except USE_POLL=default. So you can very
well use that property to define your own set of options. USE_POLL can even be
disabled by setting USE_POLL="". For example :
$ gmake TARGET=tiny USE_POLL="" TARGET_CFLAGS=-fomit-frame-pointer
1.1) Device Detection
---------------------
HAProxy supports several device detection modules relying on third party
products. Some of them may provide free code, others free libs, others free
evaluation licenses. Please read about their respective details in the
following files :
doc/DeviceAtlas-device-detection.txt for DeviceAtlas
doc/51Degrees-device-detection.txt for 51Degrees
doc/WURFL-device-detection.txt for Scientiamobile WURFL
2) How to install it
--------------------
To install haproxy, you can either copy the single resulting binary to the
place you want, or run :
$ sudo make install
If you're packaging it for another system, you can specify its root directory
in the usual DESTDIR variable.
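For example, to stage the installation into a temporary directory for
packaging (the path below is just an illustration) :
$ make install DESTDIR=/tmp/haproxy-staging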
3) How to set it up
-------------------
There is some documentation in the doc/ directory :
- intro.txt : this is an introduction to haproxy, it explains what it is and
what it is not. Useful for beginners or to re-discover it when planning
for an upgrade.
- architecture.txt : this is the architecture manual. It is quite old and
does not tell about the nice new features, but it's still a good starting
point when you know what you want but don't know how to do it.
- configuration.txt : this is the configuration manual. It recalls a few
essential HTTP basic concepts, and details all the configuration file
syntax (keywords, units). It also describes the log and stats format. It
is normally always up to date. If you see that something is missing from
it, please report it as this is a bug. Please note that this file is
huge and that it's generally more convenient to review Cyril Bonté's
HTML translation online here :
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html
- management.txt : it explains how to start haproxy, how to manage it at
runtime, how to manage it on multiple nodes, how to proceed with seamless
upgrades.
- gpl.txt / lgpl.txt : the copy of the licenses covering the software. See
the 'LICENSE' file at the top for more information.
- the rest is mainly for developers.
There are also a number of nice configuration examples in the "examples"
directory as well as on several sites and articles on the net which are linked
to from the haproxy web site.
4) How to report a bug
----------------------
It is possible that from time to time you'll find a bug. A bug is a case where
what you see is not what is documented. Otherwise it can be a misdesign. If you
find that something is stupidly designed, please discuss it on the list (see the
"how to contribute" section below). If you feel like you're proceeding right
and haproxy doesn't obey, then first ask yourself if it is possible that nobody
before you has even encountered this issue. If it's unlikely, then you probably
have an issue in your setup. Just in case of doubt, please consult the mailing
list archives :
http://marc.info/?l=haproxy
Otherwise, please try to gather the maximum amount of information to help
reproduce the issue and send that to the mailing list :
haproxy@formilux.org
Please include your configuration and logs. You can mask your IP addresses and
passwords, we don't need them. But it's essential that you post your config if
you want people to guess what is happening.
Also, keep in mind that haproxy is designed to NEVER CRASH. If you see it die
without any reason, then it definitely is a critical bug that must be reported
and urgently fixed. It has happened a couple of times in the past, essentially
on development versions running on new architectures. If you think your setup
is fairly common, then it is possible that the issue is totally unrelated.
Anyway, if that happens, feel free to contact me directly, as I will give you
instructions on how to collect a usable core file, and will probably ask for
other captures that you'll not want to share with the list.
5) How to contribute
--------------------
Please carefully read the CONTRIBUTING file that comes with the sources. It is
mandatory.
-- end