----------------------
HAProxy how-to
----------------------
version 1.6-dev
willy tarreau
2015/03/11
1) How to build it
------------------
First, please note that this version is a development version, so in general,
if you are not used to building from sources or if you don't have the time to
track very frequent updates, it is recommended that you switch to the stable
version (1.5) instead, or follow the packaged updates provided by your software
vendor or Linux distribution. Most of them take this task seriously and do a
good job. If for any reason you'd prefer a different version than the one
packaged for your system, or want some commercial support, other choices are
available at :

        http://www.haproxy.com/
To build haproxy, you will need :
- GNU make. Neither Solaris nor OpenBSD's make work with the GNU Makefile.
If you get many syntax errors when running "make", you may want to retry
with "gmake" which is the name commonly used for GNU make on BSD systems.
  - GCC between 2.95 and 4.8. Other versions may work but have not been tested.
- GNU ld
Also, you might want to build with libpcre support, which will provide a very
efficient regex implementation and will also work around some deficiencies of
Solaris' own implementation.
To build haproxy, you have to choose your target OS amongst the following ones
and assign it to the TARGET variable :
- linux22 for Linux 2.2
- linux24 for Linux 2.4 and above (default)
- linux24e for Linux 2.4 with support for a working epoll (> 0.21)
- linux26 for Linux 2.6 and above
- linux2628 for Linux 2.6.28, 3.x, and above (enables splice and tproxy)
- solaris for Solaris 8 or 10 (others untested)
- freebsd for FreeBSD 5 to 10 (others untested)
- osx for Mac OS/X
- openbsd for OpenBSD 3.1 to 5.2 (others untested)
- aix51 for AIX 5.1
- aix52 for AIX 5.2
- cygwin for Cygwin
- generic for any other OS or version.
- custom to manually adjust every setting
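
As an example, a default build on a recent Linux kernel could look like the
following (the target shown here is only an illustration, pick the one that
matches your system) :

$ make TARGET=linux2628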
You may also choose your CPU to benefit from some optimizations. This is
particularly important on UltraSparc machines. For this, you can assign
one of the following choices to the CPU variable :
- i686 for intel PentiumPro, Pentium 2 and above, AMD Athlon
- i586 for intel Pentium, AMD K6, VIA C3.
- ultrasparc : Sun UltraSparc I/II/III/IV processor
- native : use the build machine's specific processor optimizations. Use with
extreme care, and never in virtualized environments (known to break).
- generic : any other processor or no CPU-specific optimization. (default)
Alternatively, you may just set the CPU_CFLAGS value to the optimal GCC options
for your platform.
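
For instance, you could pass your own tuning flags instead of a CPU preset
(the flags below are only an example, assuming a GCC version that understands
them) :

$ make TARGET=linux2628 CPU_CFLAGS="-O2 -march=core2"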
You may want to build specific target binaries which do not match your native
compiler's target. This is particularly true on 64-bit systems when you want
to build a 32-bit binary. Use the ARCH variable for this purpose. Right now
it only knows about a few x86 variants (i386,i486,i586,i686,x86_64), two
generic ones (32,64) and sets -m32/-m64 as well as -march=<arch> accordingly.
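
For example, to build a 32-bit binary on a 64-bit Linux system (illustrative
invocation) :

$ make TARGET=linux2628 ARCH=32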
If your system supports PCRE (Perl Compatible Regular Expressions), then you
really should build with libpcre which is between 2 and 10 times faster than
other libc implementations. Regexes are used for header processing (deletion,
rewriting, allow, deny). The only drawback of libpcre is that it is not yet
widely deployed, so if you build for other systems, you might get into
trouble if they don't have the dynamic library. In this situation, you should
statically link libpcre into haproxy so that it will not be necessary to
install it on target systems. Available build options for PCRE are :
- USE_PCRE=1 to use libpcre, in whatever form is available on your system
(shared or static)
- USE_STATIC_PCRE=1 to use a static version of libpcre even if the dynamic
one is available. This will enhance portability.
- with no option, use your OS libc's standard regex implementation (default).
Warning! group references on Solaris seem broken. Use static-pcre whenever
possible.
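
For example, to statically link libpcre for better portability (illustrative
invocation) :

$ make TARGET=linux2628 USE_STATIC_PCRE=1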
Recent systems can resolve IPv6 host names using getaddrinfo(). This primitive
is not present in all libcs and does not work in all of them either. Support in
glibc was broken before 2.3. Some embedded libs may not work properly either;
thus, support is disabled by default, meaning that some host names which only
resolve as IPv6 addresses will not resolve and configs might emit an error
during parsing. If you know that your OS libc has reliable support for
getaddrinfo(), you can add USE_GETADDRINFO=1 on the make command line to enable
it. This is the recommended option for most Linux distro packagers since it's
working fine on all recent mainstream distros. It is automatically enabled on
Solaris 8 and above, as it's known to work.
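
For example, when packaging for a recent mainstream Linux distro (illustrative
invocation) :

$ make TARGET=linux2628 USE_GETADDRINFO=1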
It is possible to add native support for SSL using the GNU makefile, by passing
"USE_OPENSSL=1" on the make command line. The libssl and libcrypto libraries
will automatically be linked with haproxy. Some systems also require libz, so
if the
build fails due to missing symbols such as deflateInit(), then try again with
"ADDLIB=-lz".
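
For example, a dynamically linked SSL-enabled build with libz added
(illustrative invocation) :

$ make TARGET=linux2628 USE_OPENSSL=1 ADDLIB=-lz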
To link OpenSSL statically against haproxy, build OpenSSL with the no-shared
keyword and install it to a local directory, so your system is not affected :
$ export STATICLIBSSL=/tmp/staticlibssl
$ ./config --prefix=$STATICLIBSSL no-shared
$ make && make install_sw
When building haproxy, pass that path via SSL_INC and SSL_LIB to make and
include additional libs with ADDLIB if needed (in this case for example libdl):
$ make TARGET=linux26 USE_OPENSSL=1 SSL_INC=$STATICLIBSSL/include SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl
It is also possible to include native support for ZLIB to benefit from HTTP
compression. For this, pass "USE_ZLIB=1" on the "make" command line and ensure
that zlib is present on the system. Alternatively it is possible to use libslz
for a faster, memory-less, but slightly less efficient compression, by passing
"USE_SLZ=1".
By default, the DEBUG variable is set to '-g' to enable debug symbols. It is
not wise to disable it on uncommon systems, because it's often the only way to
get a complete core when you need one. Otherwise, you can set DEBUG to '-s' to
strip the binary.
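For example, to produce a stripped binary on a common Linux target (a sketch;
adjust TARGET and USE_* flags to your platform):

```shell
# Build with a stripped binary instead of the default '-g' debug symbols
make TARGET=linux2628 DEBUG="-s" USE_OPENSSL=1 USE_ZLIB=1
```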
For example, I use this to build for Solaris 8 :
$ make TARGET=solaris CPU=ultrasparc USE_STATIC_PCRE=1
And I build it this way on OpenBSD or FreeBSD :
$ gmake TARGET=freebsd USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
And on a classic Linux with SSL and ZLIB support (eg: Red Hat 5.x) :
$ make TARGET=linux26 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
And on a recent Linux >= 2.6.28 with SSL and ZLIB support :
$ make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
In order to build a 32-bit binary on an x86_64 Linux system with SSL support
without support for compression but when OpenSSL requires ZLIB anyway :
$ make TARGET=linux26 ARCH=i386 USE_OPENSSL=1 ADDLIB=-lz
The SSL stack supports session cache synchronization between all running
processes. This involves some atomic operations and synchronization operations
which come in multiple flavors depending on the system and architecture :
Atomic operations :
- internal assembler versions for x86/x86_64 architectures
- gcc builtins for other architectures. Some architectures might not
be fully supported or might require a more recent version of gcc.
If your architecture is not supported, you will have to either use
pthreads if supported, or disable the shared cache.
- pthread (posix threads). Pthreads are very common but inter-process
support is not that common, and some older operating systems did not
report an error when enabling multi-process mode, so they used to
silently fail, possibly causing crashes. Linux's implementation is
fine. OpenBSD doesn't support them and doesn't build. FreeBSD 9 builds
and reports an error at runtime, while certain older versions might
silently fail. Pthreads are enabled using USE_PTHREAD_PSHARED=1.
Synchronization operations :
- internal spinlock : this mode is OS-independent, light but will not
scale well to many processes. However, accesses to the session cache
are rare enough that this mode could certainly always be used. This
is the default mode.
- Futexes, which are Linux-specific highly scalable light weight mutexes
implemented in user-space with some limited assistance from the kernel.
This is the default on Linux 2.6 and above and is enabled by passing
USE_FUTEX=1
- pthread (posix threads). See above.
If none of these mechanisms is supported by your platform, you may need to
build with USE_PRIVATE_CACHE=1 to totally disable SSL cache sharing. Then
it is better not to run SSL on multiple processes.
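Depending on which of the mechanisms above your platform supports, the build
command might look like one of the following (a sketch, assuming an OpenSSL
build; pick only one option):

```shell
# Futex-based synchronization (Linux 2.6+, the default there)
make TARGET=linux2628 USE_OPENSSL=1 USE_FUTEX=1

# Process-shared pthreads, where the OS supports them
make TARGET=generic USE_OPENSSL=1 USE_PTHREAD_PSHARED=1

# No shared SSL session cache at all (single-process SSL recommended)
make TARGET=generic USE_OPENSSL=1 USE_PRIVATE_CACHE=1
```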
If you need to pass other defines, includes, libraries, etc... then please
check the Makefile to see which ones will be available in your case, and
use the USE_* variables in the Makefile.
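To get a quick overview of the USE_* switches the build system mentions, you
can simply grep the Makefile from the source tree (a convenience one-liner;
the Makefile itself remains the authoritative reference):

```shell
# List every USE_* option referenced in the Makefile, one per line
grep -o 'USE_[A-Z0-9_]*' Makefile | sort -u
```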
AIX 5.3 is known to work with the generic target. However, for the binary to
also run on 5.2 or earlier, you need to build with DEFINE="-D_MSGQSUPPORT",
otherwise __fd_select() will be used while not being present in the libc, but
this is easily addressed using the "aix52" target. If you get build errors
because of strange symbols or section mismatches, simply remove -g from
DEBUG_CFLAGS.
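Based on the description above, a build intended to also run on AIX 5.2 or
earlier might look like this (a sketch):

```shell
# Generic target with the message queue define for AIX 5.2 compatibility,
# and -g removed in case of strange symbol or section mismatch errors
make TARGET=generic DEFINE="-D_MSGQSUPPORT" DEBUG_CFLAGS=""

# or simply use the dedicated target
make TARGET=aix52
```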
You can easily define your own target with the GNU Makefile. Unknown targets
are processed with no default option except USE_POLL=default. So you can very
well use that property to define your own set of options. USE_POLL can even be
disabled by setting USE_POLL="". For example :
$ gmake TARGET=tiny USE_POLL="" TARGET_CFLAGS=-fomit-frame-pointer
1.1) DeviceAtlas Device Detection
---------------------------------
In order to add DeviceAtlas Device Detection support, you would need to download
the API source code from https://deviceatlas.com/deviceatlas-haproxy-module and
once extracted :
$ make TARGET=<target> USE_PCRE=1 USE_DEVICEATLAS=1 DEVICEATLAS_INC=<path to the API root folder> DEVICEATLAS_LIB=<path to the API root folder>
These are supported DeviceAtlas directives (see doc/configuration.txt) :
- deviceatlas-json-file <path to the DeviceAtlas JSON data file>.
- deviceatlas-log-level <number> (0 to 3, level of information returned by
the API, 0 by default).
- deviceatlas-property-separator <character> (character used to separate the
properties produced by the API, | by default).
Sample configuration :
global
deviceatlas-json-file <path to json file>
...
frontend
bind *:8881
default_backend servers
http-request set-header X-DeviceAtlas-Data %[req.fhdr(User-Agent),da-csv(primaryHardwareType,osName,osVersion,browserName,browserVersion)]
1.2) 51Degrees Device Detection
-------------------------------
You can also include 51Degrees for inbuilt device detection enabling attributes
such as screen size (physical & pixels), supported input methods, release date,
hardware vendor and model, browser information, and device price among many
others. Such information can be used to improve the user experience of a web
site by tailoring the page content, layout and business processes to the
precise characteristics of the device. Such customisations improve profit by
making it easier for customers to get to the information or services they
need. These attributes of the device making a web request can be added to HTTP
headers as configurable parameters.
In order to enable 51Degrees, get the 51Degrees source code
(https://github.com/51Degreesmobi/51Degrees-C) and then run make with
USE_51DEGREES, 51DEGREES_INC and 51DEGREES_LIB set. Make sure to replace
'51D_REPO_PATH' with the path to the 51Degrees repository.
51Degrees provide 2 different detection algorithms.
1. Pattern - balances main memory usage and CPU.
2. Trie - a very high performance detection solution which uses more main
memory than Pattern.
To make with 51Degrees Pattern algorithm use the following command line.
$ make TARGET=linux26 USE_51DEGREES=1 51DEGREES_INC='51D_REPO_PATH'/src/pattern 51DEGREES_LIB='51D_REPO_PATH'/src/pattern
To use the 51Degrees Trie algorithm use the following command line.
$ make TARGET=linux26 USE_51DEGREES=1 51DEGREES_INC='51D_REPO_PATH'/src/trie 51DEGREES_LIB='51D_REPO_PATH'/src/trie
A data file containing information about devices, browsers, operating systems
and their associated signatures is then needed. 51Degrees provide a free
database in the GitHub repo for this purpose. These free data files are located
in '51D_REPO_PATH'/data with the extensions .dat for Pattern data and .trie for
Trie data.
The configuration file needs to set the following parameters:
51degrees-data-file path to the pattern or trie data file
51degrees-property-name-list list of 51Degrees properties to detect
51degrees-property-separator separator to use between values
The following is an example of the settings for Pattern.
51degrees-data-file '51D_REPO_PATH'/data/51Degrees-Lite.dat
51degrees-property-name-list IsTablet DeviceType IsMobile
51degrees-property-separator ,
HAProxy needs a way to pass device information to the backend servers. This is
done by using the 51d converter, which intercepts the User-Agent header and
creates some new headers. This is controlled in the frontend http-in section.
The following is an example which adds two new HTTP headers prefixed with
X-51D- :
frontend http-in
bind *:8081
default_backend servers
http-request set-header X-51D-DeviceTypeMobileTablet %[req.fhdr(User-Agent),51d(DeviceType,IsMobile,IsTablet)]
http-request set-header X-51D-Tablet %[req.fhdr(User-Agent),51d(IsTablet)]
Here, two headers are created with 51Degrees data, X-51D-DeviceTypeMobileTablet
and X-51D-Tablet. Any number of headers can be created this way and can be
named anything. The User-Agent header is passed to the converter in
req.fhdr(User-Agent). 51d( ) invokes the 51degrees converter. It can be passed
up to five property names whose values should be returned. Values are returned
in the same order, separated by the 51degrees-property-separator configured
earlier.
If a property name can't be found the value 'NoData' is returned instead.
The free Lite data file contains information about screen size in pixels and
whether the device is a mobile. A full list of available properties is located
on the 51Degrees web site at:
https://51degrees.com/resources/property-dictionary.
Some properties are only available in the paid-for Premium and Enterprise
versions of 51Degrees. These data sets not only contain more properties but
are updated weekly or daily and contain signatures for 100,000s of different
device combinations. For more information see the data options comparison web
page:
https://51degrees.com/compare-data-options
2) How to install it
--------------------
To install haproxy, you can either copy the single resulting binary to the
place you want, or run :
$ sudo make install
If you're packaging it for another system, you can specify its root directory
in the usual DESTDIR variable.
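For example, to stage the installation into a packaging root (DESTDIR and
PREFIX are the usual Makefile variables; the paths below are only examples):

```shell
# Stage files under /tmp/pkgroot instead of the live filesystem
make install DESTDIR=/tmp/pkgroot PREFIX=/usr
```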
3) How to set it up
-------------------
There is some documentation in the doc/ directory :
- architecture.txt : this is the architecture manual. It is quite old and
does not tell about the nice new features, but it's still a good starting
point when you know what you want but don't know how to do it.
- configuration.txt : this is the configuration manual. It recalls a few
essential HTTP basic concepts, and details all the configuration file
syntax (keywords, units). It also describes the log and stats format. It
is normally always up to date. If you see that something is missing from
it, please report it as this is a bug. Please note that this file is
huge and that it's generally more convenient to review Cyril Bonté's
HTML translation online here :
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html
- haproxy-en.txt / haproxy-fr.txt : these are the old outdated docs. You
should never need them. If you do, then please report what you didn't
find in the other ones.
- gpl.txt / lgpl.txt : the copy of the licenses covering the software. See
the 'LICENSE' file at the top for more information.
- the rest is mainly for developers.
There are also a number of nice configuration examples in the "examples"
directory as well as on several sites and articles on the net which are linked
to from the haproxy web site.
4) How to report a bug
----------------------
It is possible that from time to time you'll find a bug. A bug is a case where
what you see is not what is documented. Otherwise it may be a design issue. If
you find that something is badly designed, please discuss it on the list (see
the "how to contribute" section below). If you feel like you're proceeding
right and haproxy doesn't obey, then first ask yourself whether it is possible
that nobody before you has ever encountered this issue. If that's unlikely,
then you probably have an issue in your setup. In case of doubt, please consult
the mailing list archives :
http://marc.info/?l=haproxy
Otherwise, please try to gather the maximum amount of information to help
reproduce the issue and send that to the mailing list :
haproxy@formilux.org
Please include your configuration and logs. You can mask your IP addresses and
passwords, we don't need them. But it's essential that you post your config if
you want people to guess what is happening.
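A minimal set of commands to collect this information might look like this
(a sketch; the configuration path is only an example):

```shell
# Exact version, build options and library versions
haproxy -vv

# Validate the configuration and report warnings and errors
haproxy -c -f /etc/haproxy/haproxy.cfg
```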
Also, keep in mind that haproxy is designed to NEVER CRASH. If you see it die
without any reason, then it definitely is a critical bug that must be reported
and urgently fixed. It has happened a couple of times in the past, essentially
on development versions running on new architectures. If you think your setup
is fairly common, then it is possible that the issue is totally unrelated.
Anyway, if that happens, feel free to contact me directly, as I will give you
instructions on how to collect a usable core file, and will probably ask for
other captures that you'll not want to share with the list.
5) How to contribute
--------------------
It is possible that you'll want to add a specific feature to satisfy your needs
or one of your customers'. Contributions are welcome, however I'm often very
picky about changes. I will generally reject patches that change massive parts
of the code, or that touch the core parts without any good reason if those
changes have not been discussed first.
The proper place to discuss your changes is the HAProxy Mailing List. There are
enough skilled readers to catch hazardous mistakes and to suggest improvements.
I trust a number of them enough to merge a patch if they say it's OK, so using
the list is the fastest way to get your code reviewed and merged. You can
subscribe to it by sending an empty e-mail at the following address :
haproxy+subscribe@formilux.org
If you have an idea about something to implement, *please* discuss it on the
list first. It has already happened several times that two persons did the same
thing simultaneously. This is a waste of time for both of them. It's also very
common to see some changes rejected because they're done in a way that will
conflict with future evolutions, or that does not leave a good feeling. It's
always unpleasant for the person who did the work, and it is unpleasant for me
too because I value people's time and efforts. That would not happen if these
were discussed first. There is no problem posting work in progress to the list,
it happens quite often in fact. Also, don't waste your time with the doc when
submitting patches for review, only add the doc with the patch you consider
ready to merge.
Another important point concerns code portability. Haproxy requires gcc as the
C compiler, and may or may not work with other compilers. However it's known
to build using gcc 2.95 or any later version. As such, it is important to keep
in mind that certain facilities offered by recent versions must not be used in
the code :
- declarations mixed in the code (requires gcc >= 3.x)
- GCC builtins without checking for their availability based on version and
architecture ;
- assembly code without any alternate portable form for other platforms
- use of stdbool.h, "bool", "false", "true" : simply use "int", "0", "1"
- in general, anything which requires C99 (such as declaring variables in
"for" statements)
Since most of these restrictions are just a matter of coding style, it is
normally not a problem to comply.
If your work is very confidential and you can't publicly discuss it, you can
also mail me directly about it, but your mail may be waiting several days in
the queue before you get a response.
If you'd like a feature to be added but you think you don't have the skills to
implement it yourself, you should follow these steps :
1. discuss the feature on the mailing list. It is possible that someone
else has already implemented it, or that someone will tell you how to
proceed without it, or even why not to do it. It is also possible that
in fact it's quite easy to implement and people will guide you through
the process. That way you'll finally have YOUR patch merged, providing
the feature YOU need.
2. if you really can't code it yourself after discussing it, then you may
consider contacting someone to do the job for you. Some people on the
list might sometimes be OK with trying to do it.
Note to contributors: it's very handy when patches come with a properly
formatted subject. There are 3 criteria of particular importance in any patch :
- its nature (is it a fix for a bug, a new feature, an optimization, ...)
- its importance, which generally reflects the risk of merging/not merging it
- what area it applies to (eg: http, stats, startup, config, doc, ...)
It's important to make these 3 criteria easy to spot in the patch's subject,
because it's the first (and sometimes the only) thing which is read when
reviewing patches to find which ones need to be backported to older versions.
Specifically, bugs must be clearly easy to spot so that they're never missed.
Any patch fixing a bug must have the "BUG" tag in its subject. Most common
patch types include :
- BUG fix for a bug. The severity of the bug should also be indicated
when known. Similarly, if a backport is needed to older versions,
it should be indicated on the last line of the commit message. If
the bug has been identified as a regression brought by a specific
patch or version, this indication will be appreciated too. New
maintenance releases are generally emitted when a few of these
patches are merged.
- CLEANUP code cleanup, silencing of warnings, etc... theoretically no impact.
These patches will rarely be seen in stable branches, though they
may appear when they remove some annoyance or when they make
backporting easier. By nature, a cleanup is always minor.
- REORG code reorganization. Some blocks may be moved to other places,
some important checks might be swapped, etc... These changes
always present a risk of regression. For this reason, they should
never be mixed with any bug fix nor functional change. Code is
only moved as-is. Indicating the risk of breakage is highly
recommended.
- BUILD updates or fixes for build issues. Changes to makefiles also fall
into this category. The risk of breakage should be indicated if
known. It is also appreciated to indicate what platforms and/or
configurations were tested after the change.
- OPTIM some code was optimised. Sometimes if the regression risk is very
low and the gains significant, such patches may be merged in the
stable branch. Depending on the amount of code changed or replaced
and the level of trust the author has in the change, the risk of
regression should be indicated.
- RELEASE release of a new version (development or stable).
- LICENSE licensing updates (may impact distro packagers).
When the patch cannot be categorized, it's best not to put any tag. This is
commonly the case for new features, which development versions are mostly made
of.
Additionally, the importance of the patch should be indicated when known. A
single upper-case word is preferred, among :
- MINOR minor change, very low risk of impact. It is often the case for
code additions that don't touch live code. For a bug, it generally
indicates an annoyance, nothing more.
- MEDIUM medium risk, may cause unexpected regressions of low importance or
which may quickly be discovered. For a bug, it generally indicates
something odd which requires changing the configuration in an
undesired way to work around the issue.
- MAJOR major risk of hidden regression. This happens when I rearrange
large parts of code, when I play with timeouts, with variable
initializations, etc... We should only exceptionally find such
patches in stable branches. For a bug, it indicates severe
reliability issues for which workarounds are identified with or
without performance impacts.
- CRITICAL medium-term reliability or security is at risk and workarounds,
if they exist, might not always be acceptable. An upgrade is
absolutely required. A maintenance release may be emitted even if
only one of these bugs is fixed. Note that this tag is only used
with bugs. Such patches must indicate what is the first version
affected, and if known, the commit ID which introduced the issue.
If this criterion doesn't apply, it's best not to put it. For instance, most
doc updates and most examples or test files are just added or updated without
any need to qualify a level of importance.
The area the patch applies to is quite important, because some areas are known
to be similar in older versions, suggesting a backport might be desirable, and
conversely, some areas are known to be specific to one version. When the tag is
used alone, uppercase is preferred for readability, otherwise lowercase is fine
too. The following tags are suggested, but the list is not exhaustive :
- doc documentation updates or fixes. No code is affected, no need to
upgrade. These patches can also be sent right after a new feature,
to document it.
- examples example files. Be careful, sometimes these files are packaged.
- tests regression test files. No code is affected, no need to upgrade.
- init initialization code, arguments parsing, etc...
- config configuration parser, mostly used when adding new config keywords
- http the HTTP engine
- stats the stats reporting engine as well as the stats socket CLI
- checks the health checks engine (eg: when adding new checks)
- acl the ACL processing core or some ACLs from other areas
- peers the peer synchronization engine
- listeners everything related to incoming connection settings
- frontend everything related to incoming connection processing
- backend everything related to LB algorithms and server farm
- session session processing and flags (very sensitive, be careful)
- server server connection management, queueing
- proxy proxy maintenance (start/stop)
- log log management
- poll any of the pollers
- halog the halog sub-component in the contrib directory
- contrib any addition to the contrib directory
Other names may be invented when more precise indications are meaningful, for
instance : "cookie" which indicates cookie processing in the HTTP core. Last,
indicating the name of the affected file is also a good way to quickly spot
changes. Many commits were already tagged with "stream_sock" or "cfgparse" for
instance.
It is desired that AT LEAST one of the 3 criteria tags is reported in the patch
subject. Ideally, all 3 would be present most of the time. The first two
criteria should
be present before a first colon (':'). If both are present, then they should be
delimited with a slash ('/'). The 3rd criterion (area) should appear next, also
followed by a colon. Thus, all of the following messages are valid :
Examples of messages :
- DOC: document options forwardfor to logasap
- DOC/MAJOR: reorganize the whole document and change indenting
- BUG: stats: connection reset counters must be plain ascii, not HTML
- BUG/MINOR: stats: connection reset counters must be plain ascii, not HTML
- MEDIUM: checks: support multi-packet health check responses
- RELEASE: Released version 1.4.2
- BUILD: stats: stdint is not present on solaris
- OPTIM/MINOR: halog: make fgets parse more bytes by blocks
- REORG/MEDIUM: move syscall redefinition to specific places
Please do not use square brackets anymore around the tags, because they give me
more work when merging patches. By default I'm asking Git to keep them but this
causes trouble when patches are prefixed with the [PATCH] tag because in order
not to store it, I have to hand-edit the patches. So as of now, I will ask Git
to remove whatever is located between square brackets, which implies that any
subject formatted the old way will have its tag stripped out.
In fact, one of the only square bracket tags that still makes sense is '[RFC]'
at the beginning of the subject, when you're asking for someone to review your
change before getting it merged. If the patch is OK to be merged, then I can
merge it as-is and the '[RFC]' tag will automatically be removed. If you don't
want it to be merged at all, you can simply state it in the message, or use an
alternate '[WIP]' tag ("work in progress").
The tags are not rigid, follow your intuition first, anyway I reserve the right
to change them when merging the patch. It may happen that a same patch has a
different tag in two distinct branches. The reason is that a bug in one branch
may just be a cleanup in the other one because the code cannot be triggered.
For a more efficient interaction between the mainline code and your code, I can
only strongly encourage you to try the Git version control system :
http://git-scm.com/
It's very fast, lightweight and lets you undo/redo your work as often as you
want, without making your mistakes visible to the rest of the world. It will
definitely help you contribute quality code and take other people's feedback
into consideration. In order to clone the HAProxy Git repository :
$ git clone http://git.haproxy.org/git/haproxy-1.5.git (stable 1.5)
$ git clone http://git.haproxy.org/git/haproxy.git/ (development)
If you decide to use Git for your developments, then your commit messages will
have the subject line in the format described above, then the whole description
of your work (mainly why you did it) will be in the body. You can directly send
your commits to the mailing list, the format is convenient to read and process.
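A typical workflow for preparing a patch for the list might look like this
(a sketch, assuming git send-email is configured; the subject line below is a
made-up example following the tagging rules described above):

```shell
# Commit with a properly formatted subject line
git commit -s -m "BUG/MINOR: stats: fix a hypothetical example issue"

# Export the last commit as a patch file
git format-patch -1

# Send it to the list (requires a configured git send-email)
git send-email --to haproxy@formilux.org 0001-*.patch
```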
-- end