Add a new converter with the following prototypes :
map(<map_file>[,<default_value>])
map_<match_type>(<map_file>[,<default_value>])
map_<match_type>_<output_type>(<map_file>[,<default_value>])
It searches for the input value in <map_file> using the <match_type>
matching method, and returns the associated value converted to the type
<output_type>. If the input value cannot be found in the <map_file>,
the converter returns the <default_value>. If the <default_value> is
not set, the converter fails and acts as if no input value could be
fetched. If the <match_type> is not set, it defaults to "str".
Likewise, if the <output_type> is not set, it defaults to "str". For
convenience, the "map" keyword is an alias for "map_str" and maps a
string to another string. The following array contains the list of
all the map* converters.
+---------------+---------+-------------+------------+
|      `-_ out  |         |             |            |
| input   `-_   |   str   |     int     |     ip     |
|  / match   `-_|         |             |            |
+---------------+---------+-------------+------------+
| str   / str   | map_str | map_str_int | map_str_ip |
| str   / sub   | map_sub | map_sub_int | map_sub_ip |
| str   / dir   | map_dir | map_dir_int | map_dir_ip |
| str   / dom   | map_dom | map_dom_int | map_dom_ip |
| str   / end   | map_end | map_end_int | map_end_ip |
| str   / reg   | map_reg | map_reg_int | map_reg_ip |
| int   / int   | map_int | map_int_int | map_int_ip |
| ip    / ip    | map_ip  | map_ip_int  | map_ip_ip  |
+---------------+---------+-------------+------------+
The names are intentionally chosen to reflect the same match methods
as ACLs use.
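For example, a minimal sketch of how one of these converters could be used
(the map file path, its contents and the header name are hypothetical) :

    # /etc/haproxy/geo.map could contain lines such as :
    #   10.0.0.0/8      internal
    #   192.168.0.0/16  lab
    http-request set-header X-Zone %[src,map_ip(/etc/haproxy/geo.map,unknown)]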
We've had the feature for log-format, unique-id-format and add-header for
a while now. It has just been implemented for ACLs but some doc was still
lacking.
The syntax of these new commands is:
enable agent <backend>/<server>
disable agent <backend>/<server>
These commands allow temporarily stopping and subsequently
re-starting an auxiliary agent check. The effect of this is as follows:
New checks are only initialised when the agent is in the enabled state.
Thus, "disable agent" will prevent any new agent checks from being
initiated until the agent is re-enabled using "enable agent".
When an agent is disabled, the processing of an auxiliary agent check
that was initiated while the agent was still enabled is as follows: all
results that would alter the weight, specifically "drain" or a weight
returned by the agent, are ignored. The processing of the agent check is
otherwise unchanged.
The motivation for this feature is to allow the weight changing effects
of the agent checks to be paused to allow the weight of a server to be
configured using set weight without being overridden by the agent.
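A possible sequence on the stats socket could look like the following
(the socket path, backend and server names are only illustrative) :

    $ echo "disable agent www/web1" | socat stdio /var/run/haproxy.sock
    $ echo "set weight www/web1 50" | socat stdio /var/run/haproxy.sock
    ...
    $ echo "enable agent www/web1" | socat stdio /var/run/haproxy.sock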
Signed-off-by: Simon Horman <horms@verge.net.au>
In the case where agent-port is used and the agent check is a
secondary check, do not mark a server as down
if the agent becomes unavailable.
In this configuration the agent should only cause a server to be marked
as down if the agent returns "fail", "stopped" or "down".
Signed-off-by: Simon Horman <horms@verge.net.au>
Allow an auxiliary agent check to be run independently of the
regular health check. This is enabled by the agent-check
server setting.
The agent-port, which specifies the TCP port to use for the agent's
connections, is required.
The agent-inter, which specifies the interval between agent checks and
the timeout of agent checks, is optional. If not set, the value for regular
checks is used.
e.g.
server web1_1 127.0.0.1:80 check agent-port 10000
If either the health or agent check determines that a server is down
then it is marked as being down, otherwise it is marked as being up.
An agent health check is performed by opening a TCP socket and reading an
ASCII string. The string should have one of the following forms:
* An ASCII representation of a positive integer percentage.
e.g. "75%"
Values in this format will set the weight proportional to the initial
weight of a server as configured when haproxy starts.
* The string "drain".
This will cause the weight of a server to be set to 0, and thus it
will not accept any new connections other than those that are
accepted via persistence.
* The string "down", optionally followed by a description string.
Mark the server as down and log the description string as the reason.
* The string "stopped", optionally followed by a description string.
This currently has the same behaviour as "down".
* The string "fail", optionally followed by a description string.
This currently has the same behaviour as "down".
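For illustration only, on each connection the agent might thus answer with a
single line such as one of the following (the description texts are made up) :

    75%
    drain
    down out of disk space
    stopped maintenance window
    fail backend unreachable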
Signed-off-by: Simon Horman <horms@verge.net.au>
Remove option lb-agent-chk and thus the facility to configure
a stand-alone agent health check. This feature was added by
"MEDIUM: checks: Add agent health check". It will be replaced
by subsequent patches with features that allow an agent check
to be run as either a secondary check, along with any of the existing
checks, or as part of an http check with the status returned
in an HTTP header.
This patch does not entirely revert "MEDIUM: checks: Add agent health
check". The infrastructure it provides to parse the results of an
agent health check remains and will be re-used by the planned features
that are mentioned above.
Signed-off-by: Simon Horman <horms@verge.net.au>
Summary:
Added a document for hashing under internal docs explaining
hashing in haproxy along with the results of tests under the test
folder.
These documents together explain the motivation for adding
options for selecting the hashing algorithm, along with the option of
enabling or disabling avalanche.
This function was designed for haproxy while testing other functions
in the past. Initially it was not planned to be used, given the rather
uninteresting numbers it showed on real URL data : it is not as
smooth as the other ones. But later tests showed that the other ones
are extremely sensitive to the server count and the type of input data,
especially DJB2, which must not be used on numeric input. So in fact
this function is still a generally average performer and it can make
sense to merge it in the end, as it can provide an alternative to
sdbm+avalanche or djb2+avalanche for consistent hashing or when hashing
on numeric data such as a source IP address or a visitor identifier in
a URL parameter.
Summary:
Avalanche is supported not as a native hashing choice, but as a modifier
on the hashing function. Note that this means that possible configs
written after 1.5-dev4 using "hash-type avalanche" will get an informative
error instead. But as discussed on the mailing list it seems nobody ever
used it anyway, so let's fix it before the final 1.5 release.
The default values were selected for backward compatibility with previous
releases, as discussed on the mailing list, which means that the consistent
hashing will still apply the avalanche hash by default when no explicit
algorithm is specified.
Examples
(default) hash-type map-based
Map based hashing using sdbm without avalanche
(default) hash-type consistent
Consistent hashing using sdbm with avalanche
Additional Examples:
(a) hash-type map-based sdbm
Same as default for map-based above
(b) hash-type map-based sdbm avalanche
Map based hashing using sdbm with avalanche
(c) hash-type map-based djb2
Map based hashing using djb2 without avalanche
(d) hash-type map-based djb2 avalanche
Map based hashing using djb2 with avalanche
(e) hash-type consistent sdbm avalanche
Same as default for consistent above
(f) hash-type consistent sdbm
Consistent hashing using sdbm without avalanche
(g) hash-type consistent djb2
Consistent hashing using djb2 without avalanche
(h) hash-type consistent djb2 avalanche
Consistent hashing using djb2 with avalanche
Summary:
In testing at tumblr, we found that using djb2 hashing instead of the
default sdbm hashing resulted in better workload distribution to our backends.
This commit implements a change that allows the user to specify the hash
function they want to use. It does not limit itself to consistent hashing
scenarios.
The supported hash functions are sdbm (default), and djb2.
For a discussion of the feature and analysis, see mailing list thread
"Consistent hashing alternative to sdbm" :
http://marc.info/?l=haproxy&m=138213693909219
Note: This change does NOT alter existing behaviour; for instance,
an avalanche hash is still always applied before applying
consistent hashing.
The manpage refers to haproxy-en.txt, which is obsolete. Update the reference
to point to configuration.txt, together with the location on Debian systems.
Also capitalize "Debian".
Signed-off-by: Apollon Oikonomopoulos <apoikos@gmail.com>
Add a man section to every system call reference, giving users pointers to the
respective manpages.
Signed-off-by: Apollon Oikonomopoulos <apoikos@gmail.com>
This fetch method returns the response buffer length, similarly
to req.len for the request. Previously it was only possible
to rely on "res.payload(0,size) -m found" to find if at least
that amount of data was available, which was a bit tricky.
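A hedged example of how this could be used in a tcp-response rule (the
threshold is purely illustrative) :

    tcp-response inspect-delay 5s
    # wait until at least 16 bytes of the response are buffered before accepting
    tcp-response content accept if { res.len ge 16 }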
This new action immediately closes the connection with the server
when the condition is met. The first such rule executed ends the
rules evaluation. The main purpose of this action is to force a
connection to be finished between a client and a server after an
exchange when the application protocol expects some long timeouts
to elapse first. The goal is to eliminate idle connections which
take significant resources on servers with certain protocols.
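A sketch of what a configuration using this action might look like, assuming
the goal is simply to close the server connection once a response has started
(the rule itself is illustrative) :

    tcp-response inspect-delay 10s
    # force the server-side connection to be closed once response data is seen
    tcp-response content close if { res.len gt 0 }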
verifyhost allows you to specify a hostname that the remote server's
SSL certificate must match. Connections that don't match will be
closed with an SSL error.
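A minimal sketch of a server line using it (addresses, file names and the
expected hostname are placeholders) :

    server app1 192.168.0.10:443 ssl verify required ca-file /etc/haproxy/ca.pem verifyhost app1.example.com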
When using req.payload and res.payload to look for specific content at an
arbitrary location, we're often facing the problem of not knowing the input
buffer length. If the length argument is larger than the buffer length, the
function does not match, and if it is smaller, there is a risk of not getting
the expected content. This is especially true when looking for data in SOAP
requests.
So let's make some provisions for scanning the whole buffer by specifying a
length of 0 bytes. This greatly simplifies the processing of random-sized
input data.
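For instance, a hedged sketch scanning the whole request buffer for a SOAP
action (the pattern and ACL name are made up, and string sub-matching on the
binary payload sample is assumed here) :

    tcp-request inspect-delay 5s
    acl soap_get_status req.payload(0,0) -m sub GetOrderStatus
    tcp-request content accept if soap_get_status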
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
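As an illustration, a single hedged CLI line can now set several data types
at once (table name, key and values are made up) :

    $ echo "set table per_ip key 10.0.0.1 data.gpc0 1 data.conn_cnt 5" | \
        socat stdio /var/run/haproxy.sock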
sc_* sample fetches now take an optional parameter which allows looking up
the key in an alternate table. This is convenient to pass multiple pieces of
information for the same key at once (eg: have multiple gpc0 for the
same key, or support being fed complementary information from the CLI).
Example :
listen front
bind :8000
tcp-request content track-sc0 src table local-ip
http-response set-header src-id %[sc0_get_gpc0]+%[sc0_get_gpc0(global-ip)]
server dummy 127.0.0.1:8001
backend local-ip
stick-table size 1k type ip store gpc0
backend global-ip
stick-table size 1k type ip store gpc0
One very annoying issue when trying to extend the sticky counters beyond
the current 3 counters is that it requires a massive copy-paste of fetch
functions (we don't have to copy-paste code anymore), just so that the
fetch names exist.
So let's have an alternate form like "sc_*(num)" to allow passing the
counter number as an argument without having to redefine new fetch names.
The MAX_SESS_STKCTR macro defines the number of usable sticky counters,
which defaults to 3.
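A hedged sketch of the generic form, assuming a "per_ip" backend holding a
stick-table that stores http_req_rate (names and threshold are illustrative) :

    tcp-request content track-sc0 src table per_ip
    # generic form : the counter number is passed as an argument
    http-request deny if { sc_http_req_rate(0) gt 100 }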
Converts an integer supposed to contain a date since epoch to
a string representing this date in a format suitable for use
in HTTP header fields. If an offset value is specified, then
it is a number of seconds that is added to the date before the
conversion is operated. This is particularly useful to emit
Date header fields, Expires values in responses when combined
with a positive offset, or Last-Modified values when the
offset is negative.
Returns the current date as the epoch (number of seconds since 01/01/1970).
If an offset value is specified, then it is a number of seconds that is added
to the current date before returning the value. This is particularly useful
to compute relative dates, as both positive and negative offsets are allowed.
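Taken together, the fetch and the converter can for instance emit an Expires
header one hour in the future (a common sketch, the caching policy itself is
illustrative) :

    http-response set-header Expires %[date(3600),http_date]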
We now support having a comma-delimited converter list, which can start
right after the fetch keyword. The immediate benefit is that it allows
the use of converters in log-format expressions, for example :
set-header source-net %[src,ipmask(24)]
The parser is also slightly improved and should be more resilient against
configuration errors. Also, optional arguments in converters were mistakenly
not allowed till now, so this was fixed.
The max weight of a server is now 256, but SRV_UWGHT_MAX is still 255. As a result,
FWRR will not work well when a server's weight is 256. The description is as below:
There are some macros related to server's weight in include/types/server.h:
#define SRV_UWGHT_RANGE 256
#define SRV_UWGHT_MAX (SRV_UWGHT_RANGE - 1)
#define SRV_EWGHT_MAX (SRV_UWGHT_MAX * BE_WEIGHT_SCALE)
Since the weight of a server can reach 256 and BE_WEIGHT_SCALE equals 16,
the max eweight of a server should be 256*16 = 4096, which exceeds SRV_EWGHT_MAX
(SRV_UWGHT_MAX * BE_WEIGHT_SCALE = 255*16 = 4080). When a server
with weight 256 is inserted into the FWRR tree during initialization, its key value
should be SRV_EWGHT_MAX - s->eweight = 4080 - 4096 = -16, which wraps close
to UINT_MAX in unsigned arithmetic, so the server with the highest weight will
not be elected as the first server to process requests.
In addition, it is a better choice to compare against SRV_UWGHT_MAX than against
the magic number 256 when checking the weight. The max number of servers for the
round-robin algorithm is also updated.
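One way the macros could be adjusted so that the maximum user weight of 256
fits (a sketch of the intent rather than the exact patch) :

    #define SRV_UWGHT_RANGE 256
    #define SRV_UWGHT_MAX   SRV_UWGHT_RANGE
    #define SRV_EWGHT_MAX   (SRV_UWGHT_MAX * BE_WEIGHT_SCALE)   /* 256 * 16 = 4096 */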
Signed-off-by: Godbach <nylzhaowei@gmail.com>
s->req->prod->conn->addr.to.ss_family contains useful data only if
conn_get_to_addr() is called early. If that's not the case (nothing in the
configuration needs the destination address, like logs, transparent, ...)
then "set-tos" doesn't work.
Fix this by checking s->req->prod->conn->addr.from.ss_family instead.
Also fix a minor doc issue about set-tos in http-response.
Released version 1.5-dev19 with the following main changes :
- MINOR: stats: remove the autofocus on the scope input field
- BUG/MEDIUM: Fix crt-list file parsing error: filtered name was ignored.
- BUG/MEDIUM: ssl: EDH ciphers are not usable if no DH parameters present in pem file.
- BUG/MEDIUM: shctx: makes the code independent on SSL runtime version.
- MEDIUM: ssl: improve crt-list format to support negation
- BUG: ssl: fix crt-list for clients not supporting SNI
- MINOR: stats: show soft-stopped servers in different color
- BUG/MINOR: config: "source" does not work in defaults section
- BUG: regex: fix pcre compile error when using JIT
- MINOR: ssl: add pattern fetch 'ssl_c_sha1'
- BUG: ssl: send payload gets corrupted if tune.ssl.maxrecord is used
- MINOR: show PCRE version and JIT status in -vv
- BUG/MINOR: jit: don't rely on USE flag to detect support
- DOC: readme: add suggestion to link against static openssl
- DOC: examples: provide simplified ssl configuration
- REORG: tproxy: prepare the transparent proxy defines for accepting other OSes
- MINOR: tproxy: add support for FreeBSD
- MINOR: tproxy: add support for OpenBSD
- DOC: examples: provide an example of transparent proxy configuration for FreeBSD 8
- CLEANUP: fix minor typo in error message.
- CLEANUP: fix missing include <string.h> in proto/listener.h
- CLEANUP: protect checks.h from multiple inclusions
- MINOR: compression: acl "res.comp" and fetch "res.comp_algo"
- BUG/MINOR: http: add-header/set-header did not accept the ACL condition
- BUILD: mention in the Makefile that USE_PCRE_JIT is for libpcre >= 8.32
- BUG/MEDIUM: splicing is broken since 1.5-dev12
- BUG/MAJOR: acl: add implicit arguments to the resolve list
- BUG/MINOR: tcp: fix error reporting for TCP rules
- CLEANUP: peers: remove a bit of spaghetti to prepare for the next bugfix
- MINOR: stick-table: allow to allocate an entry without filling it
- BUG/MAJOR: peers: fix an overflow when syncing strings larger than 16 bytes
- MINOR: session: only call http_send_name_header() when changing the server
- MINOR: tcp: report the erroneous word in tcp-request track*
- BUG/MAJOR: backend: consistent hash can loop forever in certain circumstances
- BUG/MEDIUM: log: fix regression on log-format handling
- MEDIUM: log: report file name, line number, and directive name with log-format errors
- BUG/MINOR: cli: "clear table" did not work anymore without a key
- BUG/MINOR: cli: "clear table xx data.xx" does not work anymore
- BUG/MAJOR: http: compression still has defects on chunked responses
- BUG/MINOR: stats: fix confirmation links on the stats interface
- BUG/MINOR: stats: the status bar does not appear anymore after a change
- BUG/MEDIUM: stats: allocate the stats frontend also on "stats bind-process"
- BUG/MEDIUM: stats: fix a regression when dealing with POST requests
- BUG/MINOR: fix unterminated ACL array in compression
- BUILD: last fix broke non-linux platforms
- MINOR: init: indicate the SSL runtime version on -vv.
- BUG/MEDIUM: compression: the deflate algorithm must use global settings as well
- BUILD: stdbool is not portable (again)
- DOC: readme: add a small reminder about restrictions to respect in the code
- MINOR: ebtree: add new eb_next_dup/eb_prev_dup() functions to visit duplicates
- BUG/MINOR: acl: fix a double free during exit when using PCRE_JIT
- DOC: fix wrong copy-paste in the rspdel example
- MINOR: counters: make it easier to extend the amount of tracked counters
- MEDIUM: counters: add support for tracking a third counter
- MEDIUM: counters: add a new "gpc0_rate" counter in stick-tables
- BUG/MAJOR: http: always ensure response buffer has some room for a response
- MINOR: counters: add fetch/acl sc*_tracked to indicate whether a counter is tracked
- MINOR: defaults: allow REQURI_LEN and CAPTURE_LEN to be redefined
- MINOR: log: add a new flag 'L' for locally processed requests
- MINOR: http: add full-length header fetch methods
- MEDIUM: protocol: implement a "drain" function in protocol layers
- MEDIUM: http: add a new "http-response" ruleset
- MEDIUM: http: add the "set-nice" action to http-request and http-response
- MEDIUM: log: add a log level override value in struct session
- MEDIUM: http: add support for action "set-log-level" in http-request/http-response
- MEDIUM: http: add support for "set-tos" in http-request/http-response
- MEDIUM: http: add the "set-mark" action on http-request/http-response rules
- MEDIUM: tcp: add "tcp-request connection expect-proxy layer4"
- MEDIUM: acl: automatically detect the type of certain fetches
- MEDIUM: acl: remove a lot of useless ACLs that are equivalent to their fetches
- MEDIUM: acl: remove 15 additional useless ACLs that are equivalent to their fetches
- DOC: major reorg of ACL + sample fetch
- CLEANUP: http: remove the bogus urlp_ip ACL match
- MINOR: acl: add the new "env()" fetch method to retrieve an environment variable
- BUG/MINOR: acl: correctly consider boolean fetches when doing casts
- BUG/CRITICAL: fix a possible crash when using negative header occurrences
- DOC: update ROADMAP file
- MEDIUM: counters: use sc0/sc1/sc2 instead of sc1/sc2/sc3
- MEDIUM: stats: add proxy name filtering on the statistic page
It was a bit inconsistent to have gpc start at 0 and sc start at 1,
so make sc start at zero like gpc. No previous release was issued
with sc3 anyway, so no existing setup should be affected.
This is useful in order to take different actions across restarts without
touching the configuration (eg: soft-stop), or to pass some information
such as the local host name to the next hop.
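For instance, a hedged sketch (the variable and header names are only
illustrative) :

    # forward the local host name, taken from the environment, to the next hop
    http-request set-header X-Forwarded-Node %[env(HOSTNAME)]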
The split between ACL and sample fetch was a terrible mess in the doc,
as it caused all entries to be duplicated with most of them not easy to
find, some missing and some wrong.
The new approach consists in describing the sample fetch methods and
indicating the ACLs that are derived from these fetches. The doc is
much smaller (1500 lines added, 2200 removed, net gain = 700 lines)
and much clearer.
The description of the ACL mechanics was revamped to take account of
the latest evolutions and clearly describe the compatibility between
types of fetches and ACL patterns.
The deprecated keywords have been marked as such, though they still
appear in the examples given for various other keywords.
This configures the client-facing connection to receive a PROXY protocol
header before any byte is read from the socket. This is equivalent to
having the "accept-proxy" keyword on the "bind" line, except that using
the TCP rule allows the PROXY protocol to be accepted only for certain
IP address ranges using an ACL. This is convenient when multiple layers
of load balancers are passed through by traffic coming from public
hosts.
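A hedged example restricting the PROXY protocol to a known load balancer
network (the address range is made up) :

    frontend fe_main
        bind :80
        # only the front LB tier is expected to send a PROXY protocol header
        tcp-request connection expect-proxy layer4 if { src 10.0.0.0/8 }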
"set-mark" is used to set the Netfilter MARK on all packets sent to the
client to the value passed in <mark> on platforms which support it. This
value is an unsigned 32 bit value which can be matched by netfilter and
by the routing table. It can be expressed both in decimal or hexadecimal
format (prefixed by "0x"). This can be useful to force certain packets to
take a different route (for example a cheaper network path for bulk
downloads). This works on Linux kernels 2.6.32 and above and requires
admin privileges.
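For example, a hedged sketch marking bulk downloads so the routing table can
steer them onto a cheaper path (the mark value and paths are illustrative) :

    http-request set-mark 0x2 if { path_end .iso .img }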
This manipulates the TOS field of the IP header of outgoing packets sent
to the client. This can be used to set a specific DSCP traffic class based
on some request or response information. See RFC2474, 2597, 3260 and 4594
for more information.
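A hedged sketch (the TOS value and the condition are illustrative;
0x10 corresponds to IPTOS_LOWDELAY) :

    # mark interactive API traffic for low-delay treatment
    http-request set-tos 16 if { path_beg /api/ }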
Some users want to disable logging for certain non-important requests such as
stats requests or health-checks coming from other equipment. Other users want
to log with a higher importance (eg: notice) some special traffic (POST requests,
authenticated requests, requests coming from suspicious IPs) or some abnormally
large responses.
This patch responds to all these needs at once by adding a "set-log-level" action
to http-request/http-response. The 8 syslog levels are supported, as well as "silent"
to disable logging.
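Two hedged examples along those lines (the list file and conditions are
made up) :

    # silence requests coming from the monitoring hosts
    http-request set-log-level silent if { src -f /etc/haproxy/monitors.lst }
    # raise the level for authenticated POST requests
    http-request set-log-level notice if METH_POST { hdr(authorization) -m found }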
Some actions were clearly missing to process response headers. This
patch adds a new "http-response" ruleset which provides the following
actions :
- allow : stop evaluating http-response rules
- deny : stop and reject the response with a 502
- add-header : add a header in log-format mode
- set-header : set a header in log-format mode
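A hedged sketch of these actions in use (header names and conditions are
illustrative) :

    # reject (with a 502) responses advertising an unexpected server banner
    http-response deny if { res.hdr(server) -m sub squid }
    # otherwise tag the response with the node that served it
    http-response set-header X-Served-By %[env(HOSTNAME)]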
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.
This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
- req.fhdr, req.fhdr_cnt
- res.fhdr, res.fhdr_cnt
These ones do not stop at commas and return full-length header
values.
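For example (hedged; the custom header name is made up) :

    # copy the complete User-Agent value, commas included, into a custom header
    http-request set-header X-Full-UA %[req.fhdr(User-Agent)]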
People who use "option dontlog-normal" are bothered with redirects and
stats being logged and reported as errors in the logs ("PR" = proxy
blocked the request).
This patch introduces a new flag 'L' for when a request is locally
processed, which is not considered an error by the log filters. That
way we know a request was intercepted and processed by haproxy without
logging the line when "option dontlog-normal" is in effect.
We're often missing a third counter to track base, src and base+src at
the same time. Here we introduce track_sc3 to have this third counter.
It would be wise not to add many more counters because that slightly
increases the session size and processing time though the real issue
is more the declaration of the keywords in the code and in the doc.
This new pattern fetch returns the client certificate's SHA-1 fingerprint
(i.e. SHA-1 hash of DER-encoded certificate) in a binary chunk.
This can be useful to pass it to a server in a header or to stick a client
to a server across multiple SSL connections.
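A hedged example passing the fingerprint to the backend as a hex string
(the header name is made up) :

    http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex]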
Improve the crt-list file format to allow a rule to negate a certain SNI :
<crtfile> [[!]<snifilter> ...]
This can be useful when a domain supports a wildcard but you don't want to
deliver the wildcard cert for certain specific domains.
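For example, a hedged crt-list file could look like this (file and domain
names are made up) :

    # serve the wildcard cert for every subdomain except secure.example.com,
    # which gets its own dedicated certificate
    wildcard.example.com.pem   *.example.com !secure.example.com
    secure.example.com.pem     secure.example.com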