Released version 1.4.5 with the following main changes :
- [DOC] report minimum kernel version for tproxy in the Makefile
- [MINOR] add the "ignore-persist" option to conditionally ignore persistence
- [DOC] add the "ignore-persist" option to conditionally ignore persistence
- [DOC] fix ignore-persist/force-persist documentation
- [BUG] cttproxy: socket fd leakage in check_cttproxy_version
- [DOC] doc/configuration.txt: fix typos
- [MINOR] option http-pretend-keepalive is both for FEs and BEs
- [MINOR] fix possible crash in debug mode with invalid responses
- [MINOR] halog: add support for statistics on status codes
- [OPTIM] halog: use a faster zero test in fgets()
- [OPTIM] halog: minor speedup by using unlikely()
- [OPTIM] halog: speed up fgets2-64 by about 10%
- [DOC] refresh the README file and merge the CONTRIB file into it
- [MINOR] acl: support loading values from files
- [MEDIUM] ebtree: upgrade to version 6.0
- [MINOR] acl trees: add flags and union members to store values in trees
- [MEDIUM] acl: add ability to insert patterns in trees
- [MEDIUM] acl: add tree-based lookups of exact strings
- [MEDIUM] acl: add tree-based lookups of networks
- [MINOR] acl: ignore empty lines and comments in pattern files
- [MINOR] stick-tables: add support for "stick on hdr"
It is now possible to stick on an IP address found in an HTTP header. Right
now only the last occurrence of the header can be used, which is generally
enough for most uses. Also, the header extraction rule only knows how to
convert the header to IP. Later it will be usable as a plain string with
an implicit conversion, and the syntax will not change.
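A minimal sketch of such a configuration (the header name, table size and
server addresses below are examples, not taken from the patch):

```haproxy
backend app
    # store one entry per client IP extracted from the header
    stick-table type ip size 200k expire 30m
    # stick on the IP found in the last X-Forwarded-For occurrence
    stick on hdr(X-Forwarded-For)
    server s1 192.168.0.1:80
    server s2 192.168.0.2:80
```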
Most often, pattern files used by ACLs will be produced by tools
which emit some comments (eg: geolocation lists). It's very annoying
to have to clean the files before using them, and it does not make
much sense to be able to support patterns we already can't input in
the config file. So this patch makes the pattern file loader skip
lines beginning with a sharp and the empty ones, and strips leading
spaces and tabs.
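As an illustration, a tool-generated pattern file such as this hypothetical
one can now be loaded as-is (the comment line, the empty line and the
indented entry are all handled):

```text
# GeoIP export -- lines starting with a sharp are skipped

10.0.0.0/8
	192.168.0.0/16
172.16.0.0/12
```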
Networks patterns loaded from files for longest match ACL testing
will now be arranged into a prefix tree. This is possible thanks to
the new prefix features in ebtree v6.0. Longest match testing is
slightly slower than exact data matching. However, the measured impact
of running at 42000 requests per second and testing whether the IP
address found in a header belongs to a list of 52000 networks or
not is 3% CPU (increase from 66% to 69%). This is low enough to
permit true geolocation based on huge tables.
Now if some ACL patterns are loaded from a file and the operation is
an exact string match, the data will be arranged in a tree, yielding
a significant performance boost on large data sets. Note that this
only works when matching is case-sensitive.
A new dedicated function, acl_lookup_str(), has been created for this
matching. It is called for every possible input data to test and it
looks the tree up for the data. Since the keywords are loosely typed,
we would have had to add a new column to all keywords to adjust the
function depending on the type. Instead, we just compare on the match
function. We call acl_lookup_str() when we could use acl_match_str().
The tree lookup is performed first, then the remaining patterns are
attempted if the tree returned nothing.
A quick test shows that when matching a header against a list of 52000
network names, haproxy uses 68% of one core on a core2-duo 3.2 GHz at
42000 requests per second, versus 66% without any rule, which means
only a 2% CPU increase for 52000 rules. Doing the same test without
the tree leads to 100% CPU at 6900 requests/s. Also it was possible
to run the same test at full speed with about 50 sets of 52000 rules
without any measurable performance drop.
The code is now ready to support loading patterns from files into trees. For
that, it will be required that the ACL keyword has a flag ACL_MAY_LOOKUP
and that the expr is case sensitive. When that is true, the pattern will
have a flag ACL_PAT_F_TREE_OK to indicate that it is possible to feed the
tree instead of a usual pattern if the parsing function is able to do this.
The tree's root is pre-initialized in the pattern's value so that the
function can easily find it. At that point, if the parsing function decides
to use the tree, it just sets ACL_PAT_F_TREE in the return flags so that
the caller knows the tree has been used and the pattern can be recycled.
That way it will be possible to load some patterns into the tree when it
is compatible, and other ones as linear linked lists. A good example of
this might be IPv4 network entries: right now we support holes in masks,
but this very rare feature is not compatible with binary lookup in trees.
So the parser will be able to decide itself whether the pattern can go to
the tree or not.
If we want to be able to match ACLs against a lot of possible values, we
need to put those values in trees. That will only work for exact matches,
which is normally just what is needed.
Right now, only IPv4 and string matching are planned, but others might come
later.
This version adds support for prefix-based matching of memory blocks,
as well as some code-size and performance improvements on the generic
code. It provides a prefix insertion and longest match which are
compatible with the rest of the common features (walk, duplicates,
delete, ...). This is typically used for network address matching. The
longest-match code is a bit slower than the original memory block
handling code, so they have not been merged together into generic
code. Still it's possible to perform about 10 million networks lookups
per second in a set of 50000, so this should be enough for most usages.
This version also fixes some bugs in parts that were not used, so there
is no need to backport them.
The "acl XXX -f <file>" syntax was supported but nothing was read from
the file. This is now possible. All lines are merged verbatim, even if
they contain spaces (useful for user-agents). There are shortcomings
though. The worst one is that error reporting is too approximate.
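For example, with a hypothetical file /etc/haproxy/bad-agents.lst
containing one User-Agent string per line, the new syntax looks like:

```haproxy
frontend www
    bind :80
    # load the list of patterns verbatim from the file
    acl bad_ua hdr(User-Agent) -f /etc/haproxy/bad-agents.lst
    block if bad_ua
```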
Patrick Mézard reported that it was a bit awkward to have the CONTRIB
and contrib entries in the source archive since those can conflict on
case-insensitive file systems. That made a good opportunity to refresh
the README file and to remove that old outdated file.
In cttproxy.c, the socket opened in check_cttproxy_version() is not closed
before the function returns. Although it is called only once, I think it is
better to close the socket.
A new idea came up to detect the presence of a null byte in a word.
It saves several operations compared to the previous one, and eliminates
the jumps (about 6 instructions which can run 2-by-2 in parallel).
This sole optimisation improved the line count speed by about 30%.
When trying to display an invalid request or response we received,
we must at least check that we have identified something looking
like a start of message, otherwise we can dereference a NULL pointer.
Shame on me, I didn't correctly document the "ignore-persist" statement
(convinced I used it like this in my tests, which is not the case at all...)
This fixes the doc and updates the proxy keyword matrix to add "force-persist".
This is used to disable persistence depending on some conditions (for
example using an ACL matching static files or a specific User-Agent).
You can see it as a complement to "force-persist".
In the configuration file, the force-persist/ignore-persist declaration
order defines the rules' priority.
Used with the "appsession" keyword, it can also help reduce memory usage,
as the session won't be hashed when persistence is ignored.
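A minimal sketch combining the two statements (the ACL, cookie and server
names below are examples):

```haproxy
backend app
    # skip persistence on static content, enforce it everywhere else
    acl is_static path_beg /static /img /css
    ignore-persist if is_static
    force-persist  if !is_static
    cookie SRV insert indirect
    server s1 192.168.0.1:80 cookie s1
    server s2 192.168.0.2:80 cookie s2
```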
Released version 1.4.4 with the following main changes :
- [BUG] appsession should match the whole cookie name
- [CLEANUP] proxy: move PR_O_SSL3_CHK to options2 to release one flag
- [MEDIUM] backend: move the transparent proxy address selection to backend
- [MINOR] add very fast IP parsing functions
- [MINOR] add new tproxy flags for dynamic source address binding
- [MEDIUM] add ability to connect to a server from an IP found in a header
- [BUILD] config: last patch breaks build without CONFIG_HAP_LINUX_TPROXY
- [MINOR] http: make it possible to pretend keep-alive when doing close
- [MINOR] config: report "default-server" instead of "(null)" in error messages
I met a strange behaviour with appsession.
I first thought this was a regression due to one of my previous patches,
but after testing with a 1.3.15.12 version, I also could reproduce it.
To illustrate, the configuration contains :
appsession PHPSESSID len 32 timeout 1h
Then I call a short PHP script containing :
setcookie("P", "should not match")
When calling this script through haproxy, the cookie "P" matches the appsession rule:
Dumping hashtable 0x11f05c8
table[1572]: should+not+match
Shouldn't it be ignored ?
If you confirm, I'll send a patch for 1.3 and 1.4 branches to check that the
cookie length is equal to the appsession name length.
This is due to the comparison length, where the cookie length is taken into
account instead of the appsession name length. Using the appsession name
length would allow ASPSESSIONIDXXX (plus a check that memcmp won't read
past the end of the buffer).
Also, while testing, I noticed that HEAD requests were not available for
URIs containing the appsession parameter. The 1.4.3 patch fixes a horrible
segfault I missed in a previous patch when appsession is not in the
configuration and HAProxy is compiled with DEBUG_HASH.
Some servers do not completely conform with RFC2616 requirements for
keep-alive when they receive a request with "Connection: close". More
specifically, they don't bother using chunked encoding, so the client
never knows whether the response is complete or not. One immediately
visible effect is that haproxy cannot maintain client connections alive.
The second issue is that truncated responses may be cached on clients
in case of network error or timeout.
Óscar Frías Barranco reported this issue on Tomcat 6.0.20, and
Patrik Nilsson with Jetty 6.1.21.
Cyril Bonté proposed this smart idea of pretending we run keep-alive
with the server and closing it at the last moment as is already done
with option forceclose. The advantage is that we only change one
emitted header but not the overall behaviour.
Since some servers such as nginx are able to close the connection
very quickly and save network packets when they're aware of the
close negotiation in advance, we don't enable this behaviour by
default.
"option http-pretend-keepalive" will have to be used for that, in
conjunction with "option http-server-close".
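The combination then looks like this sketch (the backend name and address
are examples):

```haproxy
backend tomcat
    # close the server connection ourselves, but announce keep-alive
    # so the server emits a correctly framed (chunked) response
    option http-server-close
    option http-pretend-keepalive
    server s1 192.168.0.1:8080
```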
Using get_ip_from_hdr2() we can look for occurrence #X or #-X and
extract the IP it contains. This is typically designed for use with
the X-Forwarded-For header.
Using "usesrc hdr_ip(name,occ)", it becomes possible to use the IP address
found in <name>, and possibly specify occurrence number <occ>, as the
source to connect to a server. This is possible both in a server and in
a backend's source statement. This is typically used to use the source
IP previously set by an upstream proxy.
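For instance, to connect to the server from the address found in the last
X-Forwarded-For occurrence (addresses are examples):

```haproxy
backend app
    # bind to the IP from the last X-Forwarded-For value (occurrence -1)
    source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
    server s1 192.168.0.1:80
```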
The transparent proxy address selection was set in the TCP connect function
which is not the most appropriate place since this function has limited
access to the parameters from which a source address could be derived.
Instead, now we determine the source address in backend.c:connect_server(),
right after calling assign_server_address() and we assign this address in
the session and pass it to the TCP connect function. This cannot be performed
in assign_server_address() itself because in some cases (transparent mode,
dispatch mode or http_proxy mode), we assign the address somewhere else.
This change will open the ability to bind to addresses extracted from many
other criteria (eg: from a header).
We'll need another flag in the 'options' member close to PR_O_TPXY_*,
and all are used, so let's move this easy one to options2 (which are
already used for SQL checks).
Released version 1.4.3 with the following main changes :
- [CLEANUP] stats: remove printf format warning in stats_dump_full_sess_to_buffer()
- [MEDIUM] session: better fix for connection to servers with closed input
- [DOC] indicate in the doc how to bind to port ranges
- [BUG] backend: L7 hashing must not be performed on incomplete requests
- [TESTS] add a simple program to test connection resets
- [MINOR] cli: "show errors" should display "backend <NONE>" when backend was not used
- [MINOR] config: emit warnings when HTTP-only options are used in TCP mode
- [MINOR] config: allow "slowstart 0s"
- [BUILD] 'make tags' did not consider files ending in '.c'
- [MINOR] checks: add the ability to disable a server in the config
It's very common to see people getting trapped by HTTP-only options
which don't work in TCP proxies. To help them definitely get rid of
those configs, let's emit warnings for all options and statements
which are not supported in their mode. That includes all HTTP-only
options, the cookies and the stats.
In order to ensure internal config correctness, the options are also
disabled.
It was disturbing to see a backend name associated with a bad request
when this "backend" was in fact the frontend. Instead, we now display
"backend <NONE>" if the "backend" has no backend capability:
> show errors
[25/Mar/2010:06:44:25.394] frontend fe (#1): invalid request
src 127.0.0.1, session #0, backend <NONE> (#-1), server <NONE> (#-1)
request length 45 bytes, error at position 0:
Isidore Li reported an occasional segfault when using URL hashing, and
kindly provided backtraces and core files to help debugging.
The problem was triggered by reset connections before the URL was sent,
and was due to the same bug which was fixed by commit e45997661b
(connections were attempted in case of connection abort). While that
bug was already fixed, it appeared that the same segfault could be
triggered when URL hashing is configured in an HTTP backend when the
frontend runs in TCP mode and no URL was seen. It is totally abnormal
to try to hash a null URL, as well as to process any kind of L7 hashing
when a full request was not seen.
This additional fix now ensures that layer7 hashing is not performed on
incomplete requests.
The following patch fixed an issue but brought another one:
296897 [MEDIUM] connect to servers even when the input has already been closed
The new issue is that when a connection is inspected and aborted using
TCP inspect rules, now it is sent to the server before being closed. So
that test is not satisfying. A probably better way is not to prevent a
connection from establishing if only BF_SHUTW_NOW is set but BF_SHUTW
is not. That way, the BF_SHUTW flag is not set if the request has any
data pending, which still fixes the stats issue, but does not let any
empty connection pass through.
Also, as a safety measure, we extend buffer_abort() to automatically
disable the BF_AUTO_CONNECT flag. While it appears to always be OK,
it is by pure luck, so better safe than sorry.
This warning was first reported by Ross West on FreeBSD, then by
Holger Just on OpenSolaris. It also happens on 64bit Linux. However,
fixing the format to use long int complains on 32bit Linux where
ptrdiff_t is apparently different. Better cast the pointer difference
to an int then.
Released version 1.4.2 with the following main changes :
- [CLEANUP] product branch update
- [DOC] Some more documentation cleanups
- [BUG] clf logs segfault when capturing a non-existent header
- [OPTIM] config: only allocate check buffer when checks are enabled
- [MEDIUM] checks: support multi-packet health check responses
- [CLEANUP] session: remove duplicate test
- [BUG] http: don't wait for response data to leave buffer if client has left
- [MINOR] proto_uxst: set accept_date upon accept() to the wall clock time
- [MINOR] stats: don't send empty lines in "show errors"
- [MINOR] stats: make the data dump function reusable for other purposes
- [MINOR] stats socket: add show sess <id> to dump details about a session
- [BUG] stats: connection reset counters must be plain ascii, not HTML
- [BUG] url_param hash may return a down server
- [MINOR] force null-termination of hostname
- [MEDIUM] connect to servers even when the input has already been closed
- [BUG] don't merge anonymous ACLs !
- [BUG] config: fix endless loop when parsing "on-error"
- [MINOR] http: don't mark a server as failed when it returns 501/505
- [OPTIM] checks: try to detect the end of response without polling again
- [BUG] checks: don't report an error when recv() returns an error after data
- [BUG] checks: don't abort when second poll returns an error
- [MINOR] checks: make shutdown() silently fail
- [BUG] http: fix truncated responses on chunk encoding when size divides buffer size
- [BUG] init: unconditionally catch SIGPIPE
- [BUG] checks: don't wait for a close to start parsing the response
To save a little memory, the check_data buffer is only allocated
for the servers that are checked.
[WT: this patch saves 80 MB of RAM on the test config with 5000 servers]
Cyril Bonté reported a regression introduced with the very last changes
on the checks code, which causes failed checks if the server does not
close the connection in time. This happens on HTTP/1.1 checks or
on SMTP checks for instance.
This fix consists in restoring the old behaviour of parsing as soon
as something is available in the response buffer, and waiting for
more data if some are missing. This also helps releasing connections
earlier (eg: a GET check will not have to download the whole object).
Apparently some systems define MSG_NOSIGNAL but do not necessarily
check it (or maybe binaries are built somewhere and used on older
versions). There were reports of very recent FreeBSD setups causing
SIGPIPEs, while older ones catch the signal. Recent FreeBSD manpages
indeed define MSG_NOSIGNAL.
So let's now unconditionally catch the signal. It's useless not to do
it for the rare cases where it's not needed (linux 2.4 and below).
Bernhard Krieger reported truncated HTTP responses in presence of some
specific chunk-encoded data, and kindly offered complete traces of the
issue which made it easy to reproduce it.
Those traces showed that the chunks were of exactly 8192 bytes, chunk
size and CRLF included, which was exactly half the size of the buffer.
In this situation, the function http_chunk_skip_crlf() could erroneously
try to parse a CRLF after the chunk believing there were more data
pending, because the number of bytes present in the buffer was considered
instead of the number of remaining bytes to be parsed.