When trying to display an invalid request or response we received,
we must at least check that we have identified something that looks
like the start of a message, otherwise we can dereference a NULL pointer.
Shame on me, I didn't correctly document the "ignore-persist" statement
(I was convinced I had used it like this in my tests, which is not the case at all...).
This fixes the doc and updates the proxy keyword matrix to add "force-persist".
This is used to disable persistence depending on some conditions (for
example using an ACL matching static files or a specific User-Agent).
You can see it as a complement to "force-persist".
In the configuration file, the force-persist/ignore-persist declaration
order defines the priority of the rules.
Used with the "appsession" keyword, it can also help reduce memory usage,
as the session won't be hashed when persistence is ignored.
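A minimal configuration sketch (backend name, cookie name and paths are
purely illustrative) showing how such a rule can be combined with an ACL :
  backend dyn
      appsession PHPSESSID len 32 timeout 1h
      acl is_static path_beg /static /img /css
      # skip persistence for static files; the force-persist/ignore-persist
      # declaration order decides which rule wins when several match
      ignore-persist if is_static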
Released version 1.4.4 with the following main changes :
- [BUG] appsession should match the whole cookie name
- [CLEANUP] proxy: move PR_O_SSL3_CHK to options2 to release one flag
- [MEDIUM] backend: move the transparent proxy address selection to backend
- [MINOR] add very fast IP parsing functions
- [MINOR] add new tproxy flags for dynamic source address binding
- [MEDIUM] add ability to connect to a server from an IP found in a header
- [BUILD] config: last patch breaks build without CONFIG_HAP_LINUX_TPROXY
- [MINOR] http: make it possible to pretend keep-alive when doing close
- [MINOR] config: report "default-server" instead of "(null)" in error messages
I met a strange behaviour with appsession.
I first thought this was a regression due to one of my previous patches,
but after testing with a 1.3.15.12 version, I could also reproduce it.
To illustrate, the configuration contains :
appsession PHPSESSID len 32 timeout 1h
Then I call a short PHP script containing :
setcookie("P", "should not match")
When calling this script through haproxy, the cookie "P" matches the appsession rule :
Dumping hashtable 0x11f05c8
table[1572]: should+not+match
Shouldn't it be ignored ?
If you confirm, I'll send a patch for 1.3 and 1.4 branches to check that the
cookie length is equal to the appsession name length.
This is due to the comparison length, where the cookie length is taken into
account instead of the appsession name length. Using the appsession name
length would still allow ASPSESSIONIDXXX (plus a check that memcmp won't read
past the buffer size).
Also, while testing, I noticed that HEAD requests were not available for
URIs containing the appsession parameter. The 1.4.3 patch fixes a horrible
segfault I missed in a previous patch, triggered when appsession is not in
the configuration and HAProxy is compiled with DEBUG_HASH.
Some servers do not completely conform with RFC2616 requirements for
keep-alive when they receive a request with "Connection: close". More
specifically, they don't bother using chunked encoding, so the client
never knows whether the response is complete or not. One immediately
visible effect is that haproxy cannot maintain client connections alive.
The second issue is that truncated responses may be cached on clients
in case of network error or timeout.
Óscar Frías Barranco reported this issue on Tomcat 6.0.20, and
Patrik Nilsson with Jetty 6.1.21.
Cyril Bonté proposed this smart idea of pretending we run keep-alive
with the server and closing it at the last moment as is already done
with option forceclose. The advantage is that we only change one
emitted header but not the overall behaviour.
Since some servers such as nginx are able to close the connection
very quickly and save network packets when they're aware of the
close negotiation in advance, we don't enable this behaviour by
default.
"option http-pretend-keepalive" will have to be used for that, in
conjunction with "option http-server-close".
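For illustration only (a minimal sketch; the statements may also be placed
in a frontend or backend), the combination is expected to look like this :
  defaults
      mode http
      option http-server-close
      # pretend keep-alive towards the server, close at the last moment
      option http-pretend-keepalive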
Using get_ip_from_hdr2() we can look for occurrence #X or #-X and
extract the IP it contains. This is typically designed for use with
the X-Forwarded-For header.
Using "usesrc hdr_ip(name,occ)", it becomes possible to use the IP address
found in <name>, and possibly specify occurrence number <occ>, as the
source to connect to a server. This is possible both in a server and in
a backend's source statement. This is typically used to use the source
IP previously set by an upstream proxy.
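A hedged example of what such a configuration might look like (addresses
and header name are illustrative, and a build/kernel with transparent proxy
support is required) :
  backend be_app
      # connect to the server from the last IP found in X-Forwarded-For
      source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
      server srv1 192.168.1.10:80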
The transparent proxy address selection was set in the TCP connect function
which is not the most appropriate place since this function has limited
access to the parameters from which a source address could be derived.
Instead, now we determine the source address in backend.c:connect_server(),
right after calling assign_server_address() and we assign this address in
the session and pass it to the TCP connect function. This cannot be performed
in assign_server_address() itself because in some cases (transparent mode,
dispatch mode or http_proxy mode), we assign the address somewhere else.
This change will open the ability to bind to addresses extracted from many
other criteria (eg: from a header).
We'll need another flag in the 'options' member close to PR_O_TPXY_*,
but all the bits are already used, so let's move this easy one to options2
(which is already used for SQL checks).
Released version 1.4.3 with the following main changes :
- [CLEANUP] stats: remove printf format warning in stats_dump_full_sess_to_buffer()
- [MEDIUM] session: better fix for connection to servers with closed input
- [DOC] indicate in the doc how to bind to port ranges
- [BUG] backend: L7 hashing must not be performed on incomplete requests
- [TESTS] add a simple program to test connection resets
- [MINOR] cli: "show errors" should display "backend <NONE>" when backend was not used
- [MINOR] config: emit warnings when HTTP-only options are used in TCP mode
- [MINOR] config: allow "slowstart 0s"
- [BUILD] 'make tags' did not consider files ending in '.c'
- [MINOR] checks: add the ability to disable a server in the config
It's very common to see people getting trapped by HTTP-only options
which don't work in TCP proxies. To help them definitely get rid of
those configs, let's emit warnings for all options and statements
which are not supported in their mode. That includes all HTTP-only
options, the cookies and the stats.
In order to ensure internal config correctness, the options are also
disabled.
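As a purely hypothetical example, a configuration like the one below would
now trigger a warning and have the HTTP-only statement disabled :
  listen mysql_proxy 0.0.0.0:3307
      mode tcp
      option httpclose    # HTTP-only option, reported and disabled in TCP mode
      server db1 192.168.0.10:3306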
It was disturbing to see a backend name associated with a bad request
when this "backend" was in fact the frontend. Instead, we now display
"backend <NONE>" if the "backend" has no backend capability :
> show errors
[25/Mar/2010:06:44:25.394] frontend fe (#1): invalid request
src 127.0.0.1, session #0, backend <NONE> (#-1), server <NONE> (#-1)
request length 45 bytes, error at position 0:
Isidore Li reported an occasional segfault when using URL hashing, and
kindly provided backtraces and core files to help debugging.
The problem was triggered by reset connections before the URL was sent,
and was due to the same bug which was fixed by commit e45997661b
(connections were attempted in case of connection abort). While that
bug was already fixed, it appeared that the same segfault could be
triggered when URL hashing is configured in an HTTP backend when the
frontend runs in TCP mode and no URL was seen. It is totally abnormal
to try to hash a null URL, as well as to process any kind of L7 hashing
when a full request was not seen.
This additional fix now ensures that layer7 hashing is not performed on
incomplete requests.
The following patch fixed an issue but brought another one :
296897 [MEDIUM] connect to servers even when the input has already been closed
The new issue is that when a connection is inspected and aborted using
TCP inspect rules, now it is sent to the server before being closed. So
that test is not satisfying. A probably better way is not to prevent a
connection from establishing if only BF_SHUTW_NOW is set but BF_SHUTW
is not. That way, the BF_SHUTW flag is not set if the request has any
data pending, which still fixes the stats issue, but does not let any
empty connection pass through.
Also, as a safety measure, we extend buffer_abort() to automatically
disable the BF_AUTO_CONNECT flag. While it appears to always be OK,
it is by pure luck, so better safe than sorry.
This warning was first reported by Ross West on FreeBSD, then by
Holger Just on OpenSolaris. It also happens on 64bit Linux. However,
fixing the format to use long int makes the compiler complain on 32bit
Linux where ptrdiff_t is apparently different. Better to cast the pointer
difference to an int then.
Released version 1.4.2 with the following main changes :
- [CLEANUP] product branch update
- [DOC] Some more documentation cleanups
  - [BUG] clf logs segfault when capturing a non-existent header
- [OPTIM] config: only allocate check buffer when checks are enabled
- [MEDIUM] checks: support multi-packet health check responses
- [CLEANUP] session: remove duplicate test
  - [BUG] http: don't wait for response data to leave buffer if client has left
- [MINOR] proto_uxst: set accept_date upon accept() to the wall clock time
- [MINOR] stats: don't send empty lines in "show errors"
- [MINOR] stats: make the data dump function reusable for other purposes
- [MINOR] stats socket: add show sess <id> to dump details about a session
- [BUG] stats: connection reset counters must be plain ascii, not HTML
- [BUG] url_param hash may return a down server
- [MINOR] force null-termination of hostname
- [MEDIUM] connect to servers even when the input has already been closed
- [BUG] don't merge anonymous ACLs !
- [BUG] config: fix endless loop when parsing "on-error"
- [MINOR] http: don't mark a server as failed when it returns 501/505
- [OPTIM] checks: try to detect the end of response without polling again
- [BUG] checks: don't report an error when recv() returns an error after data
- [BUG] checks: don't abort when second poll returns an error
- [MINOR] checks: make shutdown() silently fail
- [BUG] http: fix truncated responses on chunk encoding when size divides buffer size
- [BUG] init: unconditionally catch SIGPIPE
- [BUG] checks: don't wait for a close to start parsing the response
To save a little memory, the check_data buffer is only allocated
for the servers that are checked.
[WT: this patch saves 80 MB of RAM on the test config with 5000 servers]
Cyril Bonté reported a regression introduced with very last changes
on the checks code, which causes failed checks if the server does
not close the connection in time. This happens on HTTP/1.1 checks or
on SMTP checks for instance.
This fix consists in restoring the old behaviour of parsing as soon
as something is available in the response buffer, and waiting for
more data if some are missing. This also helps release connections
earlier (eg: a GET check will not have to download the whole object).
Apparently some systems define MSG_NOSIGNAL but do not necessarily
check it (or maybe binaries are built somewhere and used on older
versions). There were reports of very recent FreeBSD setups causing
SIGPIPEs, while older ones catch the signal. Recent FreeBSD manpages
indeed define MSG_NOSIGNAL.
So let's now unconditionally catch the signal. It's useless not to do
it for the rare cases where it's not needed (linux 2.4 and below).
Bernhard Krieger reported truncated HTTP responses in presence of some
specific chunk-encoded data, and kindly offered complete traces of the
issue which made it easy to reproduce it.
Those traces showed that the chunks were of exactly 8192 bytes, chunk
size and CRLF included, which was exactly half the size of the buffer.
In this situation, the function http_chunk_skip_crlf() could erroneously
try to parse a CRLF after the chunk believing there were more data
pending, because the number of bytes present in the buffer was considered
instead of the number of remaining bytes to be parsed.
This happens when a server immediately closes the connection after
the response without lingering or when we close before the end of
the data. We get an RST which translates into a late error. We must
not declare an error without checking that the contents are OK.
Since the recv() call returns every time it succeeds, we always need
two calls with one intermediate poll before detecting the end of the response :
20:20:03.958207 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
20:20:03.958365 epoll_wait(3, {{EPOLLIN, {u32=7, u64=7}}}, 8, 1000) = 1
20:20:03.958543 gettimeofday({1268767203, 958626}, NULL) = 0
20:20:03.958694 recv(7, ""..., 7885, 0) = 0
20:20:03.958833 shutdown(7, 2 /* send and receive */) = 0
Let's read as long as we can, that way we can detect end of connections
in the same call, which is much more efficient especially for LBs with
hundreds of servers :
20:29:58.797019 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
20:29:58.797182 recv(7, ""..., 7885, 0) = 0
20:29:58.797356 shutdown(7, 2 /* send and receive */) = 0
We are seeing both real servers repeatedly going on- and off-line with
a period of tens of seconds. Packet tracing, stracing, and adding
debug code to HAProxy itself has revealed that the real servers are
always responding correctly, but HAProxy is sometimes receiving only
part of the response.
It appears that the real servers are sending the test page as three
separate packets. HAProxy receives the contents of one, two, or three
packets, apparently randomly. Naturally, the health check only
succeeds when all three packets' data are seen by HAProxy. If HAProxy
and the real servers are modified to use a plain HTML page for the
health check, the response is in the form of a single packet and the
checks do not fail.
(...)
I've added buffer and length variables to struct server, and allocated
space with the rest of the server initialisation.
(...)
It seems to be working fine in my tests, and handles check responses
that are bigger than the buffer.
Those two codes can be triggered on demand by client requests.
We must not fail a server on them.
Ideally we should ignore a certain number of status codes which
indicate neither life nor death.
The new anonymous ACL feature was buggy. If several are
declared, the first rule is always matched because all of them
share the same internal name (".noname"). Now we simply declare
them with an empty name and ensure that we disable any merging
when the name is empty.
Hi Willy,
Please find a small patch to prevent haproxy segfaulting when logging captured headers in CLF format.
Example config to reproduce the bug :
listen test :10080
log 127.0.0.1 local7 debug err
mode http
option httplog clf
capture request header NonExistantHeader len 16
--
Cyril Bonté
The BF_AUTO_CLOSE flag prevented a connection from establishing on
a server if the other side's input channel was already closed. This
is wrong because there may be pending data to be sent.
This was causing an issue with stats, as noticed and reported by
Cyril Bonté. Since the stats are now handled as a server, sometimes
concurrent accesses were causing one of the connections to send the
shutdown(write) before the connection to the stats function was
established, which aborted it early.
This fix causes the BF_AUTO_CLOSE flag to be checked only when the
connection on the outgoing stream interface has reached an established
state. That way we're still able to connect, send the request then
close.
Marcello Gorlani reported that at least on FreeBSD, a long hostname
was reported with garbage on the stats page. POSIX does not make it
mandatory for gethostname() to NULL-terminate the string in case of
truncation, and at least FreeBSD appears not to do it. So let's
force null-termination to keep safe.
Since the last documentation cleanups, I've found more typos that I kept
in a corner instead of sending you a mail just for one character :)
--
Cyril Bonté
today I've noticed that the stats page still displays v1.3 in the
"Updates" link, due to the PRODUCT_BRANCH value in version.h, then
it's maybe time to send you the result (notice that the patch updates
PRODUCT_BRANCH to "1.4").
--
Cyril Bonté
Jozef Hovan reported a bug sometimes causing a down server to be
used in url_param hashing mode.
This happens if the following conditions are met :
- the backend contains more than one server with at least two
  different weights
- all servers but one are down
- the server which is not down has a weight which does not divide
all the other ones
Example: 3 servers with 20,20,10, the first one remains up.
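For reference, a configuration matching those conditions might look like
this (addresses are illustrative) :
  backend app
      balance url_param id
      server s1 10.0.0.1:80 weight 20
      server s2 10.0.0.2:80 weight 20
      server s3 10.0.0.3:80 weight 10
With s2 and s3 down, only s1 remains up, yet the hash modulus can still
point to a map slot referring to one of the stopped servers, as explained
below.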
The problem is caused by an optimisation in recalc_server_map()
which only fills the first map slot when only one server is up,
because all LB algorithms are optimized to use entry zero when
only one server is up... All but url_param. When doing the modulus,
we can return a position which is greater than zero and use an
entry which still refers to a server which has since been stopped.
One solution could be to optimize the url_param algo to proceed like the
other ones, but the fact that it was wrong implies that we could reintroduce
the same bug later. So let's first correctly initialize the map in order
to avoid that trap.
When trying to spot some complex bugs, it's often needed to access
information on stuck sessions, which is quite difficult. This new
command helps one get detailed information about a session, with
flags, timers, states, etc... The buffer data are not dumped yet.
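Assuming the stats socket is enabled in the global section, for example :
  global
      stats socket /var/run/haproxy.stat
one can then send "show sess" on that socket to list the current sessions,
and "show sess <id>" to dump the detailed state of a single one.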