AIX wants string.h in signal.c (and is right to do so):
gcc -Iinclude -Wall -O2 -g -DTPROXY -DENABLE_POLL -DCONFIG_HAPROXY_VERSION=\"1.3.18\" -DCONFIG_HAPROXY_DATE=\"2009/05/10\" -c -o src/signal.o src/signal.c
src/signal.c: In function 'signal_init':
src/signal.c:32: warning: implicit declaration of function 'memset'
src/signal.c:32: warning: incompatible implicit declaration of built-in function 'memset'
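The fix simply consists in including the header that declares memset()
at the top of src/signal.c, i.e. something like:

    #include <string.h>   /* for memset() */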
Do not exit early at the first error found while checking configuration
validity. This particularly helps spot multiple wrong tracked server
names at once.
Try not to immediately exit on non-fatal errors while parsing a
listen section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument.
Try not to immediately exit on non-fatal errors while parsing the
global section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument. Some other errors, such as unresolved
server names, no longer make the parser exit too early either.
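The underlying idea is to accumulate an error status while parsing
instead of returning on the first problem. The sketch below only
illustrates the pattern with made-up names (it is not the actual
haproxy code):

    #include <stdio.h>

    /* illustrative sketch: parsing functions return a bitfield of error
     * flags instead of exiting, so that all non-fatal problems can be
     * reported in a single pass. */
    #define ERR_ALERT 0x1   /* an error was found, keep parsing  */
    #define ERR_FATAL 0x2   /* cannot continue, abort the parser */

    static int check_tracked_server(const char *name)
    {
        if (!*name) {
            fprintf(stderr, "config error: unknown tracked server name\n");
            return ERR_ALERT;
        }
        return 0;
    }

    static int check_all(const char *const names[], int count)
    {
        int err_code = 0;

        for (int i = 0; i < count; i++) {
            err_code |= check_tracked_server(names[i]);
            if (err_code & ERR_FATAL)
                break;              /* only truly fatal errors stop here */
        }
        return err_code;            /* caller reports everything at once */
    }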
MSIE does not correctly display spaced digits. It requires a margin of
at least one pixel. Also, it does not correctly hide empty cells, so we
work around this by setting the background white. Last, the H1 font was
too large, so we reduce it by one size, which is still OK in other
browsers.
We should respect tcp-smart-connect for checks too. First it reduces
the traffic, and second it ensures that the checks see the same thing
as the production traffic, which is better for debugging.
When the scheduler detected that a task was misplaced in the timer
queue, it used to move it back to its correct place. Unfortunately,
it did not check whether it would still call the task from its new
place.
This resulted in some tasks not getting called on timeout once in
a while, causing a minor drift for repetitive timers. This effect
was only observable with slow health checks and without any activity
because no other task would cause the scheduler to be immediately
called again.
In practice, it does not affect any real-world configuration, but
it's still better to fix it.
As reported by Maik Broemme, if something different from "if" or
"unless" was specified after "tcp-request content accept", the
condition would silently remain void. The parser must obviously
complain since this typically corresponds to a forgotten "if".
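The kind of check the parser needs looks roughly like this (function
name and messages are illustrative, not the actual haproxy code):

    #include <stdio.h>
    #include <string.h>

    /* illustrative: after "tcp-request content accept", only "if" or
     * "unless" (or nothing at all) is acceptable; anything else must be
     * reported instead of silently leaving the condition void. */
    static int parse_accept_condition(const char *kw)
    {
        if (!kw || !*kw)
            return 0;                       /* unconditional accept */
        if (!strcmp(kw, "if") || !strcmp(kw, "unless"))
            return 1;                       /* a condition follows */
        fprintf(stderr, "error: expected 'if' or 'unless' after "
                        "'tcp-request content accept', got '%s'\n", kw);
        return -1;                          /* most likely a forgotten "if" */
    }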
As reported by Jean-Baptiste Quenot and Robbie Aelter, sometimes a
backend server error is converted to a 502 error if the backend stops
before reading the whole request. The reason is that the remote system
sends a TCP RST packet because there are still unread data pending in
the socket buffer. This RST is translated as a socket error on the
local system, and this error is reported by the poller.
However, most of the time it is only a write error, and the system is
still able to read the remaining pending data, as shown in the trace
below:
send(7, "GET /aaa HTTP/1.0\r\nUser-Agent: Mo"..., 1123, MSG_DONTWAIT|MSG_NOSIGNAL) = 1123
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN, {u32=7, u64=7}}) = 0
epoll_wait(3, {{EPOLLIN|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}}, 8, 1000) = 1
gettimeofday({1247593958, 643572}, NULL) = 0
recv(7, "HTTP/1.0 400 Bad request\r\nCache-C"..., 7000, MSG_NOSIGNAL) = 187
setsockopt(6, SOL_TCP, TCP_NODELAY, [0], 4) = 0
setsockopt(6, SOL_TCP, TCP_CORK, [1], 4) = 0
send(6, "HTTP/1.0 400 Bad request\r\nCache-C"..., 187, MSG_DONTWAIT|MSG_NOSIGNAL) = 187
shutdown(6, 1 /* send */) = 0
The recv succeeded while epoll_wait() reported an error.
Note: This case is very hard to reproduce and requires that the backend
server is reached via the loopback in order to minimise latency and
reduce the risk of sent data being ACKed.
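The consequence is that when the poller reports an error, it is worth
trying to read whatever is still pending before acting on that error.
A simplified sketch of the idea (not the actual haproxy code):

    #include <sys/socket.h>
    #include <stdbool.h>

    /* simplified sketch: even when the poller flags an error on fd, data
     * such as the server's error message may still be readable, so try a
     * recv() first and only report the error once nothing is left. */
    static bool try_read_before_error(int fd, void *buf, size_t len, ssize_t *got)
    {
        *got = recv(fd, buf, len, MSG_NOSIGNAL);
        return *got > 0;   /* true: forward this data before handling the error */
    }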
When we close a socket with unread data in the buffer, or when the
nolinger option is set, we regularly lose the last fragment, which
often contains the error message. This typically occurs when sending
too large a request. Only the RST is seen due to the close() (since
not all data were read) and the output message never reaches the
network.
Doing a shutdown() before the close() solves this annoying issue
because the data are really pushed before the system sends the RST.
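In other words, the close sequence becomes something like this
(simplified illustration):

    #include <sys/socket.h>
    #include <unistd.h>

    /* push pending data to the network before the RST that a close()
     * with unread input (or with nolinger) may trigger */
    static void graceful_close(int fd)
    {
        shutdown(fd, SHUT_WR);   /* flush and FIN the write side first */
        close(fd);               /* then release the fd */
    }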
By default, when building from a git tree, haproxy's release date is
set to the last commit's date. But the wrong date was being used: the
date of the initial patch, which can cause jumps back in time when an
old patch gets merged. What we want is the commit date, which reflects
the correct code history.
The new statement "persist rdp-cookie" enables RDP cookie
persistence. The RDP cookie is then extracted from the RDP
protocol, and compared against available servers. If a server
matches the RDP cookie, then it gets the connection.
This patch adds support for hashing RDP cookies in order to
use them as a load-balancing key. The new "rdp-cookie(name)"
load-balancing metric has to be used for this. It is still
mandatory to wait for an RDP cookie in the frontend, otherwise
it will only work randomly.
The RDP protocol is quite simple and documented, which permits
an easy detection and extraction of cookies. It can be useful
to match the MSTS cookie which can contain the username specified
by the client.
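Put together, a typical configuration could look like the sketch below
(the predefined RDP_COOKIE ACL, addresses and server names are only
given as an example; adjust to your setup):

    listen tse-farm
        bind :3389
        mode tcp
        # wait up to 5s for the RDP cookie sent by the client
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        # stick the user to the server designated by the cookie
        persist rdp-cookie
        # otherwise balance on a hash of the MSTS cookie
        balance rdp-cookie(mstshash)
        server srv1 192.168.1.11:3389
        server srv2 192.168.1.12:3389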
Since we can call the HTTP parser from TCP inspection rules, it makes
sense to be able to use the HTTP ACLs with it. That way, we can decide
from a TCP frontend to take a switching decision based on full layer7
decoding. In fact, this makes it possible to perform layer7 content
switching from a layer4 frontend. For instance, we might want to
detect http/https on a frontend, but still switch to backend X or Y
depending on the Host header. Note that it is mandatory to wait for
an HTTP request, otherwise the ACLs will only match randomly.
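For illustration only, such a setup could look roughly like this (the
"HTTP" ACL used to wait for a parsed request, as well as the backend
and host names, are assumptions, not necessarily the exact keywords):

    frontend mixed_in
        bind :8000
        mode tcp
        # give the client some time to send a complete HTTP request
        tcp-request inspect-delay 5s
        tcp-request content accept if HTTP
        acl host_x hdr(host) -i x.example.com
        use_backend bk_x if host_x
        default_backend bk_y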
Since we can now switch from TCP to HTTP, we need to be able to apply
the HTTP request timeout after switching. That means we need to take
it from the backend and not from the frontend. Since the backend points
to the frontend before switching, that changes nothing for the normal
case.
In case of switching from TCP to HTTP, we want the HTTP request timeout
to be properly initialized. For this, we have to jump to the analyser
without breaking out of the loop nor waiting for incoming data. The way
it is done right now is not particularly clean but it works.
A cleaner method might involve pushing function pointers into a circular
list.
This patch allows a TCP frontend to switch to an HTTP backend.
During the switch, missing structures are automatically allocated.
The HTTP parser is enabled so that the backend first waits for a
full HTTP request.
Now that we can perform TCP-based content switching, it makes sense
to be able to detect HTTP traffic and act accordingly. We already
have an HTTP decoder, we just have to call it in order to detect the
HTTP protocol. Note that since the decoder will automatically fill in the
interesting fields of the HTTP transaction, it would make sense to
use this parsing to extend HTTP matching to TCP.
Right now only HTTP proxies may use HTTP headers in ACLs, but
when this evolves, we'll need to be able to allocate the hdr_idx
on demand. The solution consists in allocating it only when it is
certain that at least one ACL requires HTTP parsing, regardless
of the mode the proxy is in. This is what is achieved by this
patch.
This patch propagates the ACL conditions' "requires" bitfield
to the proxies. This makes it possible to know exactly what a
proxy might have to support for any request, which makes it easier to know
whether we have to allocate some space for certain types of
structures or not (eg: the hdr_idx struct).
The concept might be extended to a lot more types of information,
such as detecting whether we need to allocate some space for some
request ACLs which need a result in the response, etc...
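In code, the idea boils down to something like the sketch below, where
the flag, field and function names are only illustrative:

    #include <stdlib.h>

    /* illustrative types/flags, not the actual haproxy definitions */
    #define ACL_USE_L7_ANY 0x01             /* condition needs HTTP parsing */

    struct acl_cond { unsigned int requires; };
    struct proxy    { unsigned int acl_requires; int is_http; void *hdr_idx; };

    /* while parsing, OR each condition's needs into the owner proxy */
    static void track_cond(struct proxy *px, const struct acl_cond *cond)
    {
        px->acl_requires |= cond->requires;
    }

    /* at the end of parsing, allocate L7 structures only when needed */
    static void finish_proxy(struct proxy *px)
    {
        if (px->is_http || (px->acl_requires & ACL_USE_L7_ANY))
            px->hdr_idx = calloc(1, 256);   /* stand-in for a real hdr_idx */
    }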
The HTTP processing has been split into 7 steps, one of which
(content switching) is no longer HTTP-specific. That way, it
becomes possible to use "use_backend" rules in TCP mode. A new
"use_server" directive should follow soon.
Some stream analysers might become generic enough to be called
for several bits. So we cannot have the analyser bit hard coded
into the analyser itself. Let's make the caller inform the callee.
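A simplified sketch of what "the caller informs the callee" means here
(the bit name and types are illustrative, not the actual haproxy code):

    /* the caller passes the bit under which it invoked the analyser, and
     * the analyser clears exactly that bit when done, instead of
     * hard-coding its own bit. */
    #define AN_REQ_WAIT_HTTP 0x02

    struct buffer { unsigned int analysers; };

    static int wait_for_request(struct buffer *req, unsigned int an_bit)
    {
        /* ... analyse the request ... */
        req->analysers &= ~an_bit;          /* done with this step */
        return 1;
    }

    static void run_request_analysers(struct buffer *req)
    {
        if (req->analysers & AN_REQ_WAIT_HTTP)
            wait_for_request(req, AN_REQ_WAIT_HTTP);
    }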
We want to split several steps in HTTP processing so that
we can call individual analysers depending on what processing
we want to perform. The first step consists in splitting the
part that waits for a request from the rest.
redirect rules are documented as being processed last before
use_backend but were mistakenly processed before block rules.
Fortunately very few people use a mix of block and redirect
rules, so this bug has never been reported.
The splice code did not consider compatibility between both ends
of the connection. Now we set different capabilities on each
stream interface, depending on what the protocol can splice to/from.
Right now, only TCP is supported. Thanks to this, we're now able to
detect when splice() is not implemented and automatically disable it
on one end instead of reporting errors to the upper layer.
It will soon be necessary to support permanent analysers (eg: HTTP in
keep-alive mode). We first have to slightly rework the call to the
request analysers so that we don't force ->analysers to be 0 before
forwarding data.
When the nolinger option is used, we must not close too fast because
some data might be left unsent. Instead we must proceed with a normal
shutdown first, then a close. Also, we want to avoid merging FIN with
the last segment if nolinger is set, because if that one gets lost,
there is no chance for it to be retransmitted.
We now support up to 10 distinct configuration files. They are
all loaded in the order defined by -f <file1> -f <file2> ...
This can be useful in order to store global, private, public,
etc... configurations in distinct files.
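For example, one could split the configuration like this and check it
in one go (file names are just an example):

    haproxy -c -f /etc/haproxy/global.cfg \
               -f /etc/haproxy/defaults.cfg \
               -f /etc/haproxy/listeners.cfg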
This is a first step towards support of multiple configuration files.
Now readcfgfile() only reads a file in memory and performs very minimal
parsing. The checks are performed afterwards.
Buffer errors (timeouts and I/O errors) were handled in two places:
just after the analysers, and then once again afterwards.
Now that the timeout detection has moved, it has become easier to
handle those errors.
This has also made it possible for the request and response analysers
to be processed together as a down-up event, and all the up-down I/O
updates to be processed afterwards, which is exactly what we're looking
for. Interestingly this has reduced the number of iterations of
(stream_int, req_resp) from (5,6,5) to (5,5,4).
Several tests have been run without any issue found.
It's useless to check for buffer timeouts every time we call
process_session() because we already control when we set the flag. So
let's check them at the precise moment where the flag is set.
We want to be able to keep information about errors and timeouts
as long as possible in the buffer. Let's not clear these flags
anymore and keep them static. This does not seem to cause any
trouble, though a finer review might be wise.
Sometimes it can be useful to limit the advertised TCP MSS on
incoming connections, for instance when requests come through
a VPN or when the system is running with jumbo frames enabled.
Passing the "mss <value>" arguments to a "bind" line will set
the value. This works under Linux >= 2.6.28, and maybe a few
earlier ones, though due to an old kernel bug most earlier
versions will probably ignore it. It is also possible that some
other OSes will support this.
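For example (the address and value below are only illustrative):

    listen vpn-clients
        # advertise a reduced MSS to clients coming in through the VPN
        bind 10.0.0.1:80 mss 1400
        server app1 192.168.0.10:8080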
After considering various possibilities, we compiled haproxy under cygwin.
Attached is an updated full diff that also has the TARGET=cygwin documented.
The whole thing compiles and installs with this diff only.
In cygwin 1.7 (now in beta), there is apparently support for ipv6. Cygwin
1.5 (later versions, anyway) already includes some support in the form of a
define USE_IPV6. When defined, it declares the sockaddr_in6 struct and
possibly other things. The above definition AF_INET6=23 is taken from
their /usr/include/socket.h file (where it is #if 0'd out).
We are running into a socket limit. It appears that Cygwin (running on
Windows 2003 Server) will only allow us to set ulimit -n (maximum open
files) to 3200, which means we're a little short of 1600 connections.
The limit of 3200 is an internal Cygwin limit. Perhaps they can raise it in
the future. Using the nbproc option, I was able to bring up 10 servers. It
seems to me that they were able to handle over 2000 connections (even though
each had maxconn 1500 set, and the hard Cygwin fd limit).
This new option enables combining of request buffer data with
the initial ACK of an outgoing TCP connection. Doing so saves
one packet per connection, which is quite noticeable on workloads
mostly consisting of small objects. The option is not enabled by
default.
Setting TCP_CORK on a socket before sending the last segment enables
automatic merging of this segment with the FIN from the shutdown()
call. Playing with TCP_CORK is not easy though as we have to track
the status of the TCP_NODELAY flag since both are mutually exclusive.
Doing so saves one more packet per session and offers about 5% more
performance.
There is no reason not to do it, so there is no associated option.
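A rough illustration of the idea on Linux (error handling omitted,
not the actual haproxy code):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* cork the socket before the last segment so the kernel can merge it
     * with the FIN resulting from shutdown(). TCP_CORK and TCP_NODELAY
     * are mutually exclusive, so NODELAY is cleared first. */
    static void send_last_segment_with_fin(int fd, const void *buf, size_t len)
    {
        int zero = 0, one = 1;

        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &zero, sizeof(zero));
        setsockopt(fd, IPPROTO_TCP, TCP_CORK,    &one,  sizeof(one));
        send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);
        shutdown(fd, SHUT_WR);   /* the FIN flushes the corked data with it */
    }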
This option disables TCP quick ack upon accept. It is also
automatically enabled in HTTP mode, unless the option is
explicitly disabled with "no option tcp-smart-accept".
This saves one packet per connection, which can represent a noticeable
amount of bandwidth for servers processing small requests.
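On Linux this essentially means clearing TCP_QUICKACK, roughly as in
the sketch below (whether it is applied to the listening or to the
accepted socket is left aside here):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* with quick ack disabled, the ACK for the client's request can be
     * delayed and merged with the response instead of travelling alone */
    static void disable_quickack(int fd)
    {
        int zero = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &zero, sizeof(zero));
    }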
A new keyword prefix "default" has been introduced in order to
reset some options to their default values. This can be needed
for instance when an option is forced disabled or enabled in a
defaults section and when later sections want to use automatic
settings regardless of what was specified there. Right now it
is only supported by options, just like the "no" prefix.
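For example (a sketch; pick whatever option applies to your setup):

    defaults
        no option tcp-smart-accept      # forced off for all proxies

    frontend ft_http
        bind :80
        mode http
        # back to the automatic (default) behaviour for this frontend
        default option tcp-smart-accept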
Sometimes we would want to implement implicit default options,
but for this we need to be able to disable them, which requires
keeping track of "no option" settings. With this change, an option
explicitly disabled in a defaults section will still be seen as
explicitly disabled. There should be no regression as nothing makes
use of this yet.
Some users are already hitting the 64k source port limit when
connecting to servers. The system usually maintains a list of
unused source ports, regardless of the source IP they're bound
to. So in order to go beyond the 64k concurrent connections, we
have to manage the source ip:port lists ourselves.
The solution consists in assigning a source port range to each
server and using a free port in that range when connecting to that
server, either for a proxied connection or for a health check.
The port must then be put back into the server's range when the
connection is closed.
This mechanism is used only when a port range is specified on
a server. It makes it possible to reach 64k connections per
server, possibly all from the same IP address. Right now it
should be more than enough even for huge deployments.
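In practice this means specifying a port range on the server's source
address, along these lines (addresses and ranges are illustrative):

    backend bk_app
        # each server gets its own source ip:port range, managed by haproxy
        server app1 192.168.1.10:80 source 192.168.1.200:1025-65000
        server app2 192.168.1.11:80 source 192.168.1.201:1025-65000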
When a new process fails to grab some ports, it sends a signal to
the old process in order to release them. Then it tries to bind
again. If it still fails (eg: one of the ports is bound to a
completely different process), it must send the continue signal
to the old process so that this one re-binds to the ports. This
is correctly done, but the newly bound ports are not released
first, which sometimes causes the old process to remain running
with no port bound. The fix simply consists in unbinding all
ports before sending the signal to the old process.