This function works like a traditional putchar() except that it
can return 0 if the output buffer is full.
Now a basic character-based echo function, seen from a stream
interface, would look like this:
while (1) {
	c = buffer_si_peekchar(req);
	if (c < 0)
		break;
	if (!buffer_si_putchar(res, c)) {
		si->flags |= SI_FL_WAIT_ROOM;
		break;
	}
	buffer_skip(req, 1);
	req->flags |= BF_WRITE_PARTIAL;
	res->flags |= BF_READ_PARTIAL;
}
The buffer_si_peekline() function is sort of an fgets() to be used from a
stream interface. It returns a complete line whenever possible, and does
not update the buffer's pointer, so that the reader is free to consume
what it wants to.
buffer_si_peekchar() only returns one character, and also needs a call
to buffer_skip() once the character is definitely consumed.
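As an illustration, a line-based reader could be sketched as follows,
assuming buffer_si_peekline() copies the next complete line into a
caller-provided array and returns its length, or a value <= 0 when no
complete line is available yet; MAXLINE and process_line() are
hypothetical placeholders:

char line[MAXLINE];
int len;

while (1) {
	/* look at the next complete line without consuming anything */
	len = buffer_si_peekline(req, line, sizeof(line));
	if (len <= 0)
		break;
	/* the reader may stop here and retry later: nothing was consumed */
	if (!process_line(line, len))
		break;
	/* only now is the line definitely consumed */
	buffer_skip(req, len);
	req->flags |= BF_WRITE_PARTIAL;
}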
These functions act like their buffer_write*() counterparts,
except that they're specifically designed to be used from a
stream interface handler, as they carefully check size limits
and automatically advance the read pointer depending on the
to_forward attribute.
buffer_feed_chunk() is an inline function calling buffer_feed(), as both
are the same. For this reason, buffer_write_chunk() has also
been turned into an inline which calls buffer_write().
buffer_contig_space(), buffer_contig_data() and buffer_skip()
provide easy methods to extract/insert data from/into a buffer.
buffer_write() and buffer_write_chunk() currently do not check
max_len nor to_forward, so they will quickly become embarrassing
to use or will need an equivalent. The reason is that they are
used to build error messages which currently are not subject to
analysis.
This flag was incorrectly used as meaning "close immediately",
while it needs to say "close ASAP". ASAP here means once any unsent
data pending in the buffer have been sent. This helps clean up some
dirty tricks where the buffer output was checking the BF_SHUTR
flag combined with EMPTY and other such things. Now we have
clearly defined semantics:
- producer sets SHUTR and *may* set SHUTW_NOW if WRITE_ENA is
set, otherwise leave it to the session processor to set it.
- consumer only checks SHUTW_NOW to decide whether or not to
call shutw().
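A rough sketch of these semantics, reusing the BF_* flag naming from
the echo example above (read_side_closed, buffer_pending() and shutw()
stand for whatever the handlers actually use):

/* producer side (read handler) */
if (read_side_closed) {
	buf->flags |= BF_SHUTR;
	if (buf->flags & BF_WRITE_ENA)
		buf->flags |= BF_SHUTW_NOW;
	/* otherwise the session processor will set BF_SHUTW_NOW itself */
}

/* consumer side (write handler): the only flag it needs to check,
 * and only once pending output data have been sent */
if ((buf->flags & BF_SHUTW_NOW) && !buffer_pending(buf))
	shutw(si);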
This also induced very minor changes at some locations which were
not protected against buffer changes while the SHUTW_NOW flag was
set. Now we prevent send_max from changing when the flag is set.
Several tests have been run without any unexpected behaviour detected.
Some more cleanups are needed, as it clearly appears that some tests
could be removed with stricter semantics.
Tarpit was broken by recent splitting of analysers. It would still
let the connection go to the server due to a missing buffer_write_dis().
Also, it was performed too late (after content switching rules).
Please consider the following patches. They are required to
compile haproxy-1.4-dev2 on FreeBSD.
Summary:
1) include <sys/types.h> before <netinet/tcp.h>
2) Use IPPROTO_TCP instead of SOL_TCP
(they are both defined as 6, TCP protocol number)
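For illustration, the portable form of both points looks like this
(enable_nodelay() is just an example function, not haproxy code):

#include <sys/types.h>      /* must come before <netinet/tcp.h> on FreeBSD */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* use the standard IPPROTO_TCP level instead of the Linux-only SOL_TCP */
static int enable_nodelay(int fd)
{
	int one = 1;
	return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}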
send() supports the MSG_MORE flag on Linux, which does the same
as TCP_CORK except that we don't have to remove TCP_NODELAY before
and we don't need any syscall to set/remove it. This can save up
to 4 syscalls around a send() (two for setting it, two for removing
it), and it's much cleaner since it is not persistent. So make use
of it instead.
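As an illustration of the difference (Linux-specific; the two helper
functions are only examples, not haproxy code):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* TCP_CORK variant: two extra setsockopt() calls around the sends, and
 * TCP_NODELAY has to be dealt with separately */
static void send_two_corked(int fd, const void *hdr, size_t hlen,
                            const void *body, size_t blen)
{
	int one = 1, zero = 0;

	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &one, sizeof(one));
	send(fd, hdr, hlen, MSG_DONTWAIT | MSG_NOSIGNAL);
	send(fd, body, blen, MSG_DONTWAIT | MSG_NOSIGNAL);
	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &zero, sizeof(zero));
}

/* MSG_MORE variant: the hint is carried by the send() flag itself, so
 * nothing has to be set or removed around the calls */
static void send_two_more(int fd, const void *hdr, size_t hlen,
                          const void *body, size_t blen)
{
	send(fd, hdr, hlen, MSG_DONTWAIT | MSG_NOSIGNAL | MSG_MORE);
	send(fd, body, blen, MSG_DONTWAIT | MSG_NOSIGNAL);
}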
We used to call stream_sock_data_finish() directly at the end of
a session update, but if we want to support non-socket interfaces,
we need to have this function configurable. Now we access it via
->update().
Since it's now possible to change the buffer size by configuration,
we have to take special measures against writes that are larger than
the buffer size. Before this patch, the writers would indefinitely
block, waiting for some space to free up.
With this patch, the functions simply reject the data with an
appropriate code so that the writers can either detect and process
the error or go on, but never remain blocked.
This has been tested on the stats page which no longer hangs
with buffer sizes smaller than 2.5 kB (256 bytes is even OK for
the CSV version).
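A simplified sketch of the check (field names and return codes below
are only illustrative, not the actual buffer API):

#include <string.h>

struct xbuf {
	char *data;   /* storage */
	int   size;   /* total capacity */
	int   len;    /* bytes currently used */
};

/* Returns the number of bytes copied, -1 if the buffer is temporarily
 * full (the caller may retry later), or -2 if the block can never fit
 * (the caller must give up instead of blocking forever). */
static int xbuf_put(struct xbuf *b, const char *blk, int n)
{
	if (n > b->size)
		return -2;
	if (n > b->size - b->len)
		return -1;
	memcpy(b->data + b->len, blk, n);
	b->len += n;
	return n;
}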
The new tune.bufsize and tune.maxrewrite global directives allow one to
change the buffer size and the maxrewrite size. Right now, setting bufsize
too low will block stats sockets which will not be able to write at all.
Error checking must be added to buffer_write_chunk() so that if it
cannot write its message to an empty buffer, it causes the caller to abort.
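For example (the values below are purely illustrative):

global
	tune.bufsize    8192
	tune.maxrewrite 1024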
The first step towards dynamic buffer size consists in removing
all static definitions of the buffer size. Instead, we store a
buffer's size in itself. Right now they're all preinitialized
to BUFSIZE, but we will change that.
sess_establish() used to resort to protocol-specific guesses
in order to set rep->analysers. This is no longer needed as it
gets set from the frontend and the backend as a copy of what
was defined in the configuration.
s->srv_error was set depending on the frontend's protocol. Now it is
set by the HTTP analyser, so that even when switching from a TCP
frontend to an HTTP backend, we can have HTTP error messages.
Analyser bitmaps are now stored in the frontend and backend, and
combined at configuration time. That way, set_session_backend()
does not need to perform any protocol-specific combinations.
Since the listener is the one indicating what analyser and session
handlers to call, it makes sense that it also sets the task's nice
value. This also helps getting rid of the last trace of the stats
in the proto_uxst file.
The remains of the stats socket code have nothing to do in proto_uxst
anymore and must move to dumpstats. The code is much cleaner and more
structured. It was also an opportunity to rename AN_REQ_UNIX_STATS
as AN_REQ_STATS_SOCK as the stats socket is no longer unix-specific
either.
The last item referring to stats in proto_uxst is the setting of the
task's nice value which should in fact come from the listener.
process_session() is now ready to handle unix stats sockets. This
first step works and old code has not been removed. A cleanup is
required. The stats handler is not unix socket-centric anymore and
should move to dumpstats.c.
When a stream interface has no connect() function, it means it is
immediately connected, so we don't need any connection request.
This will be used with unix sockets.
In order to merge the unix session handling code, we have to maintain
the number of per-listener connections in the session. This was only
performed for unix sockets till now.
Creating a frontend for the global stats socket will help merge
unix sockets management with the other socket management. Since
frontends are huge structs, we only allocate it if required.
The connection establishment was completely handled by backend.c which
normally just handles LB algos. Since it's purely TCP, it must move to
proto_tcp.c. Also, instead of calling it directly, we now call it via
the stream interface, which will later help us unify session handling.
Andrew Azarov reported that haproxy-1.4-dev1 does not build
under FreeBSD 7.2 because SOL_TCP is not defined. So add a
check for its definition before using it. This only impacts
network optimisations anyway.
This Linux-specific option was never really used in production and
has since been superseded by new splicing options brought by recent
Linux kernels.
It caused several special cases in the code because the kernel
would take care of the session without haproxy being able to do
anything on it, which became hard to handle in the new architecture.
Let's simply get rid of it now that there is a replacement available.
The new "node-name" stats setting enables reporting of a node ID on
the stats page. It is possible to return the system's host name as
well as a specific name.
Released version 1.4-dev2 with the following main changes :
- [BUG] task: fix possible crash when some timeouts are not configured
- [BUG] log: option tcplog would log to global if no logger was defined
Romuald du Song reported a strange bug causing "option tcplog" to
unexpectedly use global log parameters if no log server was declared.
Even though it can be useful in some circumstances, it only hides
configuration bugs and can even cause traffic logs to be sent to
the wrong logger, since global settings are just for the process.
This has been fixed and a warning has been added for configurations
where tcplog or httplog are set without any logger. This fix must
be backported to 1.3.20, but not to 1.3.15.X in order not to risk
any regression on old configurations.
Cristian Ditoiu reported a major regression when testing 1.3.19 at
transfer.ro. It would crash within a few minutes while 1.3.15.10
was OK. He offered to help so we could run gdb and debug the crash
live. We finally found that the crash was the result of a regression
introduced by recent fix 814c978fb6
(task: fix possible timer drift after update) which makes it possible
for a tree walk to start from a detached task if this task has got
its timeout disabled due to a missing timeout.
The trivial fix below has been extensively tested and confirmed not
to crash anymore.
Special thanks to Cristian who spontaneously provided a lot of help
and trust to debug this issue which at first glance looked impossible
after reading the code and traces, but took less than an hour to spot
and fix when caught live in gdb! That's really appreciated!
Released version 1.4-dev1 with the following main changes :
- [MINOR] acl: add support for matching of RDP cookies
- [MEDIUM] add support for RDP cookie load-balancing
- [MEDIUM] add support for RDP cookie persistence
- [MINOR] add a new CLF log format
- [MINOR] startup: don't imply -q with -D
- [BUG] ensure that we correctly re-start old process in case of error
- [MEDIUM] add support for binding to source port ranges during connect
- [MINOR] config: track "no option"/"option" changes
- [MINOR] config: support resetting options to default values
- [MEDIUM] implement option tcp-smart-accept at the frontend
- [MEDIUM] stream_sock: implement tcp-cork for use during shutdowns on Linux
- [MEDIUM] implement tcp-smart-connect option at the backend
- [MEDIUM] add support for TCP MSS adjustment for listeners
- [MEDIUM] support setting a server weight to zero
- [MINOR] make DEFAULT_MAXCONN user-configurable at build time
- [MAJOR] session: don't clear buffer status flags anymore
- [MAJOR] session: only check for timeouts when they have just occurred.
- [MAJOR] session: simplify buffer error handling
- [MEDIUM] config: split parser and checker in two functions
- [MEDIUM] config: support loading multiple configuration files
- [MEDIUM] stream_sock: don't close prematurely when nolinger is set
- [MEDIUM] session: rework buffer analysis to permit permanent analysers
- [MEDIUM] splice: set the capability on each stream_interface
- [BUG] http: redirect rules were processed too early
- [CLEANUP] remove unused DEBUG_PARSE_NO_SPEEDUP define
- [MEDIUM] http: split request waiter from request processor
- [MEDIUM] session: tell analysers what bit they were called for
- [MAJOR] http: complete splitting of the remaining stages
- [MINOR] report in the proxies the requirements for ACLs
- [MINOR] http: rely on proxy->acl_requires to allocate hdr_idx
- [MINOR] acl: add HTTP protocol detection (req_proto_http)
- [MINOR] prepare callers of session_set_backend to handle errors
- [BUG] default ACLs did not properly set the ->requires flag
- [MEDIUM] allow a TCP frontend to switch to an HTTP backend
- [MINOR] ensure we can jump from switching rules to http without data
- [MINOR] http: take http request timeout from the backend
- [MINOR] allow TCP inspection rules to make use of HTTP ACLs
- [BUILD] report commit date and not author's date as build date
- [MINOR] acl: don't complain anymore when using L7 acls in TCP
- [BUG] stream_sock: always shutdown(SHUT_WR) before closing
- [BUG] stream_sock: don't stop reading when the poller reports an error
- [BUG] config: tcp-request content only accepts "if" or "unless"
- [BUG] task: fix possible timer drift after update
- [MINOR] apply tcp-smart-connect option for the checks too
- [MINOR] stats: better displaying in MSIE
- [MINOR] config: improve error reporting in global section
- [MINOR] config: improve error reporting in listen sections
- [MINOR] config: the "capture" keyword is not allowed in backends
- [MINOR] config: improve error reporting when checking configuration
- [BUILD] fix a minor build warning on AIX
- [BUILD] use "git cmd" instead of "git-cmd"
- [CLEANUP] report 2009 not 2008 in the copyright banner.
- [MINOR] print usage on the stats sockets upon invalid commands
- [MINOR] acl: detect and report potential mistakes in ACLs
- [BUILD] fix incorrect printf arg count with tcp_splice
- [BUG] fix random pauses on last segment of a series
- [BUILD] add support for build under Cygwin
During a direct data transfer from the server to the client, if the
system did not have enough buffers anymore, haproxy would not enable
write polling again if it could write at least one data chunk. Under
normal conditions, this would remain undetected because the remaining
data would be pushed by next data chunks.
However, when this happens on the last chunk of a session, or the last
in a series in an interactive bidirectional TCP transfer, haproxy would
only start sending again when the read timeout was reached on the side
it stopped writing, causing long pauses on some protocols such as SQL.
This bug was reported by an Exceliance customer who generously offered
to help us by sending large amounts of traces and running various tests
on production systems.
It is quite hard to trigger it but it becomes easier with a ping-pong
TCP service which transfers random data sizes, with a modified version
of send() able to send packets smaller than the average transfer size.
A cleaner fix would imply only updating the write timeout when data
transfers are *attempted*, not just when they succeed, but that requires
more invasive code changes for the same end result. It is a candidate
for a later patch though.
I've discovered a configuration with lots of occurrences of the
following :
acl xxx hdr_beg (host) xxx
The problem is that hdr_beg will match every header against patterns
(host) and xxx due to the space between both, which certainly is not
what the user wanted. Now we detect such ACLs and report a warning
with a suggestion to add "--" between "hdr_beg" and "(host)" if this
is definitely what is wanted.
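For example, the following forms are unambiguous (the ACL names are
just examples); the first matches the beginning of the Host header, the
second deliberately uses "(host)" itself as one of the patterns:

acl host_img  hdr_beg(host) img.
acl odd_match hdr_beg -- (host) xxx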
When issuing commands on the unix socket, there's no way to
know if the result is empty or if the command is wrong. This
patch makes invalid commands return a help message.
Newer GIT versions do not support "git-cmd" anymore, so date and version
can be wrong during development builds. Use "git cmd" now. Also fix
git-tar to use "git archive" instead of "git-tar-tree".
AIX wants string.h in signal.c (and is right to do so):
gcc -Iinclude -Wall -O2 -g -DTPROXY -DENABLE_POLL -DCONFIG_HAPROXY_VERSION=\"1.3.18\" -DCONFIG_HAPROXY_DATE=\"2009/05/10\" -c -o src/signal.o src/signal.c
src/signal.c: In function 'signal_init':
src/signal.c:32: warning: implicit declaration of function 'memset'
src/signal.c:32: warning: incompatible implicit declaration of built-in function 'memset'
Do not exit early at the first error found while checking configuration
validity. This particularly helps spotting multiple wrong tracked server
names at once.
Try not to immediately exit on non-fatal errors while parsing a
listen section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument.
Try not to immediately exit on non-fatal errors while parsing the
global section, so that the user has a chance to get most of the
errors at once, which is quite convenient especially during config
checks with the -c argument. Some other errors such as unresolved
server names also don't make the parser exit too early.
MSIE does not correctly display spaced digits. It requires a margin of
at least one pixel. Also, it does not correctly hide empty cells, so we
work around this by setting the background white. Last, the H1 font was
too large, so we reduce it by one size, which is still OK in other
browsers.
We should respect tcp-smart-connect for checks too. First it reduces
the traffic, and second it ensures that the checks see the same thing
as the production traffic, which is better for debugging.
When the scheduler detected that a task was misplaced in the timer
queue, it used to place it right again. Unfortunately, it did not
check whether it would still call the new task from its new place.
This resulted in some tasks not getting called on timeout once in
a while, causing a minor drift for repetitive timers. This effect
was only observable with slow health checks and without any activity
because no other task would cause the scheduler to be immediately
called again.
In practice, it does not affect any real-world configuration, but
it's still better to fix it.
As reported by Maik Broemme, if something different from "if" or
"unless" was specified after "tcp-request content accept", the
condition would silently remain void. The parser must obviously
complain since this typically corresponds to a forgotten "if".
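For example, only forms like these are accepted (ACL names are just
examples); anything else after "accept" now triggers a configuration
error instead of being silently ignored:

tcp-request content accept if is_allowed
tcp-request content accept unless is_blocked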
As reported by Jean-Baptiste Quenot and Robbie Aelter, sometimes a
backend server error is converted to a 502 error if the backend stops
before reading all the request. The reason is that the remote system
sends a TCP RST packet because there are still unread data pending in
the socket buffer. This RST is translated as a socket error on the
local system, and this error is reported by the poller.
However, most of the time, it's a write error, but the system is
still able to read the remaining pending data, such as in the trace
below:
send(7, "GET /aaa HTTP/1.0\r\nUser-Agent: Mo"..., 1123, MSG_DONTWAIT|MSG_NOSIGNAL) = 1123
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN, {u32=7, u64=7}}) = 0
epoll_wait(3, {{EPOLLIN|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}}, 8, 1000) = 1
gettimeofday({1247593958, 643572}, NULL) = 0
recv(7, "HTTP/1.0 400 Bad request\r\nCache-C"..., 7000, MSG_NOSIGNAL) = 187
setsockopt(6, SOL_TCP, TCP_NODELAY, [0], 4) = 0
setsockopt(6, SOL_TCP, TCP_CORK, [1], 4) = 0
send(6, "HTTP/1.0 400 Bad request\r\nCache-C"..., 187, MSG_DONTWAIT|MSG_NOSIGNAL) = 187
shutdown(6, 1 /* send */) = 0
The recv succeeded while epoll_wait() reported an error.
Note: This case is very hard to reproduce and requires that the backend
server is reached via the loopback in order to minimise latency and
reduce the risk of sent data being ACKed.
When we close a socket with unread data in the buffer, or when the
nolinger option is set, we regularly lose the last fragment, which
often contains the error message. This typically occurs when sending
too large a request. Only the RST is seen due to the close() (since
not all data were read) and the output message never reaches the
network.
Doing a shutdown() before the close() solves this annoying issue
because the data are really pushed before the system sends the RST.
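A minimal sketch of the resulting ordering (the function name is only
an example):

#include <sys/socket.h>
#include <unistd.h>

/* signal end of output first so pending data and the final fragment are
 * pushed with a FIN, then release the descriptor */
static void close_gracefully(int fd)
{
	shutdown(fd, SHUT_WR);
	close(fd);
}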