Timers are unsigned and used as tree positions. Ticks are signed and
used as an absolute date within the current time frame. While the two
are normally equal (except for zero), it is important not to confuse
them in the code, as they are not interchangeable.
We add two inline functions to convert each one into the other.
The comments have also been moved to their proper locations, as it was
not easy to tell what was a tick and what was a timer unit.
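A minimal sketch of the two helpers, with assumed names (the conversion
is essentially a cast; routing it through named functions makes any
confusion between the two units visible in the code):

    /* Sketch only; function names are assumptions. A timer is an
     * unsigned tree position, a tick is a signed absolute date where
     * zero is reserved to mean "never". */
    static inline unsigned int tick_to_timer(int tick)
    {
        return (unsigned int)tick;    /* absolute date -> tree position */
    }

    static inline int timer_to_tick(unsigned int timer)
    {
        return (int)timer;            /* tree position -> absolute date */
    }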
All the task callbacks previously had to requeue the task themselves and
update a global timeout, which was not convenient at all. The API has
now been simplified: the task callbacks only have to update their expire
timer, and return either a pointer to the task, or NULL if the task has
been deleted. The scheduler takes care of requeuing the task at the
proper place in the wait queue.
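A hedged sketch of what a callback looks like under the new contract
(helper names such as tick_add() and session_is_finished() are
assumptions, not necessarily the real ones):

    /* sketch of the new task callback contract */
    struct task *process_example(struct task *t)
    {
        if (session_is_finished(t->context)) {   /* hypothetical check */
            task_delete(t);
            task_free(t);
            return NULL;          /* tell the scheduler the task is gone */
        }
        t->expire = tick_add(now_ms, 1000);      /* only update the date */
        return t;                 /* the scheduler requeues it itself */
    }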
We don't need to remove then re-add tasks in the wait queue every time
we update a timeout; we only need to do that when the new timeout is
earlier than the previous one. We can rely on wake_expired_tasks() to
perform the proper checks and bounce the misplaced tasks in the rare
cases where this happens. The motivation is that timeouts are very
rarely hit, so we save a lot of CPU cycles by moving tasks very rarely.
This also means that tasks with an expiration date set to eternity may
now be found in the queue, and that is not a problem.
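A hedged sketch of the resulting update logic, with illustrative names
throughout:

    /* only requeue when the task must fire earlier than currently queued */
    void task_update_expire(struct task *t, int new_expire)
    {
        if (tick_comes_before(new_expire, t->expire)) {
            /* rare case: must move the task towards the front */
            unlink_from_wait_queue(t);
            t->expire = new_expire;
            queue_task(t);
        } else {
            /* common case: just note the new date; wake_expired_tasks()
             * will bounce the task if it finds it misplaced */
            t->expire = new_expire;
        }
    }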
In many situations, we wake a task on an I/O event, then queue it back
exactly where it was. This is a real waste because we delete/insert
tasks in the wait queue for nothing. The only reason for this was that
there was only one tree node in the task struct.
By adding another tree node, we can have one tree for the timers
(wait queue) and one tree for the priority (run queue). That way,
we can have a task both in the run queue and wait queue at the
same time. The wait queue now really holds timers, which is what
it was designed for.
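A sketch of the resulting task layout (field names are assumptions):

    struct task {
        struct eb32_node wq;     /* position in the wait queue (timer) */
        struct eb32_node rq;     /* position in the run queue (priority) */
        int expire;              /* next expiration date (tick) */
        struct task *(*process)(struct task *t);
        void *context;
    };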
The net gain is at least one delete/insert cycle per session, and up
to 2-3 depending on the workload, since we save one cycle each time
the expiration date is unchanged during a wake-up.
A bug was introduced with the ebtree-based scheduler. It seldom causes
some timeouts to last longer than required, when they hit an expiration
date which equals the last queued date and which also belongs to a
duplicate tree without being at the top of that tree. In this case, the
task is not expired until after the duplicate tree has been flushed.
It is easier to reproduce by setting a very short client timeout (1 s),
sending connections and waiting for them to expire with the 408 status,
then injecting in parallel at about 1k hits/s. The bug causes some
connections to wait longer than 1 s before timing out.
The cause was the use of eb_insert_dup() on the wrong nodes, as this
function is designed to work only on the top of a dup tree. The
solution consists in updating last_timer only when its bit is -1,
and using it only if its bit is still -1 (top of a dup tree).
The fix has not reduced performance, because it only affects the case
where the bug could fire, which is extremely rare.
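A hedged sketch of the corrected insertion path (variable names are
illustrative; the exact condition lives in the task queueing code):

    /* the last_timer shortcut is only valid for the top of a dup tree,
     * which ebtree marks with bit == -1 */
    if (last_timer && last_timer->node.bit == -1 &&
        last_timer->key == n->key) {
        eb_insert_dup(&last_timer->node, &n->node);
    } else {
        eb32_insert(&timers, n);
    }
    if (n->node.bit == -1)
        last_timer = n;           /* only cache dup-tree tops */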
It's easier to take the counter's age into account when consulting it
than to rotate it first. This saves some CPU cycles and avoids the
multiply for outdated counters, saving further cycles when multiple
operations need to read the same counter.
The freq_ctr code has also shrunk by one third as a result of these
optimizations.
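A hedged, self-contained sketch of the idea (field and function names
are assumptions):

    struct freq_ctr {
        unsigned int curr_sec;    /* start of the measured second */
        unsigned int curr_ctr;    /* events in the current second */
        unsigned int prev_ctr;    /* events in the previous second */
    };

    /* derive the rate from the counter's age at read time instead of
     * rotating the counter first */
    unsigned int read_freq_ctr(const struct freq_ctr *ctr,
                               unsigned int now_sec, unsigned int ms_in_sec)
    {
        unsigned int curr = ctr->curr_ctr;
        unsigned int past = ctr->prev_ctr;
        unsigned int age  = now_sec - ctr->curr_sec;

        if (age > 1)
            return 0;             /* outdated: no multiply needed at all */
        if (age == 1) {           /* rotate on the fly, without writing back */
            past = curr;
            curr = 0;
        }
        /* weigh the previous second by the time left in the current one */
        return curr + past * (1000 - ms_in_sec) / 1000;
    }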
term_trace was very useful while reworking the lower layers, but it has
since been removed from almost every place it was referenced. Even the
few remaining references were not accurate, so it is better to remove
them completely and re-add them from scratch later if needed.
In pure TCP mode, there is no response analyser to switch the server-side
stream interface from INI to CLO when the output has been closed after an
abort. This caused sessions to remain active indefinitely when they were
aborted by the client during TCP content analysis.
The proper action is to switch the stream interface from INI to CLO
when write-enable and shutdown-write are both set.
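A hedged sketch of the added check (state and flag names follow the
codebase's conventions but are quoted from memory):

    /* server-side stream interface never started, but the client has
     * already closed the output: close it immediately */
    if (s->si[1].state == SI_ST_INI &&
        (s->req->flags & (BF_WRITE_ENA | BF_SHUTW)) ==
                         (BF_WRITE_ENA | BF_SHUTW))
        s->si[1].state = SI_ST_CLO;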
The rate limit was applied to the smoothed frequency value, which
special-cases frequencies below 2 events per period. This caused
irregular limiting when the limit was set to 1 session per second.
The proper way to handle this is to compute the number of remaining
events that can occur without reaching the limit, which is what has
been added. It also has the benefit that the frequency calculation
is now done once when entering event_accept(), before the accept()
loop, and no longer once per accept() iteration, thus saving a few
CPU cycles under very high loads.
With this fix, rate limits of 1/s are perfectly respected.
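A hedged sketch of the computation, reusing the read_freq_ctr() sketch
above (names are illustrative):

    /* number of sessions which may still be accepted this second
     * without exceeding <limit> sessions/s; computed once before the
     * accept() loop, which then stops when this budget is consumed */
    unsigned int sessions_remaining(const struct freq_ctr *ctr,
                                    unsigned int limit,
                                    unsigned int now_sec,
                                    unsigned int ms_in_sec)
    {
        unsigned int events = read_freq_ctr(ctr, now_sec, ms_in_sec);
        return events >= limit ? 0 : limit - events;
    }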
The new "rate-limit sessions" statement sets a limit on the number of
new connections per second on the frontend. As it is extremely accurate
(about 0.1%), it is efficient at limiting resource abuse or DoS.
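For instance (illustrative values):

    frontend www
        bind :80
        rate-limit sessions 100   # at most 100 new connections per second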
These new ACLs match the frontend session rate and the backend session
rate. Examples are provided in the documentation to explain how to use
them to limit abuse of service.
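An illustrative example, assuming the ACL names are fe_sess_rate and
be_sess_rate:

    backend dynamic
        mode http
        # deflect clients once this backend exceeds 100 sessions/s
        acl being_scanned be_sess_rate gt 100
        redirect location /denied.html if being_scanned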
With this change, all frontends, backends, and servers maintain a session
counter and a timer to compute a session rate over the last second. This
value is very useful because it reacts instantly and can be checked
against thresholds. It is also reported in the stats in a new "rate"
column.
Several algorithms will need to know the millisecond value within
the current second. Instead of doing a divide every time it is needed,
it is better to compute it when it changes, i.e. when now and now_ms
are recomputed.
curr_sec_ms_scaled is the same value multiplied by 2^32/1000, which is
useful to compute ratios based on the position within the last second.
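A hedged sketch of the recomputation performed alongside now and now_ms
(2^32/1000 is about 4294967):

    /* done once per clock update, inside the time-keeping code */
    curr_sec_ms        = now.tv_usec / 1000;      /* ms within the second */
    curr_sec_ms_scaled = curr_sec_ms * 4294967U;  /* ms * 2^32 / 1000 */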
The new "show errors" command sent on a unix socket will dump
all captured request and response errors for all proxies. It is
also possible to bound the log to frontends and backends whose
ID is passed as an optional parameter.
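An illustrative invocation (the socket path is an assumption from a
typical setup):

    echo "show errors" | socat stdio unix-connect:/var/run/haproxy.stat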
The output provides information about the frontend, backend, server,
session ID, source address, error type, and error position, along
with a complete dump of the request or response which caused the
error.
If a new error overwrites the one currently being reported, the dump
is aborted with a warning message, and processing moves on to the
next error.
Each proxy instance, either frontend or backend, now has some room
dedicated to storing a complete dated request or response in case of
a parsing error. This makes it possible to consult errors in order
to find their exact cause, which is particularly important for
troubleshooting faulty applications.
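A hedged sketch of the per-proxy capture slot (field names and sizes
are assumptions):

    struct error_snapshot {
        struct timeval when;          /* date the error was captured */
        unsigned int sid;             /* ID of the faulty session */
        struct sockaddr_in src;       /* address of the client */
        unsigned int len;             /* length of the captured data */
        char buf[8192];               /* copy of the faulty buffer */
    };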
If an invalid character is encountered while parsing an HTTP message, we
want buf->lr to be updated to reflect it.
Along with this change, a few useless __label__ declarations have been
removed, because they caused gcc to reserve stack space without storing
anything there.
On overloaded systems, it sometimes happens that hundreds or thousands
of incoming connections are queued in the system's backlog and all get
dequeued at once. The problem is that when haproxy processes them
without applying any limit, this can take some time and the internal
date does not progress, resulting in wrong timer measurements for all
sessions.
The most common effect is that all of these sessions report a large
request time (often several hundred milliseconds) which is in fact
caused by the time spent accepting the other connections. This may also
happen on shared systems when the machine swaps.
For this reason, we finally apply a reasonable limit even in mono-process
mode. Accepting 100 connections at once is fast enough for extreme cases
and will not cause much trouble when the system is saturated.
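A hedged sketch of the bounded accept loop:

    #include <sys/socket.h>

    void accept_burst(int fd)
    {
        struct sockaddr_storage addr;
        socklen_t len;
        int max_accept = 100;     /* fast enough even for extreme cases */

        while (max_accept--) {
            len = sizeof(addr);
            int cfd = accept(fd, (struct sockaddr *)&addr, &len);
            if (cfd < 0)
                break;            /* backlog drained (or error) */
            /* ... create the session for cfd ... */
        }
        /* anything left in the backlog is handled on the next wakeup,
         * after the internal date has progressed */
    }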
Problem reported by John Lauro. When "source ... usesrc ..." is set
in the defaults section, it was no longer possible to remove the
"usesrc" part when declaring a more precise "source" in a backend;
the only workaround was to declare it per server.
We now clear the optional settings when a new "source" is declared.
The same problem existed with the "interface" declaration.
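An illustrative configuration (addresses are assumptions):

    defaults
        source 0.0.0.0 usesrc clientip

    backend app
        # redefining "source" now clears the inherited "usesrc" and
        # "interface" settings instead of silently keeping them
        source 10.0.0.1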
Unix socket processing was still quite buggy: it did not properly
handle interrupted output due to a full response buffer. The fix
mainly consists in not trying to prematurely enable writing on the
response buffer, just as the standard session code does. This also
brings the unix socket code closer to the standard session handling.
Commit 8a5c626e73 introduced the sessions dump on the unix socket.
That implementation is buggy because a back-reference may end up
linked to the session list's head after the last session is removed.
Also, for the LIST_ISEMPTY test to succeed, we have to call LIST_INIT
after LIST_DEL.
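A hedged sketch of the corrected detach sequence:

    /* detach the back-reference, then reinitialize it so that a later
     * LIST_ISEMPTY() test on it behaves as expected */
    LIST_DEL(&bref->users);
    LIST_INIT(&bref->users);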
Some parts about logging from the previous documentation have been
merged in and updated; most of them have been reworked and completed.
The examples are now accurate and reflect recent versions.
As per the subject, when I try to compile haproxy with -DDEBUG_FULL,
it stops at the stream_sock.c file with:
gcc -Iinclude -Wall -O2 -g -DDEBUG_FULL -DTPROXY -DENABLE_POLL
-DENABLE_EPOLL -DENABLE_SEPOLL -DNETFILTER -DUSE_GETSOCKNAME
-DCONFIG_HAPROXY_VERSION=\"1.3.15\"
-DCONFIG_HAPROXY_DATE=\"2008/04/19\" -c -o src/stream_sock.o
src/stream_sock.c
src/stream_sock.c: In function 'stream_sock_chk_rcv':
src/stream_sock.c:905: error: 'fd' undeclared (first use in this function)
src/stream_sock.c:905: error: (Each undeclared identifier is reported only once
src/stream_sock.c:905: error: for each function it appears in.)
src/stream_sock.c:905: error: 'ob' undeclared (first use in this function)
src/stream_sock.c: In function 'stream_sock_chk_snd':
src/stream_sock.c:940: error: 'fd' undeclared (first use in this function)
src/stream_sock.c:940: error: 'ib' undeclared (first use in this function)
make: *** [src/stream_sock.o] Error 1
With this patch, everything builds fine.
Using the wrong operator (&& instead of &) causes the DOWN->UP
transition to take longer than it should and to produce a lot of
redundant logs. With typical "track" usage (1-6 tracking servers) it
shouldn't make a big difference, but for heavily tracked servers
this bug leads to a hang with 100% CPU usage and extremely large
log spam.
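A self-contained illustration of the operator mixup (the flag value is
made up for the example):

    #include <stdio.h>

    #define SRV_RUNNING 0x0002    /* flag value assumed for illustration */

    int main(void)
    {
        int state = 0x0001;       /* some state without SRV_RUNNING set */

        printf("logical: %d\n", state && SRV_RUNNING);  /* 1: the bug  */
        printf("bitwise: %d\n", state &  SRV_RUNNING);  /* 0: correct  */
        return 0;
    }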
The "bind-process" keyword lets the admin select which instances may
run on which process (in multi-process mode). It makes it easier to
more evenly distribute the load across multiple processes by avoiding
having too many listen to the same IP:ports.
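An illustrative example with four processes:

    global
        nbproc 4

    frontend www1
        bind :80
        bind-process 1 2          # instances 1 and 2 only

    frontend www2
        bind :8080
        bind-process 3 4          # instances 3 and 4 only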
Specifying "interface <name>" after the "source" statement allows
one to bind to a specific interface for proxy<->server traffic.
This makes it possible to use multiple links to reach multiple
servers, and to force traffic to pass via an interface different
from the one the system would have chosen based on the routing
table.
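An illustrative example (the interface name is an assumption):

    backend app
        # force proxy->server traffic through a dedicated link
        source 0.0.0.0 interface eth1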
By appending "interface <name>" to a "bind" line, it is now possible
to specifically bind to a physical interface name. Note that this
currently only works on Linux and requires root privileges.
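For example (the interface name is an assumption):

    frontend www
        # Linux only, requires root privileges
        bind :80 interface eth0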
This will provide high-performance data forwarding between sockets,
but it is broken on many kernels and will sometimes forward corrupted
data unless certain kernel patches are applied. Consider it
experimental for now.
Setting "nosplice" in the global section will disable the use of TCP
splicing (both tcpsplice and linux 2.6 splice). The same will be
achieved using the "-dS" parameter on the command line.
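For example:

    global
        nosplice              # disable tcpsplice and linux 2.6 splice()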
The global tuning options currently only concern the polling mechanisms,
and they are not stored in the global struct itself. This makes it
impractical to add other options, so let's move them into the global
struct and remove types/polling.h, which was not used for anything else.
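A hedged sketch of the resulting layout (field names are assumptions):

    struct global {
        /* ... existing fields ... */
        struct {
            int maxpollevents;    /* polling tuning, formerly in types/polling.h */
        } tune;
    };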