Some header values might be delimited with spaces, so it's not enough to
compare "close" or "keep-alive" with strncasecmp(). Use word_match() for
that.
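A minimal standalone sketch of the idea (this is not HAProxy's actual
word_match(), whose signature differs): treat the header value as a list
of comma/space-delimited tokens and compare each token case-insensitively.

    #include <string.h>
    #include <strings.h>   /* strncasecmp() */

    /* Illustrative only: returns non-zero if <word> appears as a full
     * comma/space-delimited token in <hdr>, compared case-insensitively.
     */
    static int header_word_match(const char *hdr, const char *word)
    {
        size_t wlen = strlen(word);

        while (*hdr) {
            while (*hdr == ' ' || *hdr == ',')
                hdr++;
            if (!strncasecmp(hdr, word, wlen) &&
                (hdr[wlen] == '\0' || hdr[wlen] == ' ' || hdr[wlen] == ','))
                return 1;
            while (*hdr && *hdr != ' ' && *hdr != ',')
                hdr++;
        }
        return 0;
    }

With this, a value like "keep-alive, close" matches "close" even though a
plain prefix comparison on the whole header value would not.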
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
Implement decreasing health based on observing communication between
HAProxy and servers.
Changes in version 2:
- documentation
- close race between a started check and health analysis event
- don't force fastinter if it is not set
- better names for options
- layer4 support
Changes in version 3:
- add stats
- port to the current 1.4 tree
It's a pain to enable regparm because ebtree is built in its corner
and does not depend on the rest of the config. This causes no problem
except that if the regparm settings are not exactly similar, then we
can get inconsistent function interfaces and crashes.
One solution realized in this patch consists in externalizing all
compiler settings and changing CONFIG_XXX_REGPARM into CONFIG_REGPARM
so that we ensure that any sub-component uses the same setting. Since
ebtree used a value here and not a boolean, haproxy's config has been
set to use a number too. Both haproxy's core and ebtree currently use
the same copy of the compiler.h file. That way we don't have any issue
anymore when one setting changes somewhere.
All files referencing the previous ebtree code were changed to point
to the new one in the ebtree directory. A makefile variable (EBTREE_DIR)
is also available to use files from another directory.
The ability to build the libebtree library temporarily remains disabled
because it can have an impact on some existing toolchains and does not
appear worth it in the medium term if we add support for multi-criteria
stickiness for instance.
Capture and display more data from health checks, such as
strerror(errno) for failed L4 checks or the first line of
the response for successful or failed L7 checks.
Non-ASCII and control characters are masked with
chunk_htmlencode() (HTML stats) or chunk_asciiencode() (logs).
This patch implements "description" (proxy and global) and "node" (global)
options, removes "node-name" and adds "show-node" & "show-desc" options
for "stats". It also changes the way the header lines (with proxy name) and
the statistics are displayed, so stats no longer look so clumsy with very
long names.
Instead of "node-name" it is possible to use show-node/show-desc with
an optional parameter that overrides a default node/description.
backend cust-0045
# report specific values for this customer
stats show-node Europe
stats show-desc Master node for Europe, Asia, Africa
In TCP, we don't want to forward chunks of data, we want to forward
indefinitely. This patch introduces a special value for the amount
of data to be forwarded. When buffer_forward() is called with
BUF_INFINITE_FORWARD, it configures the buffer to never stop
forwarding until the end.
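As a rough illustration of the special value (the real struct buffer and
buffer_forward() in HAProxy differ; the names below are placeholders):

    /* Illustrative only: the actual buffer code is more involved. */
    #define BUF_INFINITE_FORWARD ((unsigned int)-1)   /* assumed special value */

    struct fwd_buffer {
        unsigned int to_forward;   /* bytes to forward without waking the task */
    };

    static void fwd_buffer_forward(struct fwd_buffer *buf, unsigned int bytes)
    {
        if (bytes == BUF_INFINITE_FORWARD ||
            buf->to_forward == BUF_INFINITE_FORWARD) {
            /* forward everything until the end, never decrement the counter */
            buf->to_forward = BUF_INFINITE_FORWARD;
            return;
        }
        buf->to_forward += bytes;
    }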
send() supports the MSG_MORE flag on Linux, which does the same
as TCP_CORK except that we don't have to remove TCP_NODELAY before
and we don't need any syscall to set/remove it. This can save up
to 4 syscalls around a send() (two for setting it, two for removing
it), and it's much cleaner since it is not persistent. So make use
of it instead.
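For reference, a hedged example of the usage pattern on Linux (MSG_MORE is
Linux-specific; the helper name is made up):

    #include <sys/types.h>
    #include <sys/socket.h>

    /* When the caller knows more data follows immediately, pass MSG_MORE so
     * the kernel merges the segments, with the same effect as TCP_CORK but
     * without any setsockopt() call around the send().
     */
    static ssize_t send_chunk(int fd, const void *data, size_t len, int more)
    {
        return send(fd, data, len, more ? MSG_MORE : 0);
    }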
The remains of the stats socket code have nothing to do in proto_uxst
anymore and must move to dumpstats. The code is much cleaner and more
structured. It was also an opportunity to rename AN_REQ_UNIX_STATS
as AN_REQ_STATS_SOCK as the stats socket is no longer unix-specific
either.
The last item referring to stats in proto_uxst is the setting of the
task's nice value which should in fact come from the listener.
The new "node-name" stats setting enables reporting of a node ID on
the stats page. It is possible to return the system's host name as
well as a specific name.
We now support up to 10 distinct configuration files. They are
all loaded in the order defined by -f <file1> -f <file2> ...
This can be useful in order to store global, private, public,
etc... configurations in distinct files.
This is a first step towards support of multiple configuration files.
Now readcfgfile() only reads a file in memory and performs very minimal
parsing. The checks are performed afterwards.
Some users are already hitting the 64k source port limit when
connecting to servers. The system usually maintains a list of
unused source ports, regardless of the source IP they're bound
to. So in order to go beyond the 64k concurrent connections, we
have to manage the source ip:port lists ourselves.
The solution consists in assigning a source port range to each
server and use a free port in that range when connecting to that
server, either for a proxied connection or for a health check.
The port must then be put back into the server's range when the
connection is closed.
This mechanism is used only when a port range is specified on
a server. It makes it possible to reach 64k connections per
server, possibly all from the same IP address. Right now it
should be more than enough even for huge deployments.
These functions will be used to deliver asynchronous signals in order
to make the signal handling functions more robust. The goal is to keep
the same interface to signal handlers.
I have attached a patch which will add a new header 'X-Original-To' on
every HTTP request. If you have HAProxy running in transparent mode
with a big number of SQUID servers behind it, it is very nice to have
the original destination IP as a common header to make decisions based
on it.
The whole thing is configurable with a new option 'originalto'. I have
updated the source code as well as the documentation. The 'haproxy-en.txt'
and 'haproxy-fr.txt' files are untouched, due to my lack of French
language knowledge. ;)
Also, the patch adds this header for IPv4 only. I don't have any IPv6 test
environment running here and don't know if getsockopt() with SO_ORIGINAL_DST
will work on IPv6. If someone knows it and wants to test it I can modify
the diff. Feel free to ask me questions or things which should be changed. :)
--Maik
This function sets CSS letter spacing after each 3rd digit. The page must
create a class "rls" (right letter spacing) with style "letter-spacing: 0.3em"
in order to use it.
If we get a very large amount of data at once, it's almost certain that
it's not worth trying to read again, because we already got everything
we could get.
Doing this has made all -EAGAIN disappear from splice reads. The
threshold has been put in the global tunable structures so that if
we one day want to make it accessible from user config, it will be
easy to do so.
Since we're now able to search from a precise expiration date in
the timer tree using ebtree 4.1, we don't need to maintain 4 trees
anymore. Not only does this simplify the code a lot, but it also
ensures that we can always look 24 days back and ahead, which
doubles the ability of the previous scheduler. Indeed, while based
on absolute values, the timer tree is now relative to <now> as we
can always search from <now>-31 bits.
The run queue uses the exact same principle now, and is now simpler
and a bit faster to process. With these changes alone, an overall
0.5% performance gain was observed.
Tests were performed on the few wrapping cases and everything works
as expected.
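A minimal sketch of the wrapping comparison this relies on (illustrative
names; keys are 32-bit millisecond timestamps): a key is considered in the
future if the signed difference to <now> is positive, which gives roughly
2^31 ms, about 24.8 days, of range on each side of <now>.

    static inline int timer_is_after_now(unsigned int key, unsigned int now_ms)
    {
        /* wraps correctly as long as |key - now_ms| stays below 2^31 ms */
        return (int)(key - now_ms) > 0;
    }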
There are some configurations in which redirect rules are declared
after use_backend rules. We can also find "block" rules after any
of these ones. The processing sequence is :
- block
- redirect
- use_backend
So from now on we try to detect wrong ordering and warn the user about
possibly undesired behaviour.
Most of the time, task_queue() will immediately return. By extracting
the preliminary checks and putting them in an inline function, we can
significantly reduce the number of calls to the function itself, and
most of the tests can be optimized away due to the caller's context.
Another minor improvement in process_runnable_tasks() consisted in
taking advantage of the processor's branch prediction unit by making
a special case of the process_session() callback which is by far the
most common one.
All this improved performance by about 1%, mainly during the call
from process_runnable_tasks().
Timers are unsigned and used as tree positions. Ticks are signed and
used as absolute date within current time frame. While the two are
normally equal (except zero), it's important not to confuse them in
the code as they are not interchangeable.
We add two inline functions to turn each one into the other.
The comments have also been moved to the proper location, as it was
not easy to understand what was a tick and what was a timer unit.
All the tasks callbacks had to requeue the task themselves, and update
a global timeout. This was not convenient at all. Now the API has been
simplified. The tasks callbacks only have to update their expire timer,
and return either a pointer to the task or NULL if the task has been
deleted. The scheduler will take care of requeuing the task at the
proper place in the wait queue.
With this change, all frontends, backends, and servers maintain a session
counter and a timer to compute a session rate over the last second. This
value will be very useful because it varies instantly and can be used to
check thresholds. This value is also reported in the stats in a new "rate"
column.
Several algorithms will need to know the millisecond value within
the current second. Instead of doing a divide every time it is needed,
it's better to compute it when it changes, which is when now and now_ms
are recomputed.
curr_sec_ms_scaled is the same value multiplied by 2^32/1000, which will be
useful to compute ratios based on the position within the last second.
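A minimal sketch of the idea, using the variable names from this message
(the constant 4294967 approximates 2^32/1000; the real update code differs):

    unsigned int curr_sec_ms;         /* millisecond within the current second */
    unsigned int curr_sec_ms_scaled;  /* curr_sec_ms * 2^32 / 1000             */

    static void update_curr_sec_ms(unsigned int now_ms)
    {
        curr_sec_ms = now_ms % 1000;              /* done once per time update */
        curr_sec_ms_scaled = curr_sec_ms * 4294967U;
    }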
If an analyser sets buf->to_forward to a given value, that amount of
data will be forwarded between the two stream interfaces attached
to a buffer without waking the task up. The same applies once all
analysers have been released. This saves a large amount of calls
to process_session() and a number of task_dequeue/queue.
This type will be used to maintain back-references to items which
are subject to move between accesses. Typical usage includes session
removal during a listing.
GCC 3 and above do not inline large functions, which is a problem
with ebtree where most core functions are inlined.
This simple patch has both reduced code size and increased speed.
It should be back-ported to ebtree.
Gcc < 3 does not consider regparm declarations for function pointers.
This causes big trouble at least with pollers (and with any function
pointer after all). Disable CONFIG_HAP_USE_REGPARM for gcc < 3.
A new member has been added to the struct session. It keeps a trace
of what block of code performs a close or a shutdown on a socket, and
in what sequence. This is extremely convenient for post-mortem analysis
where flag combinations and states seem impossible. A new ABORT_NOW()
macro has also been added to make the code immediately segfault where
called.
In order to make pool usage more convenient, let pool_free2()
support NULL pointers by doing nothing, just like the standard
free(3) call does.
The various call places have been updated to remove the now
useless checks.
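A sketch of the resulting convention (illustrative signature, not the real
pool code):

    struct pool_head;   /* opaque here */

    /* Like free(3): freeing a NULL pointer is a no-op, so callers no longer
     * need to test the pointer themselves.
     */
    void pool_free2_sketch(struct pool_head *pool, void *ptr)
    {
        if (!ptr)
            return;
        /* ... return <ptr> to <pool> ... */
        (void)pool;
    }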
Because I needed it in my situation - here's a quick patch to
allow changing of the "x-forwarded-for" header by using a suboption to
"option forwardfor".
Suboption "header XYZ" will set the header from "x-forwarded-for" to "XYZ".
Default is still "x-forwarded-for" if the header value isn't defined.
Also the suboption 'except a.b.c.d/z' still works on the same line.
So it's now: option forwardfor [except a.b.c.d[/z]] [header XYZ]
The INTBITS macro was found to be already defined on some platforms,
and to equal 32 (while INTBITS was 5 here). Due to pure luck, there
was no declaration conflict, but it's nonetheless a problem to fix.
Looking at the code showed that this macro was now only used for left
shifts and nothing else. So the replacement is obvious. The
new macro, BITS_PER_INT, is more obviously correct.
Any module which needs configuration keywords may now dynamically
register a keyword in a given section, and associate it with a
configuration parsing function using cfg_register_keywords() from
a constructor function. This makes the configuration parser more
modular because it is not required anymore to touch cfg_parse.c.
Example :
static int parse_global_blah(char **args, int section_type, struct proxy *curpx,
                             struct proxy *defpx, char *err, int errlen)
{
    printf("parsing blah in global section\n");
    return 0;
}

static int parse_listen_blah(char **args, int section_type, struct proxy *curpx,
                             struct proxy *defpx, char *err, int errlen)
{
    printf("parsing blah in listen section\n");
    if (!*args[1]) {
        snprintf(err, errlen, "missing arg for listen_blah!!!");
        return -1;
    }
    return 0;
}

static struct cfg_kw_list cfg_kws = {{ },{
    { CFG_GLOBAL, "blah", parse_global_blah },
    { CFG_LISTEN, "blah", parse_listen_blah },
    { 0, NULL, NULL },
}};

__attribute__((constructor))
static void __module_init(void)
{
    cfg_register_keywords(&cfg_kws);
}
This is the first attempt at moving all internal parts from
using struct timeval to integer ticks. Those provide simpler
and faster code due to simplified operations, and this change
also saved about 64 bytes per session.
A new header file has been added : include/common/ticks.h.
It is possible that some functions should finally not be inlined
because they're used quite a lot (eg: tick_first, tick_add_ifset
and tick_is_expired). More measurements are required in order to
decide whether this is interesting or not.
Some function and variable names are still subject to change for
a better overall logic.
This new time value will be used to compute timeouts and wait queue
positions. The operation is made once for all when time is retrieved.
A future improvement might consist in having it in ticks of 1/1024
second and converting all timeouts into ticks.
The first implementation of the monotonic clock did not verify
forward jumps. The consequence is that a fast changing time may
expire a lot of tasks. While it does seem minor, in fact it is
problematic because most machines which boot with a wrong date
are in the past and suddenly see their time jump by several
years in the future.
The solution is to check if we spent more apparent time in
a poller than allowed (with a margin applied). The margin
is currently set to 1000 ms. It should be large enough for
any poll() to complete.
Tests with randomly jumping clock show that the result is quite
accurate (error less than 1 second at every change of more than
one second).
If the system date is set backwards while haproxy is running,
some scheduled events are delayed by the amount of time the
clock went backwards. This is particularly problematic on
systems where the date is set at boot, because it can then
happen that health checks do not get sent for a few hours.
Before switching to use clock_gettime() on systems which
provide it, we can at least ensure that the clock is not
going backwards and maintain two clocks : the "date" which
represents what the user wants to see (mostly for logs),
and an internal date stored in "now", used for scheduled
events.
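A minimal sketch of the combined behaviour described in the two entries
above (illustrative names; the real code maintains an offset instead of
rewriting <now> directly, and MAX_DELAY_MS stands for the assumed margin):

    #include <sys/time.h>

    #define MAX_DELAY_MS 1000   /* assumed margin around one poll() round */

    struct timeval date;        /* wall-clock date, for logs and the user */
    struct timeval now;         /* internal date, for scheduled events    */

    /* <now> is assumed initialized at startup; it never goes backwards and
     * may not jump further ahead than the poller could plausibly have waited.
     */
    static void update_date_sketch(int max_wait_ms)
    {
        struct timeval prev = now;
        long long delta_ms;

        gettimeofday(&date, NULL);
        now = date;

        delta_ms = (long long)(now.tv_sec - prev.tv_sec) * 1000
                 + (now.tv_usec - prev.tv_usec) / 1000;

        if (delta_ms < 0 || delta_ms > max_wait_ms + MAX_DELAY_MS) {
            /* clock was set backwards or jumped too far forward: pretend
             * that exactly max_wait_ms elapsed since the previous call.
             */
            now = prev;
            now.tv_usec += max_wait_ms * 1000;
            now.tv_sec  += now.tv_usec / 1000000;
            now.tv_usec %= 1000000;
        }
    }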
The new TRACE macro is used almost like fprintf, except that a session
has to be passed instead of the file descriptor. It displays information
about where it is called from, the session pointer and id, etc.
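An illustrative approximation of such a macro (not the actual TRACE
definition; ##__VA_ARGS__ is a GCC extension):

    #include <stdio.h>

    #define TRACE_SKETCH(sess, fmt, ...)                                      \
        fprintf(stderr, "[%s:%d %s] sess=%p " fmt "\n",                       \
                __FILE__, __LINE__, __func__, (void *)(sess), ##__VA_ARGS__)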
- free oldpids
- call free(exp->preg), not only regfree(exp->preg): req_exp, rsp_exp
- build a list of unique uri_auths and eventually free it
- prune_acl_cond/free for switching_rules
- add a callback pointer to free ptr from acl_pattern (used for regexs) and execute it
==1180== malloc/free: in use at exit: 0 bytes in 0 blocks.
==1180== malloc/free: 5,599 allocs, 5,599 frees, 4,220,556 bytes allocated.
==1180== All heap blocks were freed -- no leaks are possible.
This patch makes it possible to specify a domain used when inserting a cookie
providing session stickiness. Useful for example with wildcard domains.
The patch adds one new variable to the struct proxy: cookiedomain.
When set, the domain is appended to the Set-Cookie header.
Domain name is validated using the new invalid_domainchar() function.
It is basically invalid_char() limited to [A-Za-z0-9_.-]. Yes, the test
is too trivial and does not cover all wrong situations, but the main
purpose is to detect most common mistakes, not intentional abuses.
The underscore ("_") character is not RFC-valid but as it is
often (mis)used so I decided to allow it.
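A sketch matching that description (the real invalid_domainchar() may differ
in details): accept only [A-Za-z0-9_.-] and return a pointer to the first
offending character, or NULL if the name is clean.

    #include <string.h>

    static const char *invalid_domainchar_sketch(const char *name)
    {
        static const char allowed[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_.-";

        for (; *name; name++)
            if (!strchr(allowed, *name))
                return name;
        return NULL;
    }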
For Fedora 9 gcc 4.3 will be shipping as a feature, and right now haproxy does
not compile with gcc 4.3.
It appears that there is a reordering of headers or something along those lines.
This is the patch that gets haproxy to compile with gcc 4.3. I'm not sure if
this is the correct approach you would want to use, so please correct me.
If this works for you, I'll go ahead and put this patch in the src rpm until a
release of haproxy which compiles with gcc 4.3 is released.
Matt Farnsworth reported a memory leak in str2sun() in case a too large
socket path is passed. The bug is very minor because it only happens
once during config parsing, but has to be fixed nevertheless. The patch
Matt provided could even be improved by completely removing the useless
strdup() in this function.
Currently there is a ~16KB limit on the data size passed via the unix socket.
It is caused by a trivial bug that is going to be fixed soon, however
in most cases there is no need to dump the full stats.
This patch makes it possible to select a scope of dumped data by extending the
current "show stat" to "show stat [<iid> <type> <sid>]":
- iid is a proxy id, -1 to dump all proxies
- type selects type of dumpable objects: 1 for frontend, 2 for backend, 4 for
server, -1 for all types. Values can be ORed, for example:
1+2=3 -> frontend+backend.
1+2+4=7 -> frontend+backend+server.
- sid is a service id, -1 to dump everything from the selected proxy.
To do this I implemented a new session flag (SN_STAT_BOUND), added three
variables in data_ctx.stats (iid, type, sid), modified dumpstats.c and
completely reworked process_uxst_stats: now it waits for a "\n"
terminated string, splits args and uses them. BTW: it should be quite easy
to add new commands, for example to enable/disable servers; the only problem
I can see is the not very lucky config name (*stats* socket). :|
During this work I also fixed two bugs:
- s->flags were not initialized for proto_uxst
- missing comma if throttling not enabled (caused by a stupid change in
"Implement persistent id for proxies and servers")
Other changes:
- No more magic type values, use STATS_TYPE_FE/STATS_TYPE_BE/STATS_TYPE_SV
- Don't memset the full s->data_ctx (it was clearing s->data_ctx.stats.{iid/type/sid});
instead initialize stats.sv & stats.sv_st (stats.px and stats.px_st were already
initialized)
With all that changes it was extremely easy to write a short perl plugin
for a perl-enabled net-snmp (also included in this patch).
29385 is my PEN (Private Enterprise Number) and I'm willing to donate
the SNMPv2-SMI::enterprises.29385.106.* OIDs for HAProxy if there
is nothing assigned already.
GCC4 is stupid (unbelievable news!).
When some code uses __builtin_expect(x != 0, 1), it really performs
the check of x != 0 then tests that the result is not zero! This is
a double check when only one was expected. Some performance drops
of 10% in the HTTP parser code have been observed due to this bug.
GCC 3.4 is fine though.
A solution consists in expecting that the tested value is 1. In
this case, it emits the correct code, but it still does not seem
optimal. Finally the best solution is to ignore likely() and to
pray for the compiler to emit correct code. However, we still have
to fix unlikely() to remove the test there too, and to fix all
code which passed pointers there to pass integers instead.
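A hedged sketch of what the resulting macros look like, as described above
(exact definitions may differ): for gcc >= 4, likely() becomes a no-op and
unlikely() passes the value itself instead of the result of a comparison,
so the compiler does not emit a second test.

    #if __GNUC__ >= 4
    #define likely(x)   (x)
    #define unlikely(x) (__builtin_expect((unsigned long)(x), 0))
    #else
    #define likely(x)   (__builtin_expect((x) != 0, 1))
    #define unlikely(x) (__builtin_expect((x) != 0, 0))
    #endif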
Balabit's TPROXY version 4 which replaces CTTPROXY provides a similar
API to the previous proxy, but relies on IP_FREEBIND instead of
IP_TRANSPARENT. Let's add it.
Using some Linux kernel patches, it is possible to redirect non-local
traffic to local sockets when IP forwarding is enabled. In order to
enable this option, we introduce the "transparent" option keyword on
the "bind" command line. It will make the socket reachable by remote
sources even if the destination address does not belong to the machine.
This patch adds the possibility to invert most of the available options by
introducing the "no" keyword, available as an additional prefix.
If it is found, arguments are shifted left and an additional flag (inv)
is set.
It makes it possible to use all options from the current defaults section,
except the selected ones, for example:
-- cut here --
defaults
contimeout 4200
clitimeout 50000
srvtimeout 40000
option contstats
listen stats 1.2.3.4:80
no option contstats
-- cut here --
Currently, inversion works only with the "option" keyword.
The patch also moves last_checks calculation at the end of the readcfgfile()
function and changes "PR_O_FORCE_CLO | PR_O_HTTP_CLOSE" into "PR_O_FORCE_CLO"
in cfg_opts so it is possible to invert forceclose without breaking httpclose
(and vice versa) and to invert tcpsplice in one proxy but to keep a proper
last_checks value when tcpsplice is used in another proxy. Now, the code
checks for PR_O_FORCE_CLO everywhere it checks for PR_O_HTTP_CLOSE.
I also decided to deprecate the "redisp" and "redispatch" keywords as it is IMHO
better to use "option redispatch", which can be inverted.
Some useful documentation was added, and at the same time I sorted
(alphabetically) all valid options both in the code and the documentation.
The code in haproxy-1.3.13.1 only supports syslogging to an internet
address. The attached patch:
- Adds support for syslogging to a UNIX domain socket (e.g., /dev/log).
If the address field begins with '/' (absolute file path), then
AF_UNIX is used to construct the socket. Otherwise, AF_INET is used.
- Achieves clean single-source build on both Mac OS X and Linux
(the sockaddr_in.sin_len and sockaddr_un.sun_len fields aren't always present).
For handling sendto() failures in send_log(), it appears that the existing
code is fine (no need to close/recreate socket) for both UDP and UNIX-domain
syslog server. So I left things alone (did not close/recreate socket).
Closing/recreating socket after each failure would also work, but would lead
to increased amount of unnecessary socket creation/destruction if syslog is
temporarily unavailable for some reason (especially for verbose loggers).
Please consider this patch for inclusion into the upstream haproxy codebase.
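A small sketch of the address-family selection described above (names and
error handling are illustrative):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* A target starting with '/' is treated as a UNIX socket path; anything
     * else keeps the existing AF_INET handling.
     */
    static int build_log_sockaddr(const char *target, struct sockaddr_storage *ss)
    {
        if (*target == '/') {
            struct sockaddr_un *un = (struct sockaddr_un *)ss;

            memset(un, 0, sizeof(*un));
            un->sun_family = AF_UNIX;
            strncpy(un->sun_path, target, sizeof(un->sun_path) - 1);
            return AF_UNIX;
        }
        /* ... fill in a struct sockaddr_in with AF_INET as before ... */
        return AF_INET;
    }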
This new function accepts inputs in various default units, from
the microsecond to the day. It detects suffixes after numbers
and performs the appropriate conversions between the user's unit
and the program's unit, considering a unit-less number in the
default unit.
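A minimal sketch of the suffix handling, normalizing everything to
milliseconds for simplicity (the real function converts to the caller's
default unit; names are illustrative):

    #include <stdlib.h>
    #include <string.h>

    static long long parse_time_ms_sketch(const char *text, long long default_unit_ms)
    {
        char *end;
        long long value = strtoll(text, &end, 10);

        if (!strcmp(end, "us")) return value / 1000;
        if (!strcmp(end, "ms")) return value;
        if (!strcmp(end, "s"))  return value * 1000;
        if (!strcmp(end, "m"))  return value * 60 * 1000;
        if (!strcmp(end, "h"))  return value * 3600 * 1000;
        if (!strcmp(end, "d"))  return value * 86400 * 1000;
        return value * default_unit_ms;      /* unit-less: default unit */
    }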
In order to avoid issues in the future, we want to restrict
the set of allowed characters for identifiers. Starting from
now, only A-Z, a-z, 0-9, '-', '_', '.' and ':' will be allowed
for a proxy, a server or an ACL name.
A test file has been added to check the restriction.
AIX does not know about MSG_DONTWAIT. Fortunately, nearly all sockets
are already set to O_NONBLOCK, so it's not even required to change the
code. It was only necessary to add this fcntl to the log socket which
lacked it. The MSG_DONTWAIT value has been defined to zero when unset
in order to make the code cleaner and more portable.
Also, on AIX, "hz" is defined, which causes a problem with one function
parameter in time.c. It's enough to rename the parameter there. Last,
fix a missing #include <string.h> in proxy.c.
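For reference, the usual shape of such a portability shim (safe here because,
as stated above, the sockets are already non-blocking via O_NONBLOCK):

    #include <sys/socket.h>

    #ifndef MSG_DONTWAIT
    #define MSG_DONTWAIT 0   /* systems such as AIX lack this flag */
    #endif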
Hello,
You will find attached an updated release of the previously submitted patch.
It polishes some parts and extends the ACL engine to match the IP and PORT
parsed in the HTTP request (and takes care of the comments made by Willy! ;))
Best regards,
Alexandre
Currently, there is a hidden line length limit in haproxy, set
to 256-1 chars. With large acls (for example many hdr(host) matches)
it may not be enough and, which is even worse, the error message may
be totally confusing as everything above this limit is treated
as the next line:
echo -ne "frontend aqq 1.2.3.4:80\nmode http\nacl e hdr(host) -i X X X X X X X www.xx.example.com stats\n"|
sed s/X/www.some-host-name.example.com/g > ha.cfg && haproxy -c -f ./ha.cfg
[WARNING] 300/163906 (11342) : parsing [./ha.cfg:4] : 'stats' ignored because frontend 'aqq' has no backend capability.
Recently I hit a similar problem and it took me a while to find out why
requests for "stats" were not handled properly.
This patch:
- makes the limit configurable (LINESIZE)
- increases default line length limit from 256 to 2048
- increases MAX_LINE_ARGS from 40 to 64
- fixes hidden assignment in fgets()
- moves arg/end/args/line inside the loop, making code auditing easier
- adds a check that shows error if the limit is reached
- changes "*line++ = 0;" to "*line++ = '\0';" (cosmetics)
With this patch, when LINESIZE is defined to 256, above example produces:
[ALERT] 300/164724 (27364) : parsing [/tmp/ha.cfg:3]: line too long, limit: 255.
[ALERT] 300/164724 (27364) : Error reading configuration file : /tmp/ha.cfg
Building ev_kqueue on OpenBSD causes some warnings to occur,
because OpenBSD also uses LIST_* macros in sys/queue.h, included
from sys/event.h. Simply undefine those macros since we don't
need them.
This is in fact the same as ultoa() except that it's possible to
pass the string to be returned in case the value is NULL. This is
useful to report limits in printf calls.
Current ultoa() function is limited to one use per expression or
function call. Sometimes this is limiting. Change this in favor
of an array of 10 return values and shorter macros U2A0..U2A9
which respectively call the function with the 10 different buffers.
localtime() was called with pointers to tv_sec, which is time_t on
some platforms and long on others. A problem was encountered on
Sparc64 under OpenBSD where tv_sec is long (64 bits) and time_t is
32 bits. Since this architecture is big-endian, it exhibited the
bug because localtime() always worked with the high part of the
value which is always zero. This problem was identified and debugged
by Thierry Fournier.
The correct solution is to pass the date by value and not by pointer,
through an intermediate function. The use of localtime_r() instead of
localtime() also made it possible to get rid of the first call to
localtime() since it does not need to allocate memory anymore.
The strl2ic() and strl2uic() primitives used to convert string to
integers could return 10 times the value read if they stopped on
non-digit because of a mis-placed loop exit.
Hello,
This patch implements new statistics for SLA calculation by adding new
field 'Dwntime' with total down time since restart (both HTTP/CSV) and
extending status field (HTTP) or inserting a new one (CSV) with time
showing how long each server/backend has been in its current state. Additionally,
down transitions are also calculated and displayed for backends, so it is
possible to know how many times the selected backend was down, generating the "No
server is available to handle this request." error.
New information is presented in two different ways:
- for HTTP: a "human readable form", one of "100000d 23h", "23h 59m" or
"59m 59s"
- for CSV: seconds
I believe that seconds resolution is enough.
As there are more columns in the status page I decided to shrink some
names to make more space:
- Weight -> Wght
- Check -> Chk
- Down -> Dwn
Making described changes I also made some improvements and fixed some
small bugs:
- don't increment s->health above 's->rise + s->fall - 1'. Previously it
was incremented and then (re)set to 's->rise + s->fall - 1'.
- do not set server down if it is down already
- do not set server up if it is up already
- fix colspan in multiple places (mostly introduced by my previous patch)
- add missing "status" header to CSV
- fix order of retries/redispatches in server (CSV)
- s/Tthen/Then/
- s/server/backend/ in DATA_ST_PX_BE (dumpstats.c)
Changes from previous version:
- deal with negative time intervals
- don't rely on s->state (SRV_RUNNING)
- slightly reworked human_time + compacted format (no spaces). If needed it
can be used in the future for other purposes by optionally making "cnt"
an argument
- leave set_server_down mostly unchanged
- only slightly reworked "process_chk: 9"
- additional fields in CSV are appended to the right
- fix "SEC" macro
- named arguments (human_time, be_downtime, srv_downtime)
Hope it is OK. If only cosmetic changes are needed please feel free
to correct it; however, if some bigger changes are required I would
like to discuss them first, or at least to know what exactly was changed,
especially since I already put this patch into my production server. :)
Thank you,
Best regards,
Krzysztof Oledzki
For people who manage many haproxies, it is sometimes convenient
to be informed of their version. This patch adds this, with the
option to disable this report by specifying "stats hide-version".
Also, the feature may be permanently disabled by setting the
STATS_VERSION_STRING to "" (empty string), or the format can
simply be adjusted.
When one server in one backend has a very low check interval, it imposes
its value as the minimal interval, causing all other servers to start
their checks close to each other, thus partially voiding the benefits of
the spread checks.
The solution consists in ignoring intervals lower than a given value
(SRV_CHK_INTER_THRES = 1000 ms) when computing the minimal interval,
and then assigning them a start date relative to their own interval
and not the global one.
With this change, the checks distribution clearly looks better.
The version does not appear anymore in the Makefiles nor in
the include files. It was a nightmare to maintain. Now there
is a VERSION file which contains the major version, a VERDATE
file which contains the date for this version and a SUBVERS
file which may contain a sub-version.
A "make version" target has been added to all makefiles to
check the version. The GNU Makefile also has an update-version
target to update those files. This should never be used.
It is still possible to override those values by specifying
them in the equivalent make variables. By default, the GNU
makefile tries to detect a GIT repository and always uses the
version and date from the current repository. This can be
disabled by setting IGNOREGIT to a non-void value.
src/chtbl.c, src/hashpjw.c and src/list.c are distributed under
an obscure license. While Aleks and I believe that this license
is OK for haproxy, other people think it is not compatible with
the GPL.
Whether it is or not is not the problem. The fact that it raises
a doubt is sufficient for this problem to be addressed. Arnaud
Cornet rewrote the unclear parts with clean GPLv2 and LGPL code.
The hash algorithm has changed too and the code has been slightly
simplified in the process. A lot of care has been taken in order
to respect the original API as much as possible, including the
LGPL for the exportable parts.
The new code has not been thoroughly tested but it looks OK now.
Sometimes it may be desirable to automatically refresh the
stats page. Most browsers support the "Refresh:" header with
an interval in seconds. Specifying "stats refresh xxx" will
automatically add this header.
- acl: smarter integer comparison support in ACLs
- acl: specify the direction during fetches
- acl: provide the argument length for fetch functions
- acl: provide a reference to the expr to fetch()
- acl: implement matching on header values
- acl: support matching on 'path' component
- acl: permit to return any header when no name specified
- errorfile: use a local file to feed error messages
- negation in ACL conds was not cleared between terms
- fix segfault at exit when using captures
- improve memory freeing upon exit
- acl: support '-i' to ignore case when matching
- str2net() must not change the const char *
- provide default ACLs
- acl: distinguish between request and response headers
- added the 'use_backend' keyword for full content-switching
- acl: added the TRUE and FALSE ACLs.
- shut warnings 'is*' macros from ctype.h on solaris
- do not re-arm read timeout in SHUTR state
- optimize I/O by detecting system starvation
- the epoll FD must not be shared between processes
- limit the number of events returned by *poll*
By default, epoll/kqueue used to return as many events as possible.
This could sometimes cause huge latencies (latencies of up to 400 ms
have been observed with many thousands of fds at once). Limiting the
number of events returned also reduces the latency by avoiding too
much blind processing. The value is set to 200 by default and can be
changed in the global section using the tune.maxpollevents parameter.
- fixed ev_sepoll again by rewriting the state machine
- switched all timeouts to timevals instead of milliseconds
- improved memory management using mempools v2.
- several minor optimizations
When we're interrupted by another instance, it is very likely
that the other one will need some memory. Now we know how to
free what is not used, so let's do it.
Also only free non-null pointers. Previously, pool_destroy()
did implicitly check for this case, which was incidentally
needed.
- keep the number of users of each pool
- call the garbage collector on out of memory conditions
- sort the pools by size for faster creation
- force the alignment size to 16 bytes instead of 4*sizeof(void *)
Also during this process, a bug was found in appsession_refresh().
It would not automatically requeue the task in the queue, so the
old sessions would not vanish.
The timeout functions were difficult to manipulate because they were
rounding results to the millisecond. Thus, it was difficult to compare
and to check what expired and what did not. Also, the comparison
functions were heavy with multiplies and divides by 1000. Now, all
timeouts are stored in timevals, reducing the number of operations
for updates and leading to cleaner and more efficient code.
- several fixes in ev_sepoll
- fixed some expiration dates on some tasks
- fixed a bug in connection establishment detection due to speculative I/O
- fixed rare bug occurring on TCP with early close (reported by Andy Smith)
- implemented URI hashing algorithm (Guillaume Dallaire)
- implemented SMTP health checks (Peter van Dijk)
- replaced the rbtree with ul2tree from old scheduler project
- new framework for generic ACL support
- added the 'acl' and 'block' keywords to the config language
- added several ACL criteria and matches (IP, port, URI, ...)
- cleaned up and better modularization for some time functions
- fixed list macros
- fixed useless memory allocation in str2net()
- store the original destination address in the session
Peter van Dijk contributed this patch which implements the "smtpchk"
option, which is to SMTP what "httpchk" is to HTTP. By default, it sends
"HELO localhost" to the servers, and waits for the 250 message, but it
can also send a specific request.
tv_cmp2_ms handles multiple combinations of tv1 and tv2, but only
one form is used: (tv1 <= tv2). So it is overkill to use it everywhere.
A new function designed to do exactly this has been written for that
purpose: tv_cmp2_le. Also, removed old unused tv_* functions.
The fact that TV_ETERNITY was 0 was very awkward because it
required that comparison functions handled the special case.
Now it is ~0 and all comparisons are performed on unsigned
values, so that it is naturally greater than any other value.
A performance gain of about 2-5% has been noticed.
- modularized the polling mechanisms and use function pointers instead
of macros at many places
- implemented support for FreeBSD's kqueue() polling mechanism
- fixed a warning on OpenBSD : MIN/MAX redefined
- change socket registration order at startup to accommodate kqueue.
- several makefile cleanups to support old shells
- fix build with limits.h once for all
- ev_epoll: do not rely on fd_sets anymore, use changes stacks instead.
- fdtab now holds the results of polling
- implemented support for speculative I/O processing with epoll()
- remove useless calls to shutdown(SHUT_RD), resulting in small speed boost
- auto-registering of pollers at load time
The pollers will now be able to speculatively call the I/O
processing functions and decide whether or not they want to
poll on those FDs. The changes primarily consist in teaching
those functions how to pass the info that they got an EAGAIN.
Patch #cf83df3d162687d9c74783357421bd89f596eaac was stupid. Including
limits.h is portable and easier. At least it now builds on Solaris,
FreeBSD, Linux and OpenBSD.
FreeBSD stores INT_MIN and INT_MAX in sys/limits.h only. Other systems
(Solaris) have it in sys/types.h and do not have sys/limits.h. Let's
include sys/limits.h only if INT_MAX is not defined.
- rewriting either the status line or request line could crash the
process due to a pointer which ought to be reset before parsing.
- rewriting the status line in the response did not work, it caused
a 502 Bad Gateway due to an erroneous state during parsing
- fix reqadd when no option httpclose is used.
- removed now unused fiprm and beprm from proxies
- split logs into two versions : TCP and HTTP
- added some docs about http headers storage and acls
- added a VIM script for syntax color highlighting (Bruno Michel)
- fixed several bugs which might have caused a crash with bad configs
- several optimizations in header processing
- many progresses towards transaction-based processing
- option forwardfor may be used in frontends
- completed HTTP response processing
- some code refactoring between request and response processing
- new HTTP header manipulation functions
- optimizations on the recv() patch to reduce CPU usage under very
high data rates.
- more user-friendly help about the 'usesrc' keyword (CTTPROXY)
- username/groupname support from Marcus Rueckert
- added the "except" keyword to the "forwardfor" option (Bryan German)
- support for health-checks on other addresses (Fabrice Dulaunoy)
- makefile for MacOS 10.4 / Darwin (Dan Zinngrabe)
- do not insert "Connection: close" in HTTP/1.0 messages
Generally, if a recv() returns less bytes than the MSS, it means that
there is nothing left in the system's buffers, and that it's not worth
trying to read again because we are very likely to get nothing. A
default read low limit has been set to 1460 bytes below which we stop
reading.
This has brought a little speed boost on small objects while maintaining
the same speed on large objects.
Multiple read polling was temporarily disabled, which had the side
effect of burning huge amounts of CPU on large objects. It has now
been re-implemented with a limit of 8 calls per wake-up, which seems
to provide best results at least on Linux.
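A hedged sketch combining both heuristics described above (read low limit
and a bounded number of reads per wake-up); names, limits and buffer
handling are illustrative:

    #include <sys/types.h>
    #include <sys/socket.h>

    #define READ_LOW_LIMIT 1460   /* roughly one MSS */
    #define MAX_READS         8   /* reads per wake-up */

    /* Stop when recv() returns less than the low limit (the kernel buffer is
     * most likely empty) or after MAX_READS iterations.
     */
    static ssize_t bounded_read(int fd, char *buf, size_t room)
    {
        ssize_t total = 0;
        int loops;

        for (loops = 0; loops < MAX_READS && room > 0; loops++) {
            ssize_t ret = recv(fd, buf + total, room, MSG_DONTWAIT);

            if (ret <= 0)
                break;
            total += ret;
            room  -= ret;
            if (ret < READ_LOW_LIMIT)
                break;
        }
        return total;
    }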
The new rbtree-based scheduler makes heavy use of tv_cmp2(), and
this function becomes a huge CPU eater. Refine it a little bit in
order to slightly reduce CPU usage.
- fix critical bug introduced with 1.3.6 : an empty request header
may lead to a crash due to missing pointer assignment
- hdr_idx might be left uninitialized in debug mode
- fixed build on FreeBSD due to missing fd_set declaration
Sorin Pop reported a patch to fix build on FreeBSD.
The file common/standard.h used an fd_set in a declaration
but did not include enough headers for it to be known.
- stats now support the HEAD method too
- extracted http request from the session
- huge rework of the HTTP parser which is now a 28-state FSM.
- linux-style likely/unlikely macros for optimization hints
- do not create a server socket when there's no server
- added complete support and doc for TCP Splicing
- replaced the wait-queue linked list with an rbtree.
- stats: swap color sets for active and backup servers
- try to guess server check port when unset
- a few bugfixes and cleanups
The stats page could not tell the difference between a FE and a BE.
It has been revamped to indicate all relevant information. The font
is also slightly smaller in order for all the info to fit into small
screens. The data output path has been greatly simplified to use
string chunks.
Sin Yu's patch to allow changing the proxy from a regex was merged
with small changes :
- req_cap/rsp_cap are not reassigned to the new proxy, they stay
attached to the frontend
- the actions have been renamed "reqsetbe" and "reqisetbe" for
"set BackEnd".
- the buffer is not reset after the switch, instead, the headers are
parsed again by the backend
- in Sin's patch, it was theoretically possible to switch multiple times,
but the switching track was lost, making it impossible to apply
server responses in the reverse order. Now switching is limited to
1 action (separation between frontend and backend) but the filters
remain.
Now it will be extremely easy to add other switching conditions, such
as host matching, URI matching, etc...
There's still hard work to be done on the logs and stats.
This structure will consume 4 bytes per header to keep track of
headers within a request or a response without having to parse
the whole request for each regex. As it's not possible to allocate
only 4 bytes, we define a max number of HTTP headers. We set it
to (BUFSIZE+79)/80 so that 8kB buffers can contain 100 headers
(like Apache), resulting in 400 bytes dedicated to indexation,
or about 400/(2*8kB) ~= 2.4% of the memory usage.
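A sketch of what such a 4-byte index entry can look like (field names and
widths are illustrative; the real structure may differ):

    #define BUFSIZE      8192
    #define MAX_HTTP_HDR ((BUFSIZE + 79) / 80)   /* ~100 headers for 8kB buffers */

    struct hdr_idx_elem_sketch {
        unsigned int len  : 16;   /* header line length, without the ending CRLF */
        unsigned int cr   :  1;   /* 1 if the line ends with CRLF, 0 if LF alone */
        unsigned int next : 15;   /* index of the next header in the list */
    };                            /* 4 bytes per indexed header */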
SO_REUSEPORT does not exist on Linux but the checks are available in
the code. With a little patch, it's possible to implement the feature,
but the value of SO_REUSEPORT will still have to be known from userland.
This patch adds a workaround to this problem by figuring out the value
for the one used by SO_REUSEADDR.
Released 1.3.3 with the following changes :
- fix broken redispatch option in case the connection has already
been marked "in progress" (ie: nearly always).
- support regparm on x86 to speed up some often called functions
- removed a few useless calls to gettimeofday() in log functions.
- lots of 'const char*' cleanups
- turn every FD_* into functions which are faster on recent CPUs
- builds again on OpenBSD and Solaris
Some of the tv_* functions are called very often. Passing their
arguments as registers is quite faster. This can be disabled
by setting CONFIG_HAP_DISABLE_REGPARM.
As suggested by Markus Elfring, a few "const char *" have replaced
some "char *" declarations where a function is not expected to
modify a value. It does not change the code but it helps detecting
coding errors.
- started the changes towards I/O completion callbacks. stream_sock* have
replaced event_*.
- added the new "reqtarpit" and "reqitarpit" protection features
It is now possible to tarpit connections based on regex matches.
The tarpit timeout is equal to the contimeout. A 500 server error
response is faked, and the logs show the status flags as "PT" which
indicate the connection has been tarpitted.
Released 1.3.1 with the following changes from 1.2.15 :
- now, haproxy warns about missing timeout during startup to try to
eliminate all those buggy configurations.
- added "Content-Type: text/html" in responses wherever appropriate, as
suggested by Cameron Simpson.
- implemented "option ssl-hello-chk" to use SSLv3 CLIENT HELLO messages to
test server's health
- implemented "monitor-uri" so that haproxy can reply to a specific URI with
an "HTTP/1.0 200 OK" response. This is useful to validate multiple proxies
at once.