Too many problem reports were caused by missing timeouts. While
there has never been any default value since version 1.0, having
no timeout is abnormal in networked environments and leads to
various problems such as accumulating CLOSE_WAIT sockets and other
nasty effects. For this reason, it's better to annoy the users
until they fix their configs than to let them run buggy
configurations.
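As a reminder, the three classic timeouts of this era are set as
follows (a minimal sketch; the values are purely illustrative, not
recommendations) :

    defaults
        contimeout  5000    # max time to get a server connection (ms)
        clitimeout  50000   # max inactivity time on the client side (ms)
        srvtimeout  50000   # max inactivity time on the server side (ms)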
The files are now stored under :
- include/haproxy for the generic includes
- include/types.h for the structures needed within prototypes
- include/proto.h for function prototypes and inline functions
- src/*.c for the C files
Most include files are now covered by LGPL. A last move still needs
to be done to put inline functions under GPL and not LGPL.
Version has been set to 1.3.0 in the code but some checks still
need to be done before releasing.
Released 1.2.14 with the following changes :
- new HTML status report with the 'stats' keyword.
- added the 'abortonclose' option to better resist traffic surges
- implemented dynamic traffic regulation with the 'minconn' option
- show request time on denied requests
- definitely fixed hot reconf on OpenBSD by the use of SO_REUSEPORT
- now a proxy instance is allowed to run without servers, which is
useful to dedicate one instance to stats
- added lots of error counters
- a missing parenthesis prevented matching of cacheable cookies
- a missing parenthesis in poll_loop() might have caused missed events.
When 'minconn' is set, the number of simultaneous sessions sent to the server
will be limited by a dynamic value depending on the global load on the
instance itself. The principle is to fix the maximal concurrency on the server
proportionally to the instance's usage relative to its maxconn, with a minimum
fixed to <minconn>. The formula for the number of simultaneous sessions sent
to the server is then max(srv_minconn, srv_maxconn*px_conn/px_maxconn). This
helps unload the servers when the load is very low.
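As a worked example (the server name and address are made up), consider a
server declared with both limits :

    server  web1 192.168.1.10:80 minconn 10 maxconn 100

If the instance is globally handling 300 sessions out of a 'maxconn 1000',
the server will accept up to max(10, 100*300/1000) = 30 concurrent sessions.
At 50 sessions (5% usage), the minconn floor applies :
max(10, 100*50/1000) = 10.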
It has been rewritten and now supports an initialization state. It now also
prevents dumping stopped (disabled) listeners, and it is possible to
specify a scope with a list of proxies that are allowed to be dumped from
the one being configured ('.' meaning "this one"). The 'stats' entry can
be configured from the 'defaults' instance and it is correctly flushed
from proxies which redefine it.
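For example (the URI below is made up for illustration), the scope can be
set once in the defaults section :

    defaults
        stats   uri /haproxy-stats
        stats   scope .

Each proxy inheriting these defaults will then only be allowed to dump its
own status.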
Removed a leftover debugging write() and fixed buffer management.
Now the HTML table output uses a color code with a caption. Some
more statistics have been collected such as maximum values reached
and failed health checks. Null limits now show "-" instead of "0".
Right now it only validates the user/passwd against a specified list,
lets the user pass through the proxy if the authentication is OK, and
refuses any invalid access with a 401 Unauthorized response.
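Assuming this is the mechanism behind the stats page authentication, a
hedged sketch with made-up credentials would look like :

    listen  admin_stats 0.0.0.0:8080
        mode    http
        stats   uri  /stats
        stats   auth admin:mypassword

Any request with missing or wrong credentials then gets the 401
Unauthorized response described above.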
- an uninitialized field in the struct session could cause a crash when
the session was freed. This has been encountered on Solaris only.
- Solaris and OpenBSD do not support shutdown() on a listening socket. Let's
be nice to them by performing a soft stop if pause fails.
At least OpenBSD and Solaris do not support shutdown() on a listening socket.
So instead of blocking the hot reconfiguration, at least we can perform a
soft stop if the shutdown fails, so that the new daemon can bind to the
ports without trouble.
Summary of changes :
- 'maxconn' server parameter to do per-server session limitation
- queueing to support non-blocking session limitation
- fixed removal of cookies for cookie-less servers such as backup servers
- two separate wait queues for expirable and non-expirable tasks provide
better performance with lots of sessions.
- some code cleanups and performance improvements
- made state dumps a bit more verbose
- fixed missing checks for NULL srv in dispatch mode
- load balancing on backup servers was not possible in source hash mode.
- two session flags shared the same bit, but fortunately they could not
  be set at the same time.
If a task was queued on a server and if this task was alone and aborted
before any other task did anything, there were situations in which it
might have queued itself in the run queue, then exited, and the upcoming
tv_queue() associated with the run loop would have resurrected it silently,
causing crashes in task_queue().
The new principle consists in assigning a task to every server that
needs a connection limit. This task will be woken up every time we
suspect some room might have been freed to queue a task. The server's
task itself will then only have to walk its own queue and run as many
tasks as the available slots allow.