select, poll and epoll now have their dedicated functions and have
been split into distinct files. Several FD manipulation primitives
have been provided with each poller.
The rest of the code still needs to be cleaned up to remove traces of
StaticReadEvent/StaticWriteEvent. A trick involving a macro is used
as a temporary workaround for now. Some work remains to factor out
the tests and sets everywhere.
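Below is a rough sketch of what such a per-poller abstraction can look
like; the structure and function names are illustrative, not the actual
internal API:

    /* Illustrative only: each poller exposes init/run/term entry points
     * plus FD manipulation primitives, so callers no longer touch the
     * fd_set macros directly. */
    struct poller {
        const char *name;
        int  (*init)(struct poller *p);          /* allocate private event tables */
        void (*term)(struct poller *p);          /* release them on exit */
        void (*run)(struct poller *p, int wait); /* one polling iteration */
        void (*set_read)(int fd);                /* want read events on fd */
        void (*clr_read)(int fd);
        void (*set_write)(int fd);               /* want write events on fd */
        void (*clr_write)(int fd);
    };
    /* select, poll and epoll would each provide one such descriptor. */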
- rewriting either the status line or the request line could crash the
  process due to a pointer which was not reset before parsing.
- rewriting the status line in the response did not work, it caused
a 502 Bad Gateway due to an erroneous state during parsing
- fixed reqadd when 'option httpclose' is not used.
- removed now unused fiprm and beprm from proxies
- split logs into two versions: TCP and HTTP
- added some docs about http headers storage and acls
- added a VIM script for syntax color highlighting (Bruno Michel)
Logs are now handled by dedicated functions. The HTTP implementation
moved to proto_http.c and has been cleaned up a bit. A frontend with
'option httplog' but no log declared will no longer call the function.
The fiprm and beprm were added to ease the transition from
single-listener mode to frontends+backends. They are no longer
needed and make the code a bit more complicated. Remove them.
- fixed several bugs which might have caused a crash with bad configs
- several optimizations in header processing
- much progress towards transaction-based processing
- option forwardfor may be used in frontends
- completed HTTP response processing
- some code refactoring between request and response processing
- new HTTP header manipulation functions
- optimizations on the recv() patch to reduce CPU usage under very
high data rates.
- more user-friendly help about the 'usesrc' keyword (CTTPROXY)
- username/groupname support from Marcus Rueckert
- added the "except" keyword to the "forwardfor" option (Bryan German)
- support for health-checks on other addresses (Fabrice Dulaunoy)
- makefile for MacOS 10.4 / Darwin (Dan Zinngrabe)
- do not insert "Connection: close" in HTTP/1.0 messages
Struct server has gathered lots of information over time, but it is
better for clarity and performance to group that information by usage,
with the most commonly used fields at the top and the least used ones
at the bottom.
Patch from Fabrice Dulaunoy. Explanation below, and script
merged in examples/.
This patch allows a different address to be set in the check part for
each server (and not only a specific port).
I need this feature because I have a complex setup where, when a
specific farm goes down, I need to switch over a set of other farms,
even if those other farms are behaving perfectly well.
For that purpose, I have made a small Perl daemon with some regex or
port tests which allows me to test a bunch of things.
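As a rough illustration of the idea (the structure and field names below
are hypothetical, not haproxy's own), the health-check destination could
be selected like this:

    #include <netinet/in.h>

    struct srv_example {
        struct sockaddr_in addr;       /* address used for regular traffic */
        struct sockaddr_in check_addr; /* optional dedicated check address */
    };

    /* returns the address health checks should connect to */
    static struct sockaddr_in get_check_dest(const struct srv_example *s)
    {
        if (s->check_addr.sin_addr.s_addr != 0)
            return s->check_addr;      /* explicit address given for checks */
        return s->addr;                /* default: check the service address */
    }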
Patch from Bryan Germann for 1.2.17.
In some circumstances, it is useful not to add the X-Forwarded-For
header, for instance when the client is another reverse-proxy or
stunnel running on the same machine and which already adds it. This
patch adds the "except" keyword to the "forwardfor" option, allowing
to specify an address or network which will not be added to this
header.
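A minimal sketch of the matching logic, assuming the excluded network is
stored as an address/mask pair in network byte order (names are
illustrative):

    #include <netinet/in.h>
    #include <stdbool.h>

    /* true when the client falls inside the excluded address or network,
     * in which case the X-Forwarded-For header is not appended */
    static bool xff_excepted(struct in_addr client,
                             struct in_addr except_net,
                             struct in_addr except_mask)
    {
        return (client.s_addr & except_mask.s_addr) == except_net.s_addr;
    }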
Previously, use of the "usesrc" keyword could silently fail if
either the module was not loaded, or the user did not have enough
permissions. Now the errors are better diagnosed and more appropriate
advice is given.
Generally, if a recv() returns fewer bytes than the MSS, it means that
there is nothing left in the system's buffers, and that it's not worth
trying to read again because we are very likely to get nothing. A
default read low limit has been set to 1460 bytes below which we stop
reading.
This has brought a little speed boost on small objects while maintaining
the same speed on large objects.
Multiple read polling was temporarily disabled, which had the side
effect of burning huge amounts of CPU on large objects. It has now
been re-implemented with a limit of 8 calls per wake-up, which seems
to provide best results at least on Linux.
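A simplified sketch of such a bounded read loop, combining the 1460-byte
low limit with the 8-calls-per-wakeup cap (buffer handling is reduced to
a flat buffer for clarity, and the names are illustrative):

    #include <errno.h>
    #include <unistd.h>

    #define READ_LOW_LIMIT       1460  /* about one MSS: below this, stop */
    #define MAX_READS_PER_WAKEUP 8

    static ssize_t bounded_read(int fd, char *buf, size_t size)
    {
        ssize_t total = 0;

        for (int loops = 0; loops < MAX_READS_PER_WAKEUP && size > 0; loops++) {
            ssize_t ret = read(fd, buf + total, size);

            if (ret < 0)
                return (errno == EAGAIN && total) ? total : ret;
            if (ret == 0)
                break;                 /* connection closed by the peer */
            total += ret;
            size  -= ret;
            if (ret < READ_LOW_LIMIT)
                break;                 /* kernel buffers are likely empty */
        }
        return total;
    }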
Two new functions http_header_add_tail() and http_header_add_tail2()
make it easier to append headers, and also reduce the number of
sprintf() calls and perform stricter checks.
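For illustration only, the idea behind such helpers looks like the
following; the signature is hypothetical and not the real
http_header_add_tail() prototype:

    #include <string.h>

    /* append a complete "Name: value" header at the end of the header
     * block with an explicit length check instead of an unchecked
     * sprintf(); returns the number of bytes appended or -1 on overflow */
    static int append_header(char *buf, size_t used, size_t size,
                             const char *hdr, size_t hdr_len)
    {
        if (used + hdr_len + 2 > size)
            return -1;
        memcpy(buf + used, hdr, hdr_len);
        memcpy(buf + used + hdr_len, "\r\n", 2);
        return (int)(hdr_len + 2);
    }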
Some session flags were clearly related to HTTP transactions.
A new 'flags' field has been added to http_txn, and the
associated flags moved to proto_http.h.
Now the response is correctly processed in the backend first,
then in the frontend. It has been through intensive tests to
catch regressions, and everything seems OK now, but the code
is still young.
Some parts of HTTP processing were incorrectly called "request" while
they are messages or transactions. The following structure members
have changed:
http_msg.hdr_state => msg_state
http_msg.sor => som
http_req.req_state => removed
http_req => http_txn
The new rbtree-based scheduler makes heavy use of tv_cmp2(), and
this function becomes a huge CPU eater. Refine it a little bit in
order to slightly reduce CPU usage.
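As an illustration of the kind of refinement involved (not the exact
tv_cmp2() code), comparing the seconds first keeps the common case down
to a single test:

    #include <sys/time.h>

    static inline int tv_cmp_fast(const struct timeval *a,
                                  const struct timeval *b)
    {
        if (a->tv_sec != b->tv_sec)
            return (a->tv_sec < b->tv_sec) ? -1 : 1;
        if (a->tv_usec != b->tv_usec)
            return (a->tv_usec < b->tv_usec) ? -1 : 1;
        return 0;
    }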
- fix critical bug introduced with 1.3.6 : an empty request header
may lead to a crash due to missing pointer assignment
- hdr_idx might be left uninitialized in debug mode
- fixed build on FreeBSD due to missing fd_set declaration
Sorin Pop provided a patch to fix the build on FreeBSD.
The file common/standard.h used an fd_set in a declaration
but did not include enough headers for it to be known.
- stats now support the HEAD method too
- extracted http request from the session
- huge rework of the HTTP parser which is now a 28-state FSM.
- linux-style likely/unlikely macros for optimization hints (see the
  sketch after this list)
- do not create a server socket when there's no server
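The likely/unlikely hints follow the usual GCC construct, sketched here
for reference:

    /* __builtin_expect() tells the compiler which branch is the common
     * one so it can lay out the code accordingly; fall back to a no-op
     * on non-GCC compilers */
    #if defined(__GNUC__)
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)
    #else
    #define likely(x)   (x)
    #define unlikely(x) (x)
    #endif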
The HTTP parser has been rewritten for better compliance with RFC2616.
The same parser is now usable for both requests and responses, and
it now supports HTTP/0.9 as well as multi-line headers. It has also
been improved for speed; a typical HTTP request is parsed in about
2 microseconds on a 1 GHz processor.
The monitor-uri check has been moved so that the requests are not
logged. The httpclose option now tries to change as little as
possible in the request, and does not affect the first header if
it is already set to 'close'. HTTP/0.9 requests are converted to
HTTP/1.0 before being forwarded.
Headers and request transformations are now distinct. The headers
list is updated after each insertion/removal/transformation. The
request is re-parsed and checked after each transformation. It is
no longer possible to remove a request, and transformations which
lead to an invalid request line are now rejected.
A struct http_req has been created to collect all the information
related to an HTTP request being processed. Right now, it is
still in the struct session but the boundary is now clear.
- added complete support and doc for TCP Splicing
- replaced the wait-queue linked list with an rbtree.
- stats: swap color sets for active and backup servers
- try to guess server check port when unset
- a few bugfixes and cleanups
This patch from Sin Yu makes use of an rbtree for the wait queue,
which will solve the slowdown problem encountered when timeouts
are heterogeneous in the configuration. The next step will be to
turn maintain_proxies() into a per-proxy task so that we won't
have to scan them all after each poll() loop.
The stats page could not tell the difference between a FE and a BE.
It has been revamped to indicate all relevant information. The font
is also slightly smaller in order for all the info to fit into small
screens. The data output path has been greatly simplified to use
string chunks.
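A string chunk is essentially a buffer carrying its size and the length
already used, so producers can append safely; the sketch below uses
illustrative names:

    #include <stdarg.h>
    #include <stdio.h>

    struct chunk {
        char *str;   /* start of the buffer */
        int   size;  /* total allocated size */
        int   len;   /* bytes currently used */
    };

    /* appends formatted data; returns bytes added or -1 when it would
     * not fit, in which case the caller should flush the chunk first */
    static int chunk_append(struct chunk *chk, const char *fmt, ...)
    {
        va_list args;
        int ret;

        va_start(args, fmt);
        ret = vsnprintf(chk->str + chk->len, chk->size - chk->len, fmt, args);
        va_end(args);

        if (ret < 0 || ret >= chk->size - chk->len)
            return -1;
        chk->len += ret;
        return ret;
    }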
The notion of capabilities has been added to the proxy so that we
know whether a proxy supports frontend, backend, or rulesets. Given
this, some parameters are optional, some are ignored with a warning
and others are forbidden. It is now possible to write valid two-level
configs without binding to dummy address/ports.
The maxconn argument is used only for the listeners, and the
fullconn is used only for the backends. If unset, it inherits
maxconn's value which itself can inherit the default or the
global value (we might need to change this).
It is now possible to define an errorloc in the backend as well as
in the frontend. The backend's will be used first and, if undefined,
the frontend's will be used instead. If neither is defined, the
original error messages will be used.
HTTP error messages were all special cases handled by an 'if'.
Now they are all stored in an array so that it will be easier to
add new ones. Also, the return functions now take chunks as inputs,
which should make it easier to provide alternative return
messages if needed.
Sin Yu's patch permitting the proxy to be changed from a regex was
merged with a few changes:
- req_cap/rsp_cap are not reassigned to the new proxy, they stay
attached to the frontend
- the actions have been renamed "reqsetbe" and "reqisetbe" for
"set BackEnd".
- the buffer is not reset after the switch, instead, the headers are
parsed again by the backend
- in Sin's patch, it was theoretically possible to switch multiple times,
but the switching track was lost, making it impossible to apply
server responses in the reverse order. Now switching is limited to
1 action (separation between frontend and backend) but the filters
remain.
Now it will be extremely easy to add other switching conditions, such
as host matching, URI matching, etc...
There is still hard work to be done on the logs and stats.
The nbconn attribute in the proxies was not relevant anymore because
a frontend A may use backend B and both of them must account for their
respective connections. For this reason, there now are two separate
counters for frontend and backend connections.
The stats page has been updated to reflect the backend, but a separate
line entry for the frontend with error counts would be good.
Note that as of now, beconn may be higher than maxconn, because maxconn
applies to the frontend, while beconn may be increased due to sessions
passed from another frontend.
There was a confusion about the way to find filters and backend
parameters from sessions. The chaining has been changed between
the session and the proxy.
Now, a session knows only two proxies : one frontend (->fe) and
one backend (->be). Each proxy has a link to the proxy providing
filters and to the proxy providing backend parameters (both pointing
to itself by default).
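A hypothetical sketch of this chaining, with illustrative structure
names:

    struct proxy_sketch {
        struct proxy_sketch *fiprm;  /* proxy providing the filters */
        struct proxy_sketch *beprm;  /* proxy providing backend parameters */
        /* ... captures, uri_auth, stats, ... */
    };

    struct session_sketch {
        struct proxy_sketch *fe;     /* frontend the connection arrived on */
        struct proxy_sketch *be;     /* backend selected for this session */
    };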
The captures (cookies and headers) have been attached to the
frontend's filters for now.
The uri_auth and the statistics are attached to the backend's
filters so that the uri can depend on a hostname for instance.
A proxy will be able to borrow parameters from another one.
In particular, the filters will be inheritable from another
proxy, and the backend parameters too.
The check of uri_auth is now in a separate function which is
checked after every backend switch, so that it will be possible
to have an uri_auth for the frontend and another one for the
backend.
The req_cap, hdr_state, hdr_idx, auth_hdr and req_line have been moved
to a dedicated hreq structure in the session. It makes it easier to
add HTTP-specific fields such as SOR (start of request) and EOF (end
of headers).
It also made it possible to fix two bugs introduced by last commit :
- end of headers not correctly detected
- hdr_idx not freed upon one specific error during session creation
When the backend side will be reworked, it should rely on a similar
structure.
The new parser uses an FSM to strictly follow RFC2616.
Headers are indexed and parsed only once they're all available.
That way, complex regexes make more sense.
HTTP processing is now performed in several phases by calling
multiple functions, making the code cleaner and easier to read.
Note that req[i]pass does not work anymore because it would
require that we mark a header to be ignored. What is really
needed is to have the ability to add an exception to a matching
(match xx except yy).
Several bugs have been fixed in appsession during the conversion
to the new FSM (method length and recovery on malloc errors).
The code does build and work with the debug examples, but is
not usable yet to connect to anything as it does not forward
the requests yet.
This structure will consume 4 bytes per header to keep track of
headers within a request or a response without having to parse
the whole request for each regex. As it's not possible to allocate
only 4 bytes, we define a max number of HTTP headers. We set it
to (BUFSIZE+79)/80 so that 8kB buffers can contain 100 headers
(like Apache), resulting in 400 bytes dedicated to indexation,
or about 400/(2*8kB) ~= 2.4% of the memory usage.
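One possible 4-byte layout matching this description (the exact bitfield
split is an assumption), together with the sizing macro:

    #define BUFSIZE      8192
    #define MAX_HTTP_HDR ((BUFSIZE + 79) / 80)  /* 103 entries for 8kB buffers */

    struct hdr_idx_elem {
        unsigned int len  : 16;   /* length of the header line */
        unsigned int cr   : 1;    /* line ends with CRLF rather than bare LF */
        unsigned int next : 15;   /* index of the next header in the list */
    };                            /* 4 bytes per indexed header */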
The references to the proxy from the session have been turned into
Frontend (fe), Filters (fi) and Backend (be). This should ease the
migration to the L7 switching features. Next step will be to kill
the struct proxy and have 3 independent structs instead, each
referenced from entities called listener, frontend, filters and
backend.
SO_REUSEPORT does not exist on Linux but the checks are available in
the code. With a little patch, it's possible to implement the feature,
but the value of SO_REUSEPORT will still have to be known from userland.
This patch adds a workaround to this problem by figuring out the value
for the one used by SO_REUSEADDR.
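A sketch of such a workaround, assuming the value reserved by the Linux
kernel (15) when the libc headers do not define it:

    #include <sys/socket.h>

    #if defined(__linux__) && !defined(SO_REUSEPORT)
    #define SO_REUSEPORT 15   /* value reserved by the Linux kernel */
    #endif

    static int enable_reuseport(int fd)
    {
        int one = 1;
        return setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
    }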
Using the cttproxy kernel patch, it's possible to bind to any source
address. It is highly recommended to use the 03-natdel patch with the
other ones.
A new keyword appears as a complement to the "source" keyword : "usesrc".
The source address is mandatory and must be valid on the interface which
will see the packets. The "usesrc" option supports "client" (for full
client_ip:client_port spoofing), "client_ip" (for client_ip spoofing)
and any 'IP[:port]' combination to pretend to be another machine.
Right now, the source binding is missing from server health-checks if
set to another address. It must be implemented (think restricted firewalls).
The doc is still missing too.
Released 1.3.3 with the following changes :
- fix broken redispatch option in case the connection has already
been marked "in progress" (ie: nearly always).
- support regparm on x86 to speed up some often called functions
- removed a few useless calls to gettimeofday() in log functions.
- lots of 'const char*' cleanups
- turn every FD_* into functions which are faster on recent CPUs
- builds again on OpenBSD and Solaris
Linux and BSD know about u_int32_t, while Solaris knows about uint32_t.
This is getting boring and unsigned int perfectly fits the goal for the
moment. Further investigation will be performed anyway.
Some of the tv_* functions are called very often. Passing their
arguments as registers is quite faster. This can be disabled
by setting CONFIG_HAP_DISABLE_REGPARM.
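The mechanism relies on GCC's regparm attribute on x86; a sketch of such
a macro follows (the REGPRM2 name is illustrative):

    #include <sys/time.h>

    /* pass the first two arguments in registers instead of on the stack */
    #if defined(__GNUC__) && defined(__i386__) && !defined(CONFIG_HAP_DISABLE_REGPARM)
    #define REGPRM2 __attribute__((regparm(2)))
    #else
    #define REGPRM2
    #endif

    /* example: a frequently called two-argument comparison function */
    REGPRM2 int tv_cmp2(const struct timeval *a, const struct timeval *b);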
As suggested by Markus Elfring, a few "const char *" have replaced
some "char *" declarations where a function is not expected to
modify a value. It does not change the code but it helps detecting
coding errors.
- started the changes towards I/O completion callbacks. stream_sock* have
replaced event_*.
- added the new "reqtarpit" and "reqitarpit" protection features
It is now possible to tarpit connections based on regex matches.
The tarpit timeout is equal to the contimeout. A 500 server error
response is faked, and the logs show the status flags as "PT" which
indicate the connection has been tarpitted.
The timeouts, expiration timers and results are now stored in the buffers.
The timers will have to change a bit to become more flexible, and when
the I/O completion functions are written, connect_complete() will have
to be extracted from the write() function.
Released 1.3.1 with the following changes from 1.2.15 :
- now, haproxy warns about missing timeout during startup to try to
eliminate all those buggy configurations.
- added "Content-Type: text/html" in responses wherever appropriate, as
suggested by Cameron Simpson.
- implemented "option ssl-hello-chk" to use SSLv3 CLIENT HELLO messages to
test server's health
- implemented "monitor-uri" so that haproxy can reply to a specific URI with
an "HTTP/1.0 200 OK" response. This is useful to validate multiple proxies
at once.
This makes it possible to relay SSL connections in pure TCP instances while
ensuring the remote end really receives our data even though intermediate
agents (firewalls, proxies, ...) might acknowledge the connection.
The files are now stored under:
- include/haproxy for the generic includes
- include/types.h for the structures needed within prototypes
- include/proto.h for function prototypes and inline functions
- src/*.c for the C files
Most include files are now covered by LGPL. A last move still needs
to be done to put inline functions under GPL and not LGPL.
Version has been set to 1.3.0 in the code but some control still
needs to be done before releasing.
It has been rewritten and now supports an initialization state. It now
also prevents stopped (disabled) listeners from being dumped, and it is
possible to
specify a scope with a list of proxies that are allowed to be dumped from
the one being configured ('.' meaning "this one"). The 'stats' entry can
be configured from the 'defaults' instance and it is correctly flushed
from proxies which redefine it.
Right now it only validates the user/passwd against a specified list,
lets the user pass through the proxy if the authentication is OK, and
refuses any invalid access with a 401 Unauthorized response.
* made epoll() support a compile-time option : ENABLE_EPOLL
* provided a very small libc replacement for a possibly missing epoll()
implementation which can be enabled by -DUSE_MY_EPOLL
* implemented the poll() poller, which can be enabled with -DENABLE_POLL.
The equivalent runtime argument becomes '-P'. A few tests show that it
performs like select() with many fds, but slightly slower (certainly
because of the higher amount of memory involved). A simplified sketch
of such a poll() loop appears after this list.
* separated the 3 polling methods and the tasks scheduler into 4 distinct
functions which makes the code a lot more modular.
* moved some event tables to private static declarations inside the poller
functions.
* the poller functions can now initialize themselves, run, and cleanup.
* changed the runtime argument to enable epoll() to '-E'.
* removed buggy epoll_ctl() code in the client_retnclose() function. This
function was never meant to remove anything.
* fixed a typo which caused glibc to yell about a double free on exit.
* removed error checking after epoll_ctl(DEL) because we can never know if
the fd is still active or already closed.
* added a few entries in the makefile
* merged Alexander Lazic's and Klaus Wagner's work on application
cookie-based persistence. Since this is the first merge, this version is
not intended for general use and reports are more than welcome. Some
documentation is really needed though.
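For reference, a very reduced sketch of one poll()-based iteration as
mentioned above (fd bookkeeping and the actual handlers are elided; the
names are illustrative):

    #include <poll.h>
    #include <stdio.h>

    static void poll_loop_once(struct pollfd *fds, int nbfd, int timeout_ms)
    {
        int ready = poll(fds, nbfd, timeout_ms);

        for (int i = 0; ready > 0 && i < nbfd; i++) {
            if (!fds[i].revents)
                continue;
            ready--;
            if (fds[i].revents & (POLLIN | POLLHUP | POLLERR))
                printf("fd %d readable\n", fds[i].fd);  /* read handler here */
            if (fds[i].revents & POLLOUT)
                printf("fd %d writable\n", fds[i].fd);  /* write handler here */
        }
    }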