In backend.c, an EV_FD_SET() was called before fd_insert().
This is wrong because fd_insert() updates maxfd, which might be
used by some of the pollers during EV_FD_SET(), although this
is not currently the case.
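A minimal sketch of the corrected ordering (illustrative, not the
exact backend.c code):

    /* Register the fd first so that maxfd is up to date, then enable
     * polling on it; the reverse order could expose a stale maxfd to
     * a poller from within EV_FD_SET(). */
    fd_insert(fd);
    EV_FD_SET(fd, DIR_RD);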
The following patch introduced a minor bug:
[MINOR] permit renaming of x-forwarded-for header
If "option forwardfor" is declared in a defaults section, the header name
is never set and we see an empty header name before the value. Also, the
header name was not reset between two defaults sections.
22 regression tests for state machines are managed by the new
file tests/test-fsm.cfg. Check it: they are all documented
inside. Most of the bugs introduced during the FSM extraction
were found with these tests.
Gcc < 3 does not consider regparm declarations for function pointers.
This causes big trouble at least with pollers (and, in fact, with any
function pointer). Disable CONFIG_HAP_USE_REGPARM for gcc < 3.
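A sketch of the guard (the real build logic may differ): honour
regparm only where gcc also applies it to function pointers, i.e.
gcc >= 3, otherwise callers and callees disagree on the calling
convention.

    #if defined(CONFIG_HAP_USE_REGPARM) && __GNUC__ >= 3
    #define REGPRM2 __attribute__((regparm(2)))
    #else
    #define REGPRM2
    #endif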
When any processing remains on a buffer, it must be up to the
processing functions to set the termination flags, because they
are the only ones who know about higher levels.
It's a shame not to use buffer->wex for connection timeouts since,
by definition, it cannot be used until the connection is established.
Using it instead of ->cex also makes the buffer processing more
symmetric.
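Illustratively, with assumed field and helper names:

    /* Arm the connect timeout on the write side of the buffer going
     * to the server; ->wex cannot carry a data timeout before the
     * connection exists, so no separate ->cex is needed. */
    s->req->wex = tick_add_ifset(now_ms, s->be->timeout.connect);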
Instead of calling all functions in a loop, process_session now
calls them according to buffer flag changes. This ensures that
we almost never call functions for nothing. The flag settings
are still quite coarse, but the average number of function
calls per session has dropped from 31 to 18 (the calls to
process_srv dropped from 13 to 7 and the calls to process_cli
dropped from 13 to 8).
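The principle can be sketched as follows (illustrative, not the
actual process_session code):

    /* Re-invoke the FSMs only as long as the buffer flags they act
     * on keep changing, instead of looping unconditionally. */
    unsigned int req_flags, rep_flags;

    do {
        req_flags = s->req->flags;
        rep_flags = s->rep->flags;
        process_cli(s);
        process_srv(s);
    } while (req_flags != s->req->flags || rep_flags != s->rep->flags);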
This could still be improved by memorizing which flags each
function uses, but that would add a level of complexity which
is not desirable and maybe even not worth the small gain.
It is not always convenient to test req->l in functions to check
whether a buffer is empty or full. Now the stream_sock functions
set the flags BF_EMPTY and BF_FULL according to the buffer contents.
Of course, functions which touch the buffer contents adjust the
flags too.
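A sketch of the bookkeeping, with assumed names:

    /* Recompute the fill-state flags after any change to the buffer
     * contents, so other functions can test BF_EMPTY/BF_FULL instead
     * of inspecting b->l themselves. */
    b->flags &= ~(BF_EMPTY | BF_FULL);
    if (b->l == 0)
        b->flags |= BF_EMPTY;
    else if (b->l >= b->max_len)    /* "full" threshold is assumed */
        b->flags |= BF_FULL;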
BF_SHUTR_PENDING and BF_SHUTW_PENDING were poor ideas, because
BF_SHUTR is already the pending state of BF_SHUTW_DONE and BF_SHUTW
the pending state of BF_SHUTR_DONE. Remove those two useless and
confusing "pending" versions and rename the buffer_shut{r,w}_*
functions.
process_response is not allowed to touch srv_state (this is an
oversight which survived the code migration). This bug was
causing connection exhaustion on the frontend due to some closed
sockets being marked SV_STDATA again.
It wasn't really wise to separate BF_MAY_CONNECT and BF_MAY_FORWARD,
as it caused trouble in TCP mode because the connection was allowed
but not the forwarding. Remove BF_MAY_CONNECT.
Since the separation of the TCP and HTTP state machines, the HTTP
code must no longer play with the file descriptors' status
without checking whether they are closed. Remains of such practice
have caused busy loops under some circumstances (mainly when the
client closed during the response headers).
If __fd_clo() was called on a file descriptor which was previously
disabled, it was not removed from the spec list. This apparently
could not happen with the previous code because the TCP states
prevented it, but now it happens regularly. The effect is spec
entries remaining populated, leading to busy loops.
A new member has been added to the struct session. It keeps a trace
of what block of code performs a close or a shutdown on a socket, and
in what sequence. This is extremely convenient for post-mortem analysis
where flag combinations and states seem impossible. A new ABORT_NOW()
macro has also been added to make the code immediately segfault where
called.
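The macro can be as simple as the following (a sketch; the actual
definition may differ):

    /* Write through a NULL pointer so the process faults exactly at
     * the call site; the core dump then shows where the impossible
     * state was detected. */
    #define ABORT_NOW() (*(volatile int *)0 = 0)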
All references to CL_STSHUT* and SV_STSHUT* were removed where
possible. Some of them could not be removed because they are
still in use by the unix sockets.
A bug remains at this stage. Injecting with a very short timeout
sometimes leads to a client in close state and a server in data
state with all buffer flags indicating a shutdown but the server
fd still enabled, thus causing a busy loop.
The HTTP response is now processed in its own function, regardless of
the TCP state. All FSMs have become noticeably simpler and must still
be improved by removing the useless CL_STSHUT* and SV_STSHUT* states
(still used by proto_uxst). The number of calls to process_* is still
huge, though.
Next steps consist in:
- removing useless assignments of CL_STSHUT* and SV_STSHUT*
- adding a BF_EMPTY flag to buffers to indicate an empty buffer
- returning smarter values in process_* so that each callee
  may explicitly indicate which function needs to be called after it
- unifying read and write timeouts for the same side; the way it
  is now is too complicated and error-prone
- auditing the code for regression testing
We're close to getting something which works noticeably better now.
TCP timeouts are no longer managed by the response FSM. Warning:
the FORCE_CLOSE state does not work for now. All remaining
bugs causing stale connections have been swept.
The HTTP response code has been moved to a specific function
called "process_response" and the SV_STHEADERS state has been
removed and replaced with the flag AN_RTR_HTTP_HDR.
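Schematically, with assumed field names:

    /* The response analyser now runs when its flag is armed, instead
     * of being tied to a dedicated server state. */
    if (s->analysis & AN_RTR_HTTP_HDR)
        process_response(s);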
Due to a recent change in the FSMs, if the client closes with buffer
full, then the server loops waiting for headers. We can safely ignore
this case since the server FSM will have to be reworked too. Let's
fix the root cause for now.
For the first time, HTTP and TCP are not merged anymore. All request
processing has moved to process_request while the TCP processing of
the frontend remains in process_cli. The code is a lot cleaner,
simpler, smaller (1%) and slightly faster (1% too).
Right now, the HTTP state machine cannot easily command the TCP
state machine, but it does not cause that many difficulties.
The response processing has not yet been extracted, and the unix-stream
state machines have to be broken down that way too.
The CL_STDATA, CL_STSHUTR and CL_STSHUTW states still exist and are
exactly the same. They will all have to be merged into CL_STDATA
once the work has stabilized. It is also possible that this single
state will disappear in favor of just buffer flags.
The SV_STANALYZE state was installed on the server side but was really
meant to be processed with the rest of the request on the client side.
It suffered from several issues, mostly related to the way timeouts were
handled while waiting for data.
All known issues related to timeouts during a request - and specifically
a request involving body processing - have been raised and fixed. At this
point, the code is a bit dirty but works fine, so next steps might be
cleanups with an ability to come back to the current state in case of
trouble.
This is a first attempt at separating data processing from the
TCP state machine. Those two states have been replaced with flags
in the session indicating what needs to be analyzed. The corresponding
code is still called before and in lieu of TCP states.
The next change should get rid of the specific SV_STANALYZE, which
is in fact a client state. The change after that should make it
possible to analyze TCP contents while being in CL_STDATA (or
CL_STSHUT*).
The client timeout could be refreshed in stream_sock_*, but this is
undesired when the timeout is already set to eternity. The effect
is that a session could still be aborted if the client timeout was
smaller than the server timeout. A second effect is that sessions
expired on the server side would expire with "cD" flags.
The fix consists in not updating the timeout if it was not
previously set.
A cleaner method might consist in updating the buffer timeout. This
is probably what will be done later when the state machines only
deal with the buffers.
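The fix can be sketched as follows (illustrative names):

    /* Refresh the client read timeout only if one was armed; an
     * "eternity" timeout must stay untouched so the server side can
     * expire the session with the right flags. */
    if (tick_isset(b->rex))
        b->rex = tick_add_ifset(now_ms, b->rto);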
Due to a copy-paste typo, the client timeout was refreshed instead
of the server's when waiting for the server response. This means
that the server's timeout remained set to eternity.
If an HTTP/0.9-like POST request is sent to haproxy while
configured with url_param + check_post, it will crash. The
reason is that the total buffer length was computed based
on req->total (which equals the number of bytes read) and
not req->l (number of bytes in the buffer), thus leading
to wrong size calculations when calling memchr().
The affected code does not look like it could have been
exploited to run arbitrary code; only reads were performed
at wrong locations.
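The class of bug, sketched with illustrative variables:

    /* req->total counts every byte ever read, while req->l is what
     * the buffer currently holds; only the latter may bound a search
     * through the buffer. */
    char *p;

    p = memchr(req->data, '&', req->total); /* wrong: can overrun */
    p = memchr(req->data, '&', req->l);     /* fixed: stays in bounds */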
A new buffer flag BF_MAY_FORWARD has been added so that the client
FSM can check whether it is allowed to forward the response to the
client. The client FSM does not have to monitor the server state
anymore.
A new buffer flag BF_MAY_CONNECT has been added so that the server
FSM can check whether it is allowed to establish a connection or
not. That way, the client FSM only has to move this flag and the
server side does not need to monitor client state anymore.
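The decoupling works roughly like this (assumed call sites, not the
actual code):

    /* client side: grant the connection once the request is ready */
    s->req->flags |= BF_MAY_CONNECT;

    /* server side: consult only buffer flags, never client states */
    if (!(s->req->flags & BF_MAY_CONNECT))
        return 0;   /* not yet allowed to establish the connection */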
The open/close nature of each half of the client side is known
to the buffer, so let the server state machine rely on this
instead of checking the client state for CL_STSHUT* or
CL_STCLOSE.
In order to make pool usage more convenient, let pool_free2()
support NULL pointers by doing nothing, just like the standard
free(3) call does.
The various call places have been updated to remove the now
useless checks.
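The new semantics, sketched (the real implementation differs):

    /* Freeing a NULL pointer is a no-op, mirroring free(3), so call
     * sites no longer need an "if (ptr)" guard before each release. */
    static inline void pool_free2(struct pool_head *pool, void *ptr)
    {
        if (ptr)
            __pool_free2(pool, ptr);   /* hypothetical inner helper */
    }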
Because I needed it in my situation - here's a quick patch to
allow changing the "x-forwarded-for" header name by using a
suboption to "option forwardfor".
Suboption "header XYZ" will set the header from "x-forwarded-for" to "XYZ".
Default is still "x-forwarded-for" if the header value isn't defined.
Also the suboption 'except a.b.c.d/z' still works on the same line.
So it's now: option forwardfor [except a.b.c.d[/z]] [header XYZ]
When an ACL is referenced at a wrong place (eg: response during request, layer7
during layer4), try to indicate precisely the name and requirements of this ACL.
Only the first faulty ACL is returned. A small change consisting in
iterating as follows may improve the reports:
    cap = ACL_USE_any_unexpected;
    while ((acl = cond_find_require(cond, cap))) {
        warning();              /* report this ACL's name and needs */
        cap &= ~acl->requires;  /* don't report this type again */
    }
This will report the first ACL of each unsupported type. But doing so will
mangle the error reporting a lot, so we need to rework error reports first.
All currently known ACL verbs have been assigned a type which makes
it possible to detect inconsistencies, such as response values used
in request rules.
ACLs now hold information on the availability of the data they rely
on. They can indicate which parts of the requests/responses they
require, and the rules parser may now report inconsistencies.
As an example, switching rules are now checked for response-specific
ACLs, even though no such ACLs are set yet. A warning is reported in
case of mismatch. ACL keyword restrictions will now have to be
specifically set wherever better control is expected.
The line number where an ACL condition is declared has been added to
the conditions in order to be able to report the faulty line number
during post-loading checks.
The new "wait_end" acl delays evaluation of the rule (and the next ones)
to the end of the analysis period. This is intented to be used with TCP
content analysis. A rule referencing such an ACL will not match until
the delay is over. An equivalent default ACL "WAIT_END" has been created.
For protocol analysis, it's not always convenient to have to run a
fetch and then a match against dummy values. It's easier to let the
fetch() function set the result itself. This obviously works only
for boolean values.
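Schematically, with assumed flag and argument names:

    /* A boolean fetch may report the verdict itself instead of
     * returning a value to be matched against dummy patterns. */
    static int acl_fetch_ready(struct proxy *px, struct session *l4,
                               void *l7, int dir, struct acl_expr *expr,
                               struct acl_test *test)
    {
        test->flags |= ACL_TEST_F_SET_RES_PASS; /* force a "match" */
        return 1;
    }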
With content inspection, checking the presence of data in the
request buffer is very important. It's getting boring to always
add such an ACL, so let's add it by default.