This might have been introduced with chunk extensions. Note that
the server redirect still does not work because http_get_path()
cannot get the correct path once the request message is in the
HTTP_MSG_DONE state (->som does not point to the start of the
message anymore).
Released version 1.4-dev6 with the following main changes :
- [BUILD] warning in stream_interface.h
- [BUILD] warning ultoa_r returns char *
- [MINOR] hana: only report stats if it is enabled
- [MINOR] stats: add "a link" & "a href" for sockets
    - [MINOR] stats: add show-legends to report additional information
- [MEDIUM] default-server support
    - [BUG] add 'observer', 'on-error', 'error-limit' to supported options list
- [MINOR] stats: add href to tracked server
- [BUG] stats: show UP/DOWN status also in tracking servers
- [DOC] Restore ability to search a keyword at the beginning of a line
- [BUG] stats: cookie should be reported under backend not under proxy
- [BUG] cfgparser/stats: fix error message
- [BUG] http: disable auto-closing during chunk analysis
- [BUG] http: fix hopefully last closing issue on data forwarding
- [DEBUG] add an http_silent_debug function to debug HTTP states
- [MAJOR] http: fix again the forward analysers
- [BUG] http_process_res_common() must not skip the forward analyser
- [BUG] http: some possible missed close remain in the forward chain
- [BUG] http: redirect needed to be updated after recent changes
- [BUG] http: don't set no-linger on response in case of forced close
- [MEDIUM] http: restore the original behaviour of option httpclose
- [TESTS] add a file to test various connection modes
- [BUG] http: check options before the connection header
- [MAJOR] session: fix the order by which the analysers are run
- [MEDIUM] session: also consider request analysers added during response
- [MEDIUM] http: make safer use of the DONT_READ and AUTO_CLOSE flags
- [BUG] http: memory leak with captures when using keep-alive
- [BUG] http: fix for capture memory leak was incorrect
- [MINOR] http redirect: use proper call to return last response
- [MEDIUM] http: wait for some flush of the response buffer before a new request
- [MEDIUM] session: limit the number of analyser loops
The initial code's intention was to loop on the analysers as long
as an analyser is added by another one. [This code was wrong due to
the while(0) which breaks even on a continue statement, but the
initial intention must be changed too]. In fact we should limit the
number of times we loop on analysers in order to limit latency.
Using maxpollevents as a limit makes sense since this tunable is
used for the exact same purposes. We may add another tunable later
if that ever makes sense, though it's very unlikely.
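The idea boils down to something like this minimal sketch (structure and
helper names are assumptions, not the actual process_session() code):

    /* Sketch only: re-run the pending analysers a bounded number of times,
     * so that an analyser waking up another one cannot make the session
     * spin and add latency. maxpollevents is reused as the bound. */
    struct session_sketch {
        unsigned int req_analysers;  /* bit field of pending request analysers */
    };

    static void run_bounded(struct session_sketch *s, int maxpollevents)
    {
        int max_loops = maxpollevents;
        unsigned int before;

        do {
            before = s->req_analysers;
            /* run_request_analysers(s); -- hypothetical call which may set
             * new bits in s->req_analysers while it runs */
        } while (s->req_analysers != before && --max_loops > 0);
    }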
If we accept a new request and that request produces an immediate
response (error, redirect, ...), then we may fail to send it in
case of pipelined requests if the response buffer is full. To avoid
this, we check the availability of at least maxrewrite bytes in the
response buffer before accepting a new pipelined request.
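As a rough illustration (field names are assumptions, not the actual
haproxy buffer structure), the check boils down to:

    /* Only accept a new pipelined request if the response buffer keeps
     * at least maxrewrite bytes free for an immediate error/redirect. */
    struct buf_sketch {
        unsigned int size;    /* total buffer size       */
        unsigned int used;    /* bytes currently pending */
    };

    static int may_accept_new_request(const struct buf_sketch *res,
                                      unsigned int maxrewrite)
    {
        return (res->size - res->used) >= maxrewrite;
    }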
During a redirect, we used to send the last chunk of the response with
stream_int_cond_close(). But this is wrong in case of pipelining,
because if the response already contains something, this function
will refrain from touching the buffer. Use a concatenation function
instead.
Also, this call might still fail when the buffer is full, so we need
a second fix to refrain from parsing an HTTP request as long as the
response buffer is full, otherwise we may not even be able to return
a pending redirect or an error code.
That patch was incorrect because under some circumstances, the
capture memory could be freed by session_free() and then again
by http_end_txn(), causing a double free and an eventual segfault.
The pool use count was also reported wrong due to this bug.
The cleanup code was removed from session_free() to remain only
in http_end_txn().
Hank A. Paulson reported a massive memory leak when using keep-alive
mode. The information he provided made it easy to find that captured
request and response headers were erased but not released when renewing
a request.
Several HTTP analysers used to set those flags to values that
were useful but without considering the possibility that they
might not be called again to clean up what they did. First, replace
direct flag manipulation with more explicit macros. Second,
enforce a rule stating that any analyser which changes one of
these flags from the default must restore it after completion,
so that other analysers see correct flags.
With both this fix and the previous one about analyser bits,
we should not see any more stuck sessions.
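A sketch of the intended discipline (macro names and flag values are
illustrative, not necessarily the exact haproxy ones):

    #define BF_DONT_READ   0x0100   /* default: cleared (reads allowed)     */
    #define BF_AUTO_CLOSE  0x0200   /* default: set (forward close to peer) */

    struct buffer_sketch { unsigned int flags; /* ... */ };

    #define buffer_dont_read(b)    ((b)->flags |=  BF_DONT_READ)
    #define buffer_auto_read(b)    ((b)->flags &= ~BF_DONT_READ)
    #define buffer_dont_close(b)   ((b)->flags &= ~BF_AUTO_CLOSE)
    #define buffer_auto_close(b)   ((b)->flags |=  BF_AUTO_CLOSE)

    /* An analyser that pauses reads must restore the default before
     * returning, so the next analyser sees the expected flags. */
    static int some_analyser(struct buffer_sketch *req)
    {
        buffer_dont_read(req);
        /* ... work on the data already buffered ... */
        buffer_auto_read(req);   /* restore the default before leaving */
        return 1;
    }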
A request analyser may very well be added while processing a response
(eg: end of an HTTP keep-alive response). It's very dangerous to only
rely on flags that ought to change in order to loop back, so let's
correctly detect a possible new analyser addition instead of guessing.
With the introduction of keep-alive, we have created situations
where an analyser can add other analysers to the current list,
which are behind it, which have already been processed once, and
which are needed immediately because without them there will be
no more I/O activity. This is typically the case for enabling
reading of a new request after preparing for a new request.
Instead of creating specific cases for some analysers (there was
already one such before), we now use a little bit of algorithmics
to create an ordered bit chain supporting priorities and fast
operations.
Another advantage of this new construction is that it's not a
real loop anymore, so if an analyser bit is unknown, the chain
will not loop on it but simply ignore it.
Note that it is now easy to skip multiple analysers at once in
order to speed up the checking a bit, though some test code has
shown only a minor gain.
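A simplified sketch of that construction (bit names and handlers are made
up for illustration): restarting from the lowest pending bit picks up any
analyser added behind the current one, and an unknown bit is simply
cleared instead of causing an endless loop.

    #include <stdint.h>

    #define AN_REQ_WAIT_HTTP  0x01   /* lower bits = higher priority */
    #define AN_REQ_PROCESS    0x02
    #define AN_REQ_FORWARD    0x04

    static int an_wait_http(uint32_t *l) { *l &= ~AN_REQ_WAIT_HTTP; return 1; }
    static int an_process(uint32_t *l)   { *l &= ~AN_REQ_PROCESS;   return 1; }
    static int an_forward(uint32_t *l)   { *l &= ~AN_REQ_FORWARD;   return 1; }

    static void run_analysers(uint32_t *list, int max_loops)
    {
        while (*list && max_loops--) {
            uint32_t bit = *list & -*list;   /* lowest set bit first */

            switch (bit) {
            case AN_REQ_WAIT_HTTP: if (!an_wait_http(list)) return; break;
            case AN_REQ_PROCESS:   if (!an_process(list))   return; break;
            case AN_REQ_FORWARD:   if (!an_forward(list))   return; break;
            default:               *list &= ~bit;           break;  /* unknown bit: skip it */
            }
        }
    }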
This change has been carefully re-read and has no direct reason
to cause a regression. However it has been tagged "major"
because the fact that it runs the analysers correctly might
trigger an old sleeping bug somewhere in one of the analysers.
This patch implements default-server support, allowing default server
options to be changed. It can be used in [defaults] or [backend]/[listen]
sections (see the example after the option list below). Currently the
following options are supported:
- error-limit
- fall
- inter
- fastinter
- downinter
- maxconn
- maxqueue
- minconn
- on-error
- port
- rise
- slowstart
- weight
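For instance, a minimal illustrative configuration (not taken from the
official documentation) could look like:

    backend dynamic
        default-server inter 2s fall 3 rise 2 weight 10
        server srv1 192.168.0.11:80 check
        server srv2 192.168.0.12:80 check weight 20   # overrides the default weight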
Supported information, available via "tr/td title":
- cap: capabilities (proxy)
- mode: one of tcp, http or health (proxy)
- id: SNMP ID (proxy, socket, server)
- IP (socket, server)
- cookie (backend, server)
This patch adds "a link" & "a href" html tags for sockets.
As sockets may have the same names as servers, I decided to
add a "+" char (forbidden in names assigned to servers) as a prefix.
It is useless to report statistics if the feature is not enabled.
It also makes it possible to tell whether health analysis is
enabled or not just by looking at the stats page.
Commit 0dfdf19b64 introduced a
regression because the connection header is now parsed and checked
depending on the configured options, but the options are set after
the call instead of before it.
Historically, "option httpclose" has always worked the same way. It
only mangles the "Connection" header in the request and the response
if needed, but does not affect the connection by itself, and ignores
any further data. It is dangerous to change this behaviour without
leaving any other alternative. If an active close is desired, it's
better to make use of "option forceclose" which does exactly what
it intends to do.
So as of now, "option httpclose" will only mangle the headers as
before, and will only affect the connection by itself when combined
with another connection-related option (eg: keepalive or server-close).
We basically have to mimic the code of process_session() here, so
when the remote output is closed, we must abort, otherwise we'll end
up with data which cannot leave the buffer.
By default this function returned 0 indicating an end of analysis.
This was not a problem as long as it was the last analyser in the
chain, but it becomes quite a big one now since it skips the forwarder
with auto_close enabled, causing some data to pass under the nose
of the last one undetected.
There were still several situations leading to CLOSE_WAIT sockets
remaining there forever because some complex transitions were
obviously not caught due to the impossibility of resyncing changes
between the request and response FSMs.
This patch now centralizes the global transaction state and feeds
it from both request and response transitions. That way, whichever
side finishes first, there will be no issue converging to the
correct state.
Some heavy use of the new debugging function has helped a lot. Maybe
those calls could be removed after some time. First tests are very
positive.
This function outputs to fd #-1 the status of request and response
buffers, the transaction states, the stream interface states, etc...
That way, it's easy to find that output in an strace report, correctly
placed WRT the other syscalls.
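The trick is simply to write() to an invalid fd: the syscall fails, but
strace still records its arguments. A hedged sketch of the pattern (not
the actual http_silent_debug() code):

    #include <stdio.h>
    #include <unistd.h>

    static void dump_http_states(int req_state, int res_state)
    {
        char buf[256];
        int len;

        len = snprintf(buf, sizeof(buf), "http: req_state=%d res_state=%d\n",
                       req_state, res_state);
        /* fd -1 is always invalid: the write() fails with EBADF, but its
         * arguments still show up in the strace output, in order. */
        if (write(-1, buf, len) < 0) {
            /* expected failure, nothing to do */
        }
    }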
The data forwarders are analysers. As such, they have to check for
various situations in which they have to abort, one of them being
the lack of data with closed input. Now we don't leave the functions
anymore without performing these checks. This has solved the new
CLOSE_WAIT issue that became more noticeable since last patch.
It may happen that we forward a close just after we sent the last
chunk, because we forgot to clear the AUTO_CLOSE flag.
This issue caused some pages to be truncated depending on some
timing races. Issue initially reported by Cyril Bonté.
Released version 1.4-dev5 with the following main changes :
- [MINOR] server tracking: don't care about the tracked server's mode
- [MEDIUM] appsession: add "len", "prefix" and "mode" options
- [MEDIUM] appsession: add the "request-learn" option
- [BUG] Configuration parser bug when escaping characters
- [MINOR] CSS & HTML fun
- [MINOR] Collect & provide http response codes received from servers
- [BUG] Fix silly typo: hspr_other -> hrsp_other
- [MINOR] Add "a name" to stats page
- [MINOR] add additional "a href"s to stats page
- [MINOR] Collect & provide http response codes for frontends, fix backends
- [DOC] some small spell fixes and unifications
- [MEDIUM] Decrease server health based on http responses / events, version 3
- [BUG] format '%d' expects type 'int', but argument 5 has type 'long int'
- [BUG] config: fix erroneous check on cookie domain names, again
- [BUG] Healthchecks: get a proper error code if connection cannot be completed immediately
- [DOC] trivial fix for man page
- [MINOR] config: report all supported options for the "bind" keyword
- [MINOR] tcp: add support for the defer_accept bind option
- [MINOR] unix socket: report the socket path in case of bind error
- [CONTRIB] halog: support searching by response time
- [DOC] add a reminder about obsolete documents
- [DOC] point to 1.4 doc, not 1.3
- [DOC] option tcp-smart-connect was missing from index
- [MINOR] http: detect connection: close earlier
- [CLEANUP] sepoll: clean up the fd_clr/fd_set functions
- [OPTIM] move some rarely used fields out of fdtab
- [MEDIUM] fd: merge fd_list into fdtab
- [MAJOR] buffer: flag BF_DONT_READ to disable reads when not required
- [MINOR] http: add new transaction flags for keep-alive and content-length
- [MEDIUM] http request: parse connection, content-length and transfer-encoding
- [MINOR] http request: update the TX_SRV_CONN_KA flag on rewrite
- [MINOR] http request: simplify the test of no-data
- [MEDIUM] http request: simplify POST length detection
- [MEDIUM] http request: make use of pre-parsed transfer-encoding header
- [MAJOR] http: create the analyser which waits for a response
- [MINOR] http: pre-set the persistent flags in the transaction
- [MEDIUM] http response: check body length and set transaction flags
- [MINOR] http response: update the TX_CLI_CONN_KA flag on rewrite
- [MINOR] http: remove the last call to stream_int_return
- [IMPORT] import ebtree v5.0 into directory ebtree/
- [MEDIUM] build: switch ebtree users to use new ebtree version
- [CLEANUP] ebtree: remove old unused files
- [BUG] definitely fix regparm issues between haproxy core and ebtree
- [CLEANUP] ebtree: cast to char * to get rid of gcc warning
- [BUILD] missing #ifndef in ebmbtree.h
- [BUILD] missing #ifndef in ebsttree.h
- [MINOR] tools: add hex2i() function to convert hex char to int
- [MINOR] http: create new MSG_BODY sub-states
- [BUG] stream_sock: BUF_INFINITE_FORWARD broke splice on 64-bit platforms
- [DOC] option is "defer-accept", not "defer_accept"
- [MINOR] http: keep pointer to beginning of data
- [BUG] x-original-to: name was not set in default instance
- [MINOR] http: detect tunnel mode and set it in the session
- [BUG] config: fix error message when config file is not found
- [BUG] config: fix wrong handling of too large argument count
- [BUG] config: disable 'option httplog' on TCP proxies
- [BUG] config: fix erroneous check on cookie domain names
- [BUG] config: cookie domain was ignored in defaults sections
- [MINOR] config: support passing multiple "domain" statements to cookies
- [MINOR] ebtree: add functions to lookup non-null terminated strings
- [MINOR] config: don't report error on all subsequent files on failure
- [BUG] second fix for the printf format warning
- [BUG] check_post: limit analysis to the buffer length
- [MEDIUM] http: process request body in a specific analyser
- [MEDIUM] backend: remove HTTP POST parsing from get_server_ph_post()
- [MAJOR] http: completely process the "connection" header
- [MINOR] http: only consider chunk encoding with HTTP/1.1
- [MAJOR] buffers: automatically compute the maximum buffer length
- [MINOR] http: move the http transaction init/cleanup code to proto_http
- [MINOR] http: move 1xx handling earlier to eliminate a lot of ifs
- [MINOR] http: introduce a new synchronisation state : HTTP_MSG_DONE
- [MEDIUM] http: rework chunk-size parser
    - [MEDIUM] http: add a new transaction flag indicating if we know the transfer length
- [MINOR] buffers: add buffer_ignore() to skip some bytes
- [BUG] http: offsets are relative to the buffer, not to ->som
    - [MEDIUM] http: automatically re-align request buffer
- [BUG] http: body parsing must consider the start of message
- [MINOR] new function stream_int_cond_close()
- [MAJOR] http: implement body parser
- [BUG] http: typos on several unlikely() around header insertion
- [BUG] stream_sock: wrong max computation on recv
- [MEDIUM] http: rework the buffer alignment logic
- [BUG] buffers: wrong size calculation for displaced data
- [MINOR] stream_sock: prepare for closing when all pending data are sent
- [MEDIUM] http: add two more states for the closing period
- [MEDIUM] http: properly handle "option forceclose"
- [MINOR] stream_sock: add SI_FL_NOLINGER for faster close
- [MEDIUM] http: make forceclose use SI_FL_NOLINGER
- [MEDIUM] session: set SI_FL_NOLINGER when aborting on write timeouts
- [MEDIUM] http: add some SI_FL_NOLINGER around server errors
- [MINOR] config: option forceclose is valid in frontends too
- [BUILD] halog: insufficient include path in makefile
- [MEDIUM] http: make the analyser not rely on msg being initialized anymore
- [MEDIUM] http: make the parsers able to wait for a buffer flush
- [MAJOR] http: add support for option http-server-close
- [BUG] http: ensure we abort data transfer on write error
- [BUG] last fix was overzealous and disabled server-close
- [BUG] http: fix erroneous trailers size computation
- [MINOR] stream_sock: enable MSG_MORE when forwarding finite amount of data
- [OPTIM] http: set MSG_MORE on response when a pipelined request is pending
- [BUG] http: redirects were broken by chunk changes
- [BUG] http: the request URI pointer is relative to the buffer
- [OPTIM] http: don't immediately enable reading on request
- [MINOR] http: move redirect messages to HTTP/1.1 with a content-length
- [BUG] http: take care of errors, timeouts and aborts during the data phase
- [MINOR] http: don't wait for sending requests to the server
- [MINOR] http: make the conditional redirect support keep-alive
- [BUG] http: fix cookie parser to support spaces and commas in values
- [MINOR] config: some options were missing for "redirect"
- [MINOR] redirect: add support for unconditional rules
- [MINOR] config: centralize proxy struct initialization
- [MEDIUM] config: remove the limitation of 10 reqadd/rspadd statements
- [MEDIUM] config: remove the limitation of 10 config files
- [CLEANUP] http: remove a remaining impossible condition
- [OPTIM] http: optimize a bit the construct of the forward loops
The cookie parser could be fooled by spaces or commas in cookie names
and values, causing the persistence cookie not to be matched if located
just after such a cookie. Now spaces found in values are considered as
part of the value, and spaces, commas and semi-colons found in values
or names are skipped till the next cookie name.
This fix must be backported to 1.3.
In case of a non-blocking socket used for connecting to a remote
server (not localhost), the error reported by the health check
was most of the time one of EINPROGRESS/EAGAIN/EALREADY.
This patch adds a getsockopt(..., SO_ERROR, ...) call so now
the proper error message is reported.
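For reference, a minimal sketch of the pattern (not the actual
health-check code):

    #include <sys/socket.h>
    #include <errno.h>

    /* After a non-blocking connect() reports the socket as writable, fetch
     * the real outcome with SO_ERROR instead of trusting errno, which may
     * still hold EINPROGRESS/EAGAIN/EALREADY from the connect() itself. */
    static int connect_result(int fd)
    {
        int err = 0;
        socklen_t len = sizeof(err);

        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
            return errno;   /* getsockopt() itself failed */
        return err;         /* 0 on success, otherwise the pending error */
    }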
It makes sense to permit a client to keep its connection when
performing a redirect to the same host. We simply check that the
redirect location begins with a slash before using keep-alive
(if the client supports it).
By default we automatically wait for enough data to fill large
packets if buf->to_forward is not null. This causes a problem
with POST/Expect requests which have a data size but no data
immediately available. Instead of causing noticeable delays on
such requests, simply add a flag to disable waiting when sending
requests.
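A rough sketch of the idea (flag and field names are assumptions, not the
actual haproxy identifiers):

    #include <sys/socket.h>

    #define BF_SEND_DONTWAIT  0x01   /* hypothetical "don't wait to fill packets" flag */

    struct fwd_buf_sketch {
        unsigned int flags;
        unsigned long long to_forward;   /* bytes still expected for this transfer */
    };

    static int send_flags(const struct fwd_buf_sketch *b)
    {
        int flags = MSG_DONTWAIT | MSG_NOSIGNAL;

        /* Normally wait for full packets while more data is expected, but a
         * POST with "Expect: 100-continue" advertises a length without any
         * data yet, so the request side sets BF_SEND_DONTWAIT to push the
         * headers out immediately. */
        if (b->to_forward && !(b->flags & BF_SEND_DONTWAIT))
            flags |= MSG_MORE;
        return flags;
    }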
In server-close mode particularly, the response buffer is marked for
no-auto-close after a response passed through. This prevented a POST
request from being aborted on errors, timeouts or anything if the
response was received before the request was complete.
If we enable reading of a request immediately after completing
another one, we end up performing small reads until the request
buffer is complete. This takes time and makes it harder to realign
the buffer when needed. Just enable reading when we need to.