The function buffer_contig_space is buggy and could lead to pernicious bugs
(never hit until now, AFAIK). This function should return the number of bytes
that can be written into the buffer at once (without wrapping).
First, this function is used to inject input data (bi_putblk) and to inject
output data (bo_putblk and bo_inject). But it has no context, so it cannot
decide where the contiguous space should be placed. For input data, it should
be after bi_end(buf) (i.e., buf->p + buf->i, modulo wrapping). For output data,
it should be after bo_end(buf) (i.e., buf->p), and input data are assumed not
to exist (otherwise there is no space at all).
Then, considering we need to inject input data, this function does not always
return the right value. And when we need to inject output data, we must be sure
to have no input data at all (buf->i == 0), else the result can also be wrong
(but this is the caller's responsibility, so everything should be fine here).
The buffer can be in 3 different states:

1) no wrapping

              <---- o ----><----- i ----->
 +------------+------------+-------------+------------+
 |            |oooooooooooo|iiiiiiiiiiiii|xxxxxxxxxxxx|
 +------------+------------+-------------+------------+
                           ^             <contig_space>
                           p             ^            ^
                                         l            r

2) input wrapping

  ...---> <---- o ----><-------- i -------...
 +-----+------------+------------+--------------------+
 |iiiii|xxxxxxxxxxxx|oooooooooooo|iiiiiiiiiiiiiiiiiiii|
 +-----+------------+------------+--------------------+
       <contig_space>            ^
       ^            ^            p
       l            r

3) output wrapping

  ...------ o ------><----- i -----> <----...
 +------------------+-------------+------------+------+
 |oooooooooooooooooo|iiiiiiiiiiiii|xxxxxxxxxxxx|oooooo|
 +------------------+-------------+------------+------+
                    ^             <contig_space>
                    p             ^            ^
                                  l            r
buffer_contig_space returns (r - l). Cases 1 and 3 are handled correctly. But
for the second case, r is wrong: it points to the buffer's end (buf->data +
buf->size). It should be bo_end(buf) (i.e., buf->p - buf->o).
To fix the bug, the function has been split. Now, bi_contig_space and
bo_contig_space should be used to know the contiguous space available to
insert, respectively, input data and output data. For bo_contig_space, input
data are assumed not to exist. And the right version is used, depending on
what we want to do.
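As an illustration, here is a minimal, self-contained sketch of the two fixed
functions, assuming the usual buffer fields (data, size, p, i, o); this is a
sketch of the described logic, not the exact haproxy source, and the pointer
arithmetic is simplified for clarity:

    struct buffer {
        char  *p;        /* boundary between output and input data  */
        size_t size;     /* size of the storage area                */
        size_t i;        /* input data length, stored from p        */
        size_t o;        /* output data length, stored before p     */
        char   data[1];  /* storage area (size bytes)               */
    };

    /* contiguous free space usable to insert input data: starts at
     * bi_end (p + i, with wrapping) and stops either at the start of
     * output data (p - o, with wrapping) or at the end of storage.
     */
    static size_t bi_contig_space(const struct buffer *buf)
    {
        const char *left  = buf->p + buf->i;      /* bi_end */
        const char *right = buf->p - buf->o;      /* start of output data */

        if (left >= buf->data + buf->size)
            left -= buf->size;                    /* input wrapped */
        if (right < buf->data)
            right += buf->size;                   /* output wrapped */

        if (buf->i + buf->o == buf->size)
            return 0;                             /* buffer is full */
        if (left >= right)
            right = buf->data + buf->size;        /* free area ends at the edge */
        return right - left;
    }

    /* same thing for output data: input data are assumed not to exist */
    static size_t bo_contig_space(const struct buffer *buf)
    {
        const char *left  = buf->p;               /* bo_end */
        const char *right = buf->p - buf->o;      /* start of output data */

        if (right < buf->data)
            right += buf->size;                   /* output wrapped */

        if (buf->o == buf->size)
            return 0;                             /* buffer is full */
        if (left >= right)
            right = buf->data + buf->size;
        return right - left;
    }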
In addition, to clarify the buffer API, buffer_realign no longer returns a
value, so it has the same API as buffer_slow_realign.
This patch can be backported to 1.7, 1.6 and 1.5.
This patch adds a 'no-agent-check' setting supported by both 'default-server'
and 'server' directives. On a 'server' line, it disables the agent check for a
specific server which would otherwise have 'agent-check' set as a default
value (inherited from a 'default-server' 'agent-check' setting). On a
'default-server' line, it disables 'agent-check' as a default value for any
further 'server' declarations.
For instance, provided this configuration:
    default-server agent-check
    server srv1
    server srv2 no-agent-check
    server srv3
    default-server no-agent-check
    server srv4
srv1 and srv3 would have the agent check enabled, contrary to srv2 and srv4.
We also no longer allocate anything when parsing the 'default-server'
'agent-check' setting.
This patch makes 'default-server' directives support 'sni' settings.
A field 'sni_expr' has been added to 'struct server' to temporarily store SNI
expressions as strings during the parsing of both 'default-server' and
'server' lines. So, to duplicate SNI expressions from a 'default-server' 'sni'
setting for new 'server' instances, we only have to strdup() these strings, as
is done for most 'server' settings.
Then, sample expressions are computed by calling sample_parse_expr() (only for
'server' instances).
A new function has been added to produce the same error output as before in
case of any error during 'sni' setting parsing (display_parser_err()).
Should not break anything.
Before this patch, the 'check' setting was only supported by 'server'
directives. This patch makes 'default-server' directives support it as well.
A new 'no-check' keyword parser has been implemented to disable this setting both
in 'default-server' and 'server' directives.
Should not break anything.
When SIGUSR1 is received, haproxy enters soft-stop and quits when no
connection remains.
It can happen that the instance remains alive for a long time, depending
on timeouts and traffic. This option ensures that soft-stop won't run
for too long.
Example:
    global
        hard-stop-after 30s    # Once in soft-stop, the instance will remain
                               # alive for at most 30 seconds.
We'll need to differentiate between pollers which can report hangup at the
same time as read (POLL_RDHUP) and the other ones, because only the former may
benefit from the fd_done_recv() optimization. Epoll has had support for
EPOLLRDHUP since Linux 2.6.17 and has always been used this way in haproxy, so
now we only set the flag once we've observed it at least once on a response.
It means that some initial requests may try to perform a second recv() call,
but after the first closed connection it will be enough to know that the
second call is not needed anymore.
Later we may extend these flags to designate event-triggered pollers.
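For illustration, this is how a poller registers for that event with the
standard epoll API (an example sketch, not haproxy code):

    #include <sys/epoll.h>

    /* Register fd for read events plus peer-shutdown notification.
     * When EPOLLRDHUP is reported together with EPOLLIN, the reader
     * already knows the peer has closed, so the second recv() call
     * otherwise needed to observe the close can be skipped.
     */
    static int register_rdhup(int epfd, int fd)
    {
        struct epoll_event ev = { 0 };

        ev.events  = EPOLLIN | EPOLLRDHUP;
        ev.data.fd = fd;
        return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    }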
A tcp half connection can cause 100% CPU on expiration.
First reproduced with this haproxy configuration :
    global
        tune.bufsize 10485760
    defaults
        timeout server-fin 90s
        timeout client-fin 90s
    backend node2
        mode tcp
        timeout server 900s
        timeout connect 10s
        server def 127.0.0.1:3333
    frontend fe_api
        mode tcp
        timeout client 900s
        bind :1990
        use_backend node2
I.e., with timeout server-fin shorter than timeout server: the backend server
sends data, and this packet is left in haproxy's buffer; the backend server
then sends a FIN packet, which haproxy receives. At this point the session
information is as follows:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
s1=[7,18h,fd=7,ex=] exp=14m58s
rp has the CF_SHUTR flag set. Next, the client sends its FIN packet, and the
session information becomes:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
s1=[9,10h,fd=7,ex=] exp=1m11s
After waiting 90s, session information is as follows:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=04 age=4m11s calls=718074391 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=?,wx=10m49s,ax=] s0=[7,0h,fd=6,ex=]
s1=[9,10h,fd=7,ex=] exp=? run(nice=0)
cpu information:
6899 root 20 0 112224 21408 4260 R 100.0 0.7 3:04.96 haproxy
Buffering is set up to ensure that there is data in the haproxy buffer and
that haproxy can receive the FIN packet and set the CF_SHUTR flag. Once
CF_SHUTR has been set, the following code does not clear the timeout, causing
100% CPU:
stream.c:process_stream:

    if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
        if (si_b->flags & SI_FL_NOHALF)
            si_b->flags |= SI_FL_NOLINGER;
        si_shutr(si_b);
    }
Once the read side has been closed, setting the read timeout makes no sense.
Yet with or without CF_SHUTR, the read timeout is set:
    if (tick_isset(s->be->timeout.serverfin)) {
        res->rto = s->be->timeout.serverfin;
        res->rex = tick_add(now_ms, res->rto);
    }
After discussion on the mailing list, setting half-closed timeouts the
hard way here doesn't make sense. They should be set only at the moment
the shutdown() is performed. It will also solve a special case which was
already reported of some half-closed timeouts not working when the shutw()
is performed directly at the stream-interface layer (no analyser involved).
Since the stream interface layer cannot know the timeout values, we'll have
to store them directly in the stream interface so that they are used upon
shutw(). This patch does this, fixing the problem.
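A minimal sketch of that idea, with stand-in names and types (the exact
haproxy fields and helpers differ):

    struct channel          { int rto, rex; };  /* read timeout + expiry */
    struct stream_interface { int hcto;      }; /* half-closed timeout   */

    static int tick_isset(int t)            { return t != 0; }
    static int tick_add(int now, int delay) { return now + delay; }

    /* the half-closed timeout is armed only when the shutdown actually
     * happens, instead of being forced from process_stream()
     */
    static void si_shutw_sketch(struct stream_interface *si,
                                struct channel *ic, int now_ms)
    {
        /* ... shut the write side down here ... */
        if (tick_isset(si->hcto)) {
            ic->rto = si->hcto;
            ic->rex = tick_add(now_ms, ic->rto);
        }
    }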
An easier reproducer to validate the fix is to keep the huge buffer and
shorten all timeouts, then call it under tcploop server and client, and
wait 3 seconds to see haproxy run at 100% CPU :
    global
        tune.bufsize 10485760
    listen px
        bind :1990
        timeout client 90s
        timeout server 90s
        timeout connect 1s
        timeout server-fin 3s
        timeout client-fin 3s
        server def 127.0.0.1:3333
$ tcploop 3333 L W N20 A P100 F P10000 &
$ tcploop 127.0.0.1:1990 C S10000000 F
"sample-fetch which captures the cipherlist" patch introduce #define
do deal with trace functions only available in openssl > 1.0.2.
Add this #define to libressl and boringssl environment.
Thanks to Piotr Kubaj for postponing and testing with libressl.
SSL_CTX_set_ecdh_auto is declared (when present) as a #define. A simple #ifdef
avoids having to list all the ssl lib cases. It's a placebo in newer ssl libs.
It's ok with openssl 1.0.1, 1.0.2, 1.1.0, libressl and boringssl.
Thanks to Piotr Kubaj for postponing and testing with libressl.
Despite the previous commit working fine in all tests, it's still not
sufficient to completely address the problem. If the connection handler is
called with an event validating an L4 connection while some handshakes remain
(eg: accept-proxy), it will still wake the function up, which will not report
the activity, and will not detect a change once the handshake is complete, so
it will not notify the ->wake() handler.
In fact the only reason why the ->wake() handler is still called here
is because after dropping the last handshake, we try to call ->recv()
and ->send() in turn and change the flags in order to detect a data
activity. But if for any reason the data layer is not interested in
reading nor writing, it will not get these events.
A cleaner way to address this is to call the ->wake() handler only
on definitive status changes (shut, error), on real data activity,
and on a complete connection setup, measured as CONNECTED with no
more handshake pending.
It could be argued that the handshake flags have to be made part of
the condition to set CO_FL_CONNECTED but that would currently break
a part of the health checks. Also a handshake could appear at any
moment even after a connection is established so we'd lose the
ability to detect a second end of handshake.
For now the situation around CO_FL_CONNECTED is not clean :
  - session_accept() only sets CO_FL_CONNECTED if there's no pending
    handshake ;
  - conn_fd_handler() will set it once L4 and L6 are complete, which
    will do what session_accept() above refrained from doing even if
    an accept_proxy handshake is still pending ;
  - ssl_sock_infocbk() and ssl_sock_handshake() consider that a
    handshake performed with CO_FL_CONNECTED set is a renegotiation ;
    => they should instead filter on CO_FL_WAIT_L6_CONN
  - all ssl_fc_* sample fetch functions wait for CO_FL_CONNECTED before
    accepting to fetch information
    => they should also get rid of any pending handshake
  - smp_fetch_fc_rcvd_proxy() uses !CO_FL_CONNECTED instead of
    CO_FL_ACCEPT_PROXY
  - health checks (standard and tcp-checks) don't check for HANDSHAKE
    and may report a successful check based on CO_FL_CONNECTED while
    not yet done (eg: send buffer full on send_proxy).
This patch aims at solving some of these side effects in a backportable
way before this is reworked in depth :
  - we need to call ->wake() to report connection success, measure
    connection time, notify that the data layer is ready and update
    the data layer after activity ; this has to be done either if
    we switch from pending {L4,L6}_CONN to nothing with no handshakes
    left, or if we notice some handshakes were pending and are now
    done.
  - we document that CO_FL_CONNECTED exactly means "L4 connection
    setup confirmed at least once, L6 connection setup confirmed
    at least once or not necessary, all this regardless of any
    possibly remaining handshakes or future L6 negotiations".
This patch also renames CO_FL_CONN_STATUS to the more explicit
CO_FL_NOTIFY_DATA, and works around the previous flag trick consisting
in setting an impossible combination of flags to notify the data layer,
by simply clearing the current flags.
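As an illustration of the resulting rule, here is a self-contained sketch;
only the CO_FL_* names come from this commit, while the flag values, types
and surrounding logic are stand-ins:

    enum {
        CO_FL_WAIT_L4_CONN = 0x01,
        CO_FL_WAIT_L6_CONN = 0x02,
        CO_FL_HANDSHAKE    = 0x04,   /* mask of all pending handshakes */
        CO_FL_CONNECTED    = 0x08,
        CO_FL_ERROR        = 0x10,
        CO_FL_SOCK_RD_SH   = 0x20,
    };

    struct connection {
        unsigned int flags;
        void (*wake)(struct connection *conn);
    };

    static void conn_report_progress(struct connection *conn,
                                     unsigned int prev_flags)
    {
        unsigned int pending = CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN |
                               CO_FL_HANDSHAKE;

        if (!(conn->flags & pending))
            conn->flags |= CO_FL_CONNECTED; /* setup confirmed at least once */

        /* wake the data layer only on a definitive status change, or
         * when the last setup/handshake flag has just been dropped
         */
        if ((conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH)) ||
            ((prev_flags & pending) && !(conn->flags & pending)))
            conn->wake(conn);
    }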
This fix should be backported to 1.7, 1.6 and 1.5.
When a filter is used, there are two channel analyzers that surround all the
others: flt_start_analyze and flt_end_analyze. They are the right place to
acquire and release resources used by filters, when needed. In addition, the
last one is used to synchronize both channels, especially for HTTP streams: we
must wait until the analysis is finished on both channels of an HTTP
transaction before restarting it for the next one.
But this part was buggy, leading to unexpected behaviours. First, depending on
which channel ends first, the request or the response could be switched into a
"forward forever" mode. Then, the HTTP transaction could be cleaned up too
early, while processing was still in progress on a channel.
To fix the bug, the flag CF_FLT_ANALYZE has been added. It is set on channels
in flt_start_analyze and is kept as long as at least one filter is still
analyzing the channel. So, we can trigger the channel synchronization once
this flag has been removed from both channels. In addition, the flag
TX_WAIT_CLEANUP has been added on the transaction to know whether the
transaction must be cleaned up or not during channel synchronization. This
way, we are sure to reset everything once all the processing is finished.
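A simplified, self-contained sketch of the resulting rule (the types are
stand-ins; the flag names come from this patch):

    #define CF_FLT_ANALYZE  0x01
    #define TX_WAIT_CLEANUP 0x01

    struct channel  { unsigned int flags; };
    struct http_txn { unsigned int flags; };

    /* returns 1 when it is safe to clean up the transaction: no filter
     * analyzes either channel anymore and a cleanup was requested
     */
    static int flt_may_cleanup(struct channel *req, struct channel *res,
                               struct http_txn *txn)
    {
        if ((req->flags | res->flags) & CF_FLT_ANALYZE)
            return 0;                      /* a filter is still running */
        if (!(txn->flags & TX_WAIT_CLEANUP))
            return 0;                      /* no cleanup pending */
        txn->flags &= ~TX_WAIT_CLEANUP;
        return 1;
    }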
This patch should be backported to 1.7.
This adds 3 new commands to the cli :

  enable dynamic-cookie backend <backend>
      enables dynamic cookies for a specified backend

  disable dynamic-cookie backend <backend>
      disables dynamic cookies for a specified backend

  set dynamic-cookie-key backend <backend>
      lets one change the dynamic cookie secret key, for a specified backend
This adds a new "dynamic" keyword for the cookie option. If set, a cookie
will be generated for each server (assuming one isn't already provided on
the "server" line), from the IP of the server, the TCP port, and a secret
key provided. To provide the secret key, a new keyword as been added,
"dynamic-cookie-key", for backends.
Example :
    backend bk_web
        balance roundrobin
        dynamic-cookie-key "bla"
        cookie WEBSRV insert dynamic
        server s1 127.0.0.1:80 check
        server s2 192.168.56.1:80 check
This is a first step to be able to dynamically add and remove servers,
without modifying the configuration file, and still have all the load
balancers redirect the traffic to the right server.
Provide a way to generate session cookies, based on the IP address of the
server, the TCP port, and a secret key provided.
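For illustration, a self-contained sketch of such a derivation. The real code
hashes the same inputs (key, address, port), but the hash used here (FNV-1a)
is only a stand-in chosen to keep the example dependency-free:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t fnv1a(uint64_t h, const void *data, size_t len)
    {
        const unsigned char *p = data;

        while (len--)
            h = (h ^ *p++) * 0x100000001b3ULL;
        return h;
    }

    /* derive a stable per-server cookie from key + address + port */
    static void gen_dynamic_cookie(const char *key, const void *addr,
                                   size_t addrlen, uint16_t port,
                                   char *out, size_t outlen)
    {
        uint64_t h = 0xcbf29ce484222325ULL;

        h = fnv1a(h, key, strlen(key));
        h = fnv1a(h, addr, addrlen);        /* server IP, v4 or v6     */
        h = fnv1a(h, &port, sizeof(port));  /* TCP port (host order)   */
        snprintf(out, outlen, "%016llx", (unsigned long long)h);
    }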
This may be used to output the JSON schema which describes the output of
show info json and show stats json.
The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.
e.g.:
$ echo "show schema json" | socat /var/run/haproxy.stat stdio | \
python -m json.tool
The implementation does not generate the schema. Some consideration could
be given to integrating the output of the schema with the output of
typed and json info and stats. In particular the types (u32, s64, etc...)
and tags.
A sample verification of show info json and show stats json using
the schema is as follows. It uses the jsonschema python module:
cat > jschema.py << __EOF__
import json
from jsonschema import validate
from jsonschema.validators import Draft3Validator

with open('schema.txt', 'r') as f:
    schema = json.load(f)
Draft3Validator.check_schema(schema)

with open('instance.txt', 'r') as f:
    instance = json.load(f)
validate(instance, schema, Draft3Validator)
__EOF__
$ echo "show schema json" | socat /var/run/haproxy.stat stdio > schema.txt
$ echo "show info json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
$ echo "show stats json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
Signed-off-by: Simon Horman <horms@verge.net.au>
Add a json parameter to show (info|stat) which will output information
in JSON format. A follow-up patch will add a JSON schema which describes
the format of the JSON output of these commands.
The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.
e.g.:
$ echo "show info json" | socat /var/run/haproxy.stat stdio | \
python -m json.tool
STAT_STARTED has been added in order to track whether show output has begun or
not. This is used to allow the JSON output routines to only insert a ","
between elements when needed. I would value any feedback on how this might be
done better.
Signed-off-by: Simon Horman <horms@verge.net.au>
This commit removes the second argument (msgnum) from http_error_message and
changes http_error_message to use s->txn->status/http_get_status_idx to map
status codes from 200..504 to HTTP_ERR_200..HTTP_ERR_504 (enum).
This is needed for the http-request tarpit deny_status commit.
This is like the nbsrv() sample fetch function except that it works as a
converter, so it can count the number of available servers of a backend whose
name is retrieved using a sample fetch or an environment variable.
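An illustrative configuration example (the exact syntax here is assumed from
the description above, with the backend name taken from an environment
variable):

    # deny requests when the backend named in $ACTIVE_BACKEND has no
    # usable server left
    http-request deny if { env(ACTIVE_BACKEND),nbsrv lt 1 }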
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
This option can be used to enable or to disable (prefixing the option line with
the "no" keyword) the sending of fragmented payload to agents. By default, this
option is enabled.
These options can be used to enable or to disable (prefixing the option line
with the "no" keyword), respectively, pipelined and asynchronous exchanges
between HAProxy and agents. By default, the pipelining and async options are
enabled.
Now, when a payload is fragmented, the first frame must define the frame type
and the following ones must use the special type SPOE_FRM_T_UNSET. This way,
it is easy to know whether a fragment is the first one or not. Of course, all
frames must still share the same stream-id and frame-id.
Update SPOA example accordingly.
Now, as for peers, we use an opaque pointer to store information related to
the SPOE filter in the appctx structure. This information is now stored in a
dedicated structure (spoe_appctx) and allocated, using a pool, when the applet
is created.
This removes the dependency between applets and the SPOE filter and avoids
inflating the appctx structure.
Now, HAProxy and agents can announce support for the "pipelining" and/or
"async" capabilities during the HELLO handshake. For now, HAProxy always
announces support for both. In addition, in its HELLO frames, HAProxy adds the
"engine-id" key. It is a unique string that identifies a SPOE engine.
The "pipelining" capability is the ability for a peer to decouple NOTIFY and
ACK frames. This is a symmetrical capability. To be used, it must be supported
by both HAProxy and agents. Unlike HTTP pipelining, the ACK frames can be sent
in any order, but always on the same TCP connection as the one used for the
corresponding NOTIFY frame.
The "async" capability is similar to the pipelining, but here any TCP connection
established between HAProxy and the agent can be used to send ACK frames. if an
agent accepts connections from multiple HAProxy, it can use the "engine-id"
value to group TCP connections.
Bug introduced with "removes SSL_CTX_set_ssl_version call and cleanup CTX
creation": ssl_sock_new_ctx was called before the whole bind line was parsed.
The fix consists in separating the use of default_ctx as the initialization
context of the SSL connection via bind_conf->initial_ctx. initial_ctx contains
all the necessary parameters before performing the selection of the CTX:
default_ctx is processed as the other ctxs, without unnecessary parameters.
This patch uses BoringSSL's callback to analyse the ClientHello before any
handshake in order to extract key signature capabilities.
The certificate with the better signature (ECDSA before RSA) is chosen
transparently, if the client can support it. RSA and ECDSA certificates can
be declared in a row (without any particular order). This makes it possible
to set different ssl and filter parameters with crt-list.
The trash buffers are becoming increasingly complex to deal with due to the
code's modularity allowing some functions to be chained, causing the same
chunk buffers to be used multiple times along the chain and possibly
corrupting each other. In fact, the trash chunks were designed from scratch to
explicitly not survive a function call, but string manipulation makes this
impossible most of the time, while not fulfilling the need for reliable
temporary chunks.
Here we introduce the ability to allocate a temporary trash chunk which
is reserved, so that it will not conflict with the trash chunks other
functions use, and will even support reentrant calls (eg: build_logline).
For this, we create a new pool which is exactly the size of a usual chunk
buffer plus the size of the chunk struct, so that these chunks, when
allocated, are exactly the same size as the ones returned by
get_trash_chunk(). These allocations may fail, so the caller must check the
result, and the caller is also responsible for freeing them.
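A usage sketch of that pattern; the alloc_trash_chunk()/free_trash_chunk()
names and the malloc-based stand-in below are assumptions made only to keep
the sketch self-contained (the real allocator is pool-backed):

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_SIZE 16384  /* same size as a usual trash buffer */

    struct chunk { char *str; int size, len; };

    /* stand-in allocator: one allocation holds the chunk struct plus
     * its buffer, exactly as described above
     */
    static struct chunk *alloc_trash_chunk(void)
    {
        struct chunk *c = malloc(sizeof(*c) + CHUNK_SIZE);

        if (!c)
            return NULL;
        c->str  = (char *)(c + 1);
        c->size = CHUNK_SIZE;
        c->len  = 0;
        return c;
    }

    static void free_trash_chunk(struct chunk *c)
    {
        free(c);
    }

    /* caller side: the chunk may fail to allocate, survives nested
     * calls that use the shared trash, and must be freed by its owner
     */
    static int caller(void)
    {
        struct chunk *tmp = alloc_trash_chunk();

        if (!tmp)
            return 0;
        tmp->len = snprintf(tmp->str, tmp->size, "temporary data");
        /* ... safe to call functions that use the shared trash ... */
        free_trash_chunk(tmp);
        return 1;
    }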
The code focuses on minimal changes and ease of reliable backporting
because it will be needed in stable versions in order to support next
patch.
The function dns_init_resolvers() is used to initialize the sockets used to
send DNS queries.
This patch gives the function the ability to close a socket before re-opening
it.
[wt: this needs to be backported to 1.7 for next fix]
Right now not only are we limited to 8 bits, but it's mentioned nowhere and
the limit was already reached. In addition, pp_opts (proxy protocol options)
were set to 32 bits while only 3 are needed. So let's swap these two and group
them together to avoid leaving two holes in the structure, saving 64 bits on
64-bit machines.
A recent patch to support BoringSSL caused this warning to appear on
OpenSSL 1.1.0 :
src/ssl_sock.c:3062:4: warning: statement with no effect [-Wunused-value]
It's caused by SSL_CTX_set_ecdh_auto() which is now only a macro testing
that the last argument is zero, and the result is not used here. Let's
just kill it for both versions.
Tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0. This fix may be backported
to 1.7 if the boringssl fix is as well.
This function was deprecated in 1.1.0 causing this warning :
src/ssl_sock.c:551:3: warning: 'RAND_pseudo_bytes' is deprecated (declared at /opt/openssl-1.1.0/include/openssl/rand.h:47) [-Wdeprecated-declarations]
The man suggests to use RAND_bytes() instead. While the return codes
differ, it turns out that the function was already misused and was
relying on RAND_bytes() return code instead.
The patch was tested on 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This fix must be backported to 1.7 and the return code check should
be backported to earlier versions if relevant.
In 1.0.0, this function was replaced with ERR_remove_thread_state().
As of openssl 1.1.0, both are now deprecated and do nothing at all.
Thus we simply make this call do nothing in 1.1.0 to silence the
warning.
The change was tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This kills the following warning on 1.1.0 :
src/ssl_sock.c:7266:9: warning: 'ERR_remove_state' is deprecated (declared at /dev/shm/openssl-1.1.0b/include/openssl/err.h:247) [-Wdeprecated-declarations]
This fix should be backported to 1.7.
After the code was ported to support 1.1.0, this one broke on 1.0.0 :
src/shctx.c:406: undefined reference to `SSL_SESSION_set1_id_context'
The function was indeed introduced only in 1.0.1. The build was validated
with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This fix must be backported to 1.7.
Limitations:
  . disable force-ssl/tls (needs more work)
    should be set earlier with SSL_CTX_new (SSL_CTX_set_ssl_version is removed)
  . disable generate-certificates (needs more work)
    introduce SSL_NO_GENERATE_CERTIFICATES to disable generate-certificates.
Clean up some #ifdefs and types related to the boringssl environment.
crt-list is extended to support ssl configuration. You can now have such
lines in a crt-list <file>:
    mycert.pem [npn h2,http/1.1]
Supported settings include the "npn", "alpn", "verify", "ca_file",
"crl_file", "ecdhe" and "ciphers" configuration, and ssl options.
"crt-base" is also supported to fetch certificates.
The previous version used an O(number of proxies)^2 algorithm to get the sum
of the maxconns of the frontends which reference a backend at least once.
This new version adds the frontend's maxconn number to the backend's
'tot_fe_maxconn' struct proxy member when the backend name is resolved for
switching rules or a default_backend statement. At the end, the final
backend's fullconn is computed by looping only once over all proxies, in
O(n).
The load time of a configuration using a large number of backends (10
thousand) without a configured fullconn was reduced from several minutes to a
few seconds.
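A simplified sketch of the new accounting; the field names follow the commit
message, while the 10% default factor and surrounding types are assumptions:

    struct proxy {
        unsigned int maxconn;
        unsigned int tot_fe_maxconn; /* sum of referencing frontends */
        unsigned int fullconn;
        struct proxy *next;
    };

    /* called once per frontend when its reference to a backend is
     * resolved (switching rule or default_backend)
     */
    static void account_frontend(struct proxy *be, const struct proxy *fe)
    {
        be->tot_fe_maxconn += fe->maxconn;
    }

    /* final O(n) pass over all proxies */
    static void compute_fullconn(struct proxy *proxies)
    {
        struct proxy *px;

        for (px = proxies; px; px = px->next)
            if (!px->fullconn)
                px->fullconn = (px->tot_fe_maxconn + 9) / 10;
    }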
Keeping the address and the port in the same field causes a lot of problems,
specifically on the DNS part where we're forced to cheat on the family to be
able to keep the port. This causes some issues such as some families not being
resolvable anymore.
This patch first moves the service port to a new field "svc_port" so that the
port field is never used anymore in the "addr" field (struct sockaddr_storage).
All call places were adapted (there aren't that many).
fc_rcvd_proxy : boolean
  Returns true if the client initiated the connection with a PROXY protocol
  header.
A flag is added to the struct connection when a PROXY header is successfully
parsed.
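An illustrative (assumed) usage, rejecting connections that did not present a
PROXY protocol header:

    tcp-request connection reject unless { fc_rcvd_proxy }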
The older 'rsprep' directive allows modification of the status reason.
Extend 'http-response set-status' to take an optional string of the new
status reason.
http-response set-status 418 reason "I'm a coffeepot"
Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status
Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
debug_hexdump() prints to the requested output stream (typically stdout or
stderr) a hex dump of the blob passed in argument. This is useful to help
debug binary protocols.
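A self-contained sketch of what such a helper produces (this is not the
haproxy implementation, just the classic hex+ASCII dump shape):

    #include <ctype.h>
    #include <stdio.h>

    static void hexdump(FILE *out, const unsigned char *buf, size_t len)
    {
        size_t i, j;

        for (i = 0; i < len; i += 16) {
            fprintf(out, "%08zx: ", i);
            for (j = i; j < i + 16; j++) {
                if (j < len)
                    fprintf(out, "%02x ", buf[j]);
                else
                    fputs("   ", out);        /* pad the last line */
            }
            fputc(' ', out);
            for (j = i; j < i + 16 && j < len; j++)
                fputc(isprint(buf[j]) ? buf[j] : '.', out);
            fputc('\n', out);
        }
    }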
Error captures almost always report a state 26 (MSG_ERROR) making it
very hard to know what the parser was expecting. The reason is that
we have to switch to MSG_ERROR to trigger the dump, and then during
the dump we capture the current state which is already MSG_ERROR. With
this change we now copy the current state into an err_state field that
will be reported as the faulty state.
This patch looks a bit large because the parser doesn't update the current
state until it runs out of data, so the current state is never known when
jumping to the error label! Thus the code had to be updated to take copies of
the current state before switching to MSG_ERROR, based on the switch/case
values.
As a bonus, it now shows the current state in human-readable form and
not only in numeric form ; in the past it was not an issue since it was
always 26 (MSG_ERROR).
At least now we can get exploitable invalid request/response reports :
  [05/Jan/2017:19:28:57.095] frontend f (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #1
    src 127.0.0.1:39894, session #4, session flags 0x00000080
    HTTP msg state MSG_RQURI(4), msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00908002, out 0 bytes, total 20 bytes
    pending 20 bytes, wrapping at 16384, error at position 5:

    00000  GET /\e HTTP/1.0\r\n
    00017  \r\n
    00019  \n

  [05/Jan/2017:19:28:33.827] backend b (#3): invalid response
    frontend f (#2), server s1 (#1), event #0
    src 127.0.0.1:39718, session #0, session flags 0x000004ce
    HTTP msg state MSG_HDR_NAME(17), msg flags 0x00000000, tx flags 0x08300000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x80008002, out 0 bytes, total 59 bytes
    pending 59 bytes, wrapping at 16384, error at position 31:

    00000  HTTP/1.1 200 OK\r\n
    00017  Content-length : 10\r\n
    00038  \r\n
    00040  0a\r\n
    00044  0123456789\r\n
    00056  0\r\n
This should be backported to 1.7 and 1.6 at least to help with bug
reports.
It is important to define analyzers (AN_REQ_* and AN_RES_*) in the same order
they are evaluated in process_stream. This order is really important because,
during analyzer evaluation, we run them from the lowest bit to the highest.
This way, when an analyzer adds/removes another one during its evaluation, we
know if it is located before or after it. So, when it adds an analyzer which
is located before it, we can switch to it immediately, even if it has already
been called once but removed since.
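A self-contained sketch of that evaluation rule (the real process_stream()
uses an explicit ordered chain; this only illustrates the lowest-bit-first
principle):

    struct channel { unsigned int analysers; };

    typedef int (*analyser_fn)(struct channel *chn);

    static void run_analysers(struct channel *chn, analyser_fn fns[32])
    {
        unsigned int bit;

        while ((bit = chn->analysers & -chn->analysers)) {
            /* lowest set bit first: if a handler re-adds a lower bit,
             * that analyzer runs again before any higher one
             */
            if (!fns[__builtin_ctz(bit)](chn))
                break;                  /* analyzer needs to wait */
            chn->analysers &= ~bit;     /* done, move to the next */
        }
    }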
Over time, with the introduction of new analyzers, this order was broken. The
main problems came from the filter analyzers: we used values unrelated to
their evaluation order, and we used the same values for request and response
analyzers.
So, to fix the bug, the filter analyzers have been split into 2 distinct lists
so that the request channel has different analyzers from the response channel.
And of course, we have moved them to the right place.
Some other analyzers have been reordered to respect the evaluation order:
  * AN_REQ_HTTP_TARPIT has been moved just before AN_REQ_SRV_RULES
  * AN_REQ_PRST_RDP_COOKIE has been moved just before AN_REQ_STICKING_RULES
  * AN_RES_STORE_RULES has been moved just after AN_RES_WAIT_HTTP
Note that today we have 29 analyzers, all stored in a 32-bit bitfield. So we
can still add a few more analyzers before having a problem. A good way to
fend off the problem for a while could be to have a different bitfield for
request and response analyzers.
[wt: all of this must be backported to 1.7, and part of it must be backported
to 1.6 and 1.5]