This is a first step towards avoiding having to constantly reinitialize
chunks. It replaces the old chunk_reset(), which was improperly named
since it used to drop everything and was only used by chunk_destroy().
That function has been renamed chunk_drop().
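As an illustration of the intended split, here is a minimal sketch with
assumed field names (not the actual code):

    struct chunk {
        char *str;    /* storage area */
        int   size;   /* allocated size, 0 = no storage */
        int   len;    /* current length */
    };

    /* reinitialize: keep the storage so it can be reused */
    static inline void chunk_reset(struct chunk *chk)
    {
        chk->len = 0;
    }

    /* drop everything: forget the storage, which is what chunk_destroy() needs */
    static inline void chunk_drop(struct chunk *chk)
    {
        chk->str  = NULL;
        chk->size = 0;
        chk->len  = 0;
    }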
We don't want the lower layer to forward a close while we're compressing,
and we want the system to fuse outgoing TCP segments using MSG_MORE as
much as possible, to save the round trips that result from sending short
packets with a PUSH flag.
A test on a busy remote DSL line, consisting in compressing on the fly
a 100MB file full of zeroes, showed a transfer rate of only a few kB/s
due to these round trips.
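For reference, a minimal sketch of how MSG_MORE is meant to be used here
(hypothetical helper, not the actual haproxy code):

    #include <sys/types.h>
    #include <sys/socket.h>

    static ssize_t send_block(int fd, const void *buf, size_t len, int more_coming)
    {
        int flags = MSG_NOSIGNAL;

        if (more_coming)
            flags |= MSG_MORE;   /* let the kernel fuse this with the next segment */

        return send(fd, buf, len, flags);
    }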
We will need to be able to switch server connections on a session and
to keep idle connections. In order to achieve this, the preliminary
requirement is that the connections can survive the session and be
detached from it.
Right now they're still allocated at exactly the same place, so when
there is a session, there are always 2 connections. We could soon
improve on this by allocating the outgoing connection only during a
connect().
This current patch touches a lot of code and intentionally does not
change any functionality. Performance tests show no regression (even
a very minor improvement). The doc has not yet been updated.
Thomas Heil reported that health checks did not work anymore when a backend
or server has "usesrc clientip". This is because the source address is not
set and tcp_bind_socket() tries to bind to that address anyway.
The solution consists in explicitly clearing the source address in the
checks and in making tcp_bind_socket() avoid binding when the address is
not set. This also has the indirect benefit of avoiding a useless bind()
syscall when using "source 0.0.0.0 usesrc clientip" in health checks.
From the beginning it has been said that -D must always be used on the
command line from startup scripts so that haproxy does not accidentally
stay in the foreground when loaded from an init script... Except that
this has not been true for a long time now.
The fix is easy and must be backported to 1.4, which is affected too.
ssl_c_notbefore: start date of client cert (string, eg: "121022182230Z" for YYMMDDhhmmss[Z])
ssl_c_notafter: end date of client cert (string, eg: "121022182230Z" for YYMMDDhhmmss[Z])
ssl_f_notbefore: start date of frontend cert (string, eg: "121022182230Z" for YYMMDDhhmmss[Z])
ssl_f_notafter: end date of frontend cert (string, eg: "121022182230Z" for YYMMDDhhmmss[Z])
ssl_c_key_alg: algorithm used for the client cert's key (ex: rsaEncryption)
ssl_f_key_alg: algorithm used for the frontend cert's key (ex: rsaEncryption)
ssl_c_s_dn : client cert subject DN (string)
ssl_c_i_dn : client cert issuer DN (string)
ssl_f_s_dn : frontend cert subject DN (string)
ssl_f_i_dn : frontend cert issuer DN (string)
These return either the full DN when no parameter is given, just the
requested DN entry (first parameter), or a specific occurrence of that
entry (second parameter).
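For example (hypothetical values), ssl_c_s_dn returns the full subject
DN, ssl_c_s_dn(CN) returns only the CN entry, and ssl_c_s_dn(OU,2)
returns the second occurrence of the OU entry.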
The crappy zlib and openssl libs both define free_func, each as a
different typedef. Using such a generic name in general purpose libraries
is a very clever idea, really... The zlib one is easier to redefine than
openssl's, so let's only fix that one.
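An illustration of the kind of workaround involved (a sketch, not
necessarily the exact change): rename zlib's typedef at preprocessing
time so it can no longer clash with openssl's:

    #define free_func zlib_free_func
    #include <zlib.h>
    #undef free_func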
Decreasing the deflateInit2's memLevel parameter from 9 to 8 does not
affect the compression ratio and increases the compression speed by 12%.
Lower values do not increase transfer speed but decrease the compression
ratio so it looks like 8 is optimal.
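For reference, this is where memLevel sits in the init call (sketch; the
gzip windowBits value is an assumption of this example):

    #include <string.h>
    #include <zlib.h>

    static int init_gzip_stream(z_stream *strm)
    {
        memset(strm, 0, sizeof(*strm));
        return deflateInit2(strm,
                            Z_BEST_SPEED,        /* compression level */
                            Z_DEFLATED,          /* only supported method */
                            15 + 16,             /* windowBits, +16 selects the gzip wrapper */
                            8,                   /* memLevel lowered from 9 to 8 */
                            Z_DEFAULT_STRATEGY);
    }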
A number of older browsers have many issues with compressed contents. It
happens that all these older browsers announce themselves as "Mozilla/4",
and while they are not all broken, the proportion of working browsers
announcing themselves this way is so tiny compared to all the other ones
that it's not worth wasting cycles trying to adapt to every specific one.
So let's simply disable compression for these older browsers.
More information in this very detailed article:
http://zoompf.com/2012/02/lose-the-wait-http-compression
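A sketch of the kind of check this implies (hypothetical helper,
simplified; the real code may differ):

    #include <string.h>

    static int may_compress_for(const char *user_agent)
    {
        /* anything announcing itself as "Mozilla/4" is treated as too old */
        if (user_agent && strncmp(user_agent, "Mozilla/4", 9) == 0)
            return 0;
        return 1;
    }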
This commit introduces HTTP compression using the zlib library.
http_response_forward_body has been modified to call the compression
functions.
This feature includes 3 algorithms: identity, gzip and deflate:
* identity: this is mostly for debugging, and it was useful while
developing the compression feature. With Content-Length in input, it
makes each chunk from the data available in the current buffer. With
chunks in input, it re-chunks: the output chunks will be bigger or
smaller depending on the size of the input chunk and the size of the
buffer. Identity does not apply any change to the data.
* gzip: same as identity, but applying gzip compression. The data are
deflated using the Z_NO_FLUSH flag in zlib. When there is no more data
in the input buffer, it flushes the data in the output buffer
(Z_SYNC_FLUSH). At the end of data, when it receives the last chunk in
input, or when there is no more data to read, it writes the end of data
with Z_FINISH and the ending chunk (see the sketch after this list).
* deflate: same as gzip, but with the deflate algorithm and zlib format.
Note that this algorithm has ambiguous support on many browsers and no
support at all from recent ones. It is strongly recommended not to use
it for anything other than experimentation.
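Sketch of the flush sequence described in the gzip/deflate entries above
(simplified; not the actual http_response_forward_body() code, buffer
handling omitted):

    #include <zlib.h>

    static int compress_step(z_stream *strm, const void *in, unsigned int in_len,
                             void *out, unsigned int out_len, int flush)
    {
        strm->next_in   = (Bytef *)in;
        strm->avail_in  = in_len;
        strm->next_out  = (Bytef *)out;
        strm->avail_out = out_len;

        if (deflate(strm, flush) == Z_STREAM_ERROR)
            return -1;

        return (int)(out_len - strm->avail_out);   /* bytes produced */
    }

    /* typical sequence for one response:
     *   compress_step(..., Z_NO_FLUSH);    while more input remains in the buffer
     *   compress_step(..., Z_SYNC_FLUSH);  when the input buffer has been emptied
     *   compress_step(..., Z_FINISH);      on the last chunk / end of data
     * then deflateEnd() is called and the ending chunk is emitted.
     */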
You can't choose the compression ratio at the moment, it will be set to
Z_BEST_SPEED (1), as tests have shown very little benefit in terms of
compression ratio when going above that for HTML contents, at the cost
of a massive CPU impact.
Compression will be activated depending on the Accept-Encoding request
header. With identity, this header is not taken into account.
To build HAProxy with zlib support, use USE_ZLIB=1 in the make
parameters.
This work was initially started by David Du Colombier at Exceliance.
This state's name is confusing as it is only used with chunked encoding
and makes newcomers think it's also related to the content-length. Let's
call it CHUNK_CRLF to clear any doubt on this.
This tiny function was not inlined because initially it was not much
used. However it's been used in the chunk parser for a while and it
became one of the biggest CPU cycle eaters there. By inlining it, the
chunk parser's speed was increased by 74%. We're almost 3 times faster
than the original with just the last 4 commits.
These functions are not that long and the compiler inlines them well.
Doing so has sped up the chunked encoding parser by 41%!
Note that http_forward_trailers was also declared static because it's not
exported.
Most calls to channel_forward() are performed with short byte counts,
which are already optimized in channel_forward() and take just a few
instructions. Thus it's a waste of CPU cycles to call a function for
this, so let's just inline the short byte count case and fall back to
the common one for the remaining situations.
Doing so has increased the chunked encoding parser's performance by 12%!
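The pattern looks like this (sketch only, with hypothetical fields; the
real bookkeeping is more involved):

    struct channel {
        unsigned int buffered;      /* bytes available in the buffer right now */
        unsigned int to_forward;    /* bytes scheduled for forwarding */
    };

    /* out-of-line general implementation, handles all the remaining cases */
    unsigned long long __channel_forward(struct channel *chn, unsigned long long bytes);

    static inline unsigned long long channel_forward(struct channel *chn,
                                                     unsigned long long bytes)
    {
        if (bytes <= chn->buffered) {           /* the common short-count case */
            chn->buffered   -= (unsigned int)bytes;
            chn->to_forward += (unsigned int)bytes;   /* illustrative bookkeeping only */
            return bytes;
        }
        return __channel_forward(chn, bytes);   /* rare: fall back to the function */
    }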
Commit ceb4ac9c states that IPv6 values are accepted by the "hdr_ip"
ACL, but the code didn't allow it. This patch provides the ability to
accept IPv6 values.
url2sa() mistakenly uses "addr" as a reference. This causes a segfault
when option http_proxy or url_ip is used.
This bug was introduced in haproxy 1.5 and doesn't need to be backported.
Some tests revealed that IPs outside the range of IPv6 subnets were
incorrectly matched (for example "acl BUG src 2804::/16" applied to a
src IP "127.0.0.1"). This is caused by acl_match_ip() applying the mask
in host byte order, whereas it should be applied in network byte order.
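Sketch of the fixed comparison (simplified, not acl_match_ip() itself):
the mask is applied byte by byte on the address as stored, i.e. in
network byte order:

    #include <stdint.h>
    #include <netinet/in.h>

    static int ipv6_prefix_match(const struct in6_addr *addr,
                                 const struct in6_addr *net, int prefix)
    {
        int i;

        for (i = 0; i < 16 && prefix > 0; i++, prefix -= 8) {
            uint8_t mask = (prefix >= 8) ? 0xff : (uint8_t)(0xff << (8 - prefix));

            if ((addr->s6_addr[i] & mask) != (net->s6_addr[i] & mask))
                return 0;
        }
        return 1;   /* all masked bytes matched */
    }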
Using "stats bind-process", it becomes possible to indicate to haproxy which
process will get the incoming connections to the stats socket. It will also
shut down the warning when nbproc > 1.
In some circumstances, if the connection to the server is aborted while
some data were planned to be sent and the poller reported an ability to
send, then conn_fd_handler() would still call conn->data->send(), causing
the data layer to dereference the now NULL conn->xprt and crash.
So we have to check for conn->xprt validity before calling the data
layer.
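In other words, the send notification ends up guarded by something like
this (reduced sketch, not the real conn_fd_handler() types):

    struct connection;
    struct data_cb  { void (*send)(struct connection *conn); };
    struct xprt_ops;
    struct connection {
        const struct xprt_ops *xprt;   /* transport layer, NULL once aborted */
        const struct data_cb  *data;   /* data layer callbacks */
    };

    static void notify_send(struct connection *conn)
    {
        if (!conn->xprt)
            return;                    /* aborted: do not wake the data layer up */
        if (conn->data && conn->data->send)
            conn->data->send(conn);
    }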
This issue was introduced after 1.5-dev12 so it does not need any backport
and does not affect any released version.
Special thanks go to Cristian Ditoiu who once again provided amazing
help to troubleshoot this bug!
It happens that on some systems, the libc is recent enough to permit
building with accept4() but the kernel does not support it. The result
is then a disaster since no connection is accepted. We now detect this
and automatically fall back to accept() and fcntl() when this happens.
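The fallback logic is roughly the following (sketch; error handling
trimmed):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    static int accept_nonblock(int fd, struct sockaddr *addr, socklen_t *len)
    {
        static int accept4_broken;   /* set once the kernel reports ENOSYS */
        int cfd;

        if (!accept4_broken) {
            cfd = accept4(fd, addr, len, SOCK_NONBLOCK);
            if (cfd >= 0 || errno != ENOSYS)
                return cfd;
            accept4_broken = 1;      /* libc has accept4(), kernel does not */
        }

        cfd = accept(fd, addr, len);
        if (cfd >= 0)
            fcntl(cfd, F_SETFL, fcntl(cfd, F_GETFL) | O_NONBLOCK);
        return cfd;
    }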
Please consider the following patch for configuration.txt to clarify
the meaning of bufsize and maxrewrite, and the size of HTTP requests
which can be processed.
Jaroslaw Bojar diagnosed an issue when haproxy switches to tunnel mode
after a transfer. The response data are sent with the MSG_MORE flag,
causing them to be needlessly queued in the kernel. In order to fix this,
we set the CF_NEVER_WAIT flag on the channels when switching to tunnel
mode.
One issue remained with client-side keep-alive: if the response is sent
before the end of the request, it suffers from the same issue for the
same reason. This is easily addressed by setting the CF_SEND_DONTWAIT
flag on the channel once the response has been parsed and we're waiting
for the other side.
The same issue is present in 1.4 so the fix must be backported.
While checking haproxy's SSL stack with www.ssllabs.com, it appeared that
immediately closing upon a failed handshake caused a TCP reset to be emitted.
This is because OpenSSL does not consume pending data in the socket buffers.
One side effect is that if the reset packet is lost, the client might not get
it. So now when a handshake fails, we try to clean the socket buffers before
closing, resulting in a clean FIN instead of an RST.
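Conceptually the cleanup amounts to this (simplified sketch, not the
actual SSL shutdown path):

    #include <sys/socket.h>
    #include <unistd.h>

    static void drain_and_close(int fd)
    {
        char buf[4096];

        /* consume whatever the client already sent so the kernel can emit
         * a clean FIN instead of an RST when we close */
        while (recv(fd, buf, sizeof(buf), MSG_DONTWAIT) > 0)
            ;
        close(fd);
    }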
ACLs and sample fetches use an args list, and it is really not
convenient to check for null args everywhere. Now, for empty args, we
pass a constant list of end-of-list markers. This will allow us to
remove many useless checks.
Fetch keywords which support arguments do not support being called
without parentheses, even if all arguments are optional. Let's fix this
to allow fetch keywords without parentheses, as is already done in
ACLs.
It's sometimes needed to be able to compare a zero-terminated string with a
chunk, so we now have two functions to do that, one strcmp() equivalent and
one strcasecmp() equivalent.
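A minimal sketch of the strcmp() flavour (assumed function and field
names, same reduced struct chunk as in the earlier sketch):

    struct chunk {
        char *str;
        int   size;
        int   len;
    };

    static int chunk_strcmp(const struct chunk *chk, const char *str)
    {
        int i;

        for (i = 0; i < chk->len; i++) {
            if (!str[i] || chk->str[i] != str[i])
                return (unsigned char)chk->str[i] - (unsigned char)str[i];
        }
        return -(unsigned char)str[i];   /* 0 when the string ends exactly here */
    }

The strcasecmp() flavour is identical except that both bytes are passed
through tolower() before being compared.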
The ssl_npn match could not work by itself because clients do not use
the NPN extension unless the server advertises the protocols it supports.
Thanks to Simone Bordet for the explanations on how to get it right.