Struct hlua_txn is called "htxn" or "ht" everywhere, while here it's
called "hs" which is the name used everywhere for struct "hlua_smp".
Such confusion adds to the dangers of copy-pasting code, so let's fix
the name here.
The last bug was an example of a side effect of copy-paste abuse, but
there are other places at risk, so better fix all occurrences of sizeof
to really reference the object's size in order to limit such risks in
the future.
In hlua_converters_new(), we used to allocate the size of an hlua_txn
instead of hlua_smp, resulting in random crashes with one integer being
randomly overwritten at the end, even when no converter is being used.
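For illustration only, a minimal sketch of the pattern with stand-in types
(not the actual hlua code): taking the size from the object itself makes
this class of mistake much harder to introduce.

    #include <stdlib.h>

    /* stand-in definitions for the sketch only */
    struct hlua_smp { int field; };
    struct hlua_txn { int a, b, c; };

    int main(void)
    {
        struct hlua_smp *hs;

        /* risky: the type written here can silently diverge from the
         * variable's real type, which is the bug described above */
        hs = malloc(sizeof(struct hlua_txn));
        free(hs);

        /* safer: the size always follows the object being allocated */
        hs = malloc(sizeof(*hs));
        free(hs);
        return 0;
    }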
Commit a0dc23f ("MEDIUM: http: implement http-request set-{method,path,query,uri}")
forgot to null-terminate the list, resulting in crashes when these actions
are used if the platform doesn't pad the struct with nulls.
Thanks to Gunay Arslan for reporting a detailed trace showing the
origin of this bug.
No backport to 1.5 is needed.
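For illustration only, a minimal sketch of a NULL-terminated keyword list;
the names and types below are stand-ins, not the real action structures.

    #include <stddef.h>
    #include <string.h>

    /* stand-in keyword table for the sketch only */
    struct kw { const char *name; };

    static const struct kw http_req_kws[] = {
        { "set-method" },
        { "set-path"   },
        { "set-query"  },
        { "set-uri"    },
        { NULL }        /* the terminator the fix adds */
    };

    /* scans until the NULL sentinel; without it the loop walks past the
     * end of the array unless the platform happens to zero-pad it */
    static const struct kw *find_kw(const char *name)
    {
        const struct kw *k;

        for (k = http_req_kws; k->name; k++)
            if (strcmp(k->name, name) == 0)
                return k;
        return NULL;
    }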
It's documented that these sample fetch functions should count all headers
and/or all values when called with no name, but in practice that's not what
happens: a missing name causes an immediate return and an absence of
result.
This bug is present in 1.5 as well and must be backported.
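For illustration only, a sketch of the intended behaviour with stand-in
types (not the actual fetch code): a missing name must widen the match to
every header instead of aborting.

    #include <stddef.h>
    #include <strings.h>

    struct hdr { const char *name; const char *value; };  /* stand-in */

    /* counts occurrences of "name", or all headers when name is NULL,
     * which is the documented behaviour restored by the fix */
    static int count_headers(const struct hdr *h, size_t n, const char *name)
    {
        size_t i;
        int cnt = 0;

        for (i = 0; i < n; i++)
            if (!name || strcasecmp(h[i].name, name) == 0)
                cnt++;
        return cnt;
    }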
This library is designed to emit a zlib-compatible stream with no
memory usage and to favor resource savings over compression ratio.
While zlib requires 256 kB of RAM per compression context (and can only
support 4000 connections per GB of RAM), the stateless compression
offered by libslz does not need to retain buffers between subsequent
calls. In theory this slightly reduces the compression ratio but in
practice it does not have that much of an effect since the zlib
window is limited to 32kB.
Libslz is available at :
http://git.1wt.eu/web?p=libslz.git
It was designed for web compression and provides a lot of savings
over zlib in haproxy. Here are the preliminary results on a single
core of a core2-quad 3.0 GHz in 32-bit for only 300 concurrent
sessions visiting the home page of www.haproxy.org (76 kB) with
the default 16kB buffers :
            BW In      BW Out    BW Saved  Ratio  memory VSZ/RSS
  zlib    237 Mbps    92 Mbps   145 Mbps   2.58    84M / 69M
  slz     733 Mbps   380 Mbps   353 Mbps   1.93    5.9M / 4.2M
So while the compression ratio is lower, the bandwidth savings are
much larger due to the significantly lower compression cost, which
allows even more data to be consumed from the servers. In the
example above, zlib became the bottleneck at 24% of the output
bandwidth. The difference in memory usage is also obvious.
More tests run on a single core of a core i5-3320M, with 500 concurrent
users and the default 16kB buffers :
At 100% CPU (no limit) :
           BW In      BW Out    BW Saved  Ratio  memory VSZ/RSS  hits/s
  zlib    480 Mbps   188 Mbps   292 Mbps  2.55    130M / 101M      744
  slz    1700 Mbps   810 Mbps   890 Mbps  2.10   23.7M / 9.7M     2382

At 85% CPU (limited) :
           BW In      BW Out    BW Saved  Ratio  memory VSZ/RSS  hits/s
  zlib   1240 Mbps   976 Mbps   264 Mbps  1.27    130M / 100M     1738
  slz    1600 Mbps   976 Mbps   624 Mbps  1.64   23.7M / 9.7M     2210
The most important benefit really happens when the CPU usage is
limited by "maxcompcpuusage" or the BW limited by "maxcomprate" :
in order to preserve resources, haproxy throttles the compression
ratio until usage is within limits. Since slz is much cheaper, the
average compression ratio is much higher and the input bandwidth
is significantly higher for one Gbps of output.
Other tests made with some reference files :
                            BW In      BW Out     BW Saved   Ratio  hits/s
  daniels.html       zlib  1320 Mbps   163 Mbps  1157 Mbps   8.10    1925
                     slz   3600 Mbps   580 Mbps  3020 Mbps   6.20    5300
  tv.com/listing     zlib   980 Mbps   124 Mbps   856 Mbps   7.90     310
                     slz   3300 Mbps   553 Mbps  2747 Mbps   5.97    1100
  jquery.min.js      zlib   430 Mbps   180 Mbps   250 Mbps   2.39     547
                     slz   1470 Mbps   764 Mbps   706 Mbps   1.92    1815
  bootstrap.min.css  zlib   790 Mbps   165 Mbps   625 Mbps   4.79     777
                     slz   2450 Mbps   650 Mbps  1800 Mbps   3.77    2400
So on top of saving a lot of memory, slz is constantly 2.5-3.5 times
faster than zlib and results in providing more savings for a fixed CPU
usage. For links smaller than 100 Mbps, zlib still provides a better
compression ratio, at the expense of a much higher CPU usage.
Larger input files provide slightly higher bandwidth for both libs, at
the expense of a bit more memory usage for zlib (it converges to 256kB
per connection).
This function used to take a zlib-specific flag as an argument to indicate
whether a buffer flush or the end of contents was met. Let's split it in
two so that we don't depend on zlib anymore.
This algorithm is exactly the same as "deflate" without the zlib wrapper,
and used as an alternative when the browser wants "deflate". All major
browsers understand it and despite violating the standards, it is known
to work better than "deflate", at least on MSIE and some versions of
Safari. Do not use it in conjunction with "deflate", use either one or
the other since both react to the same Accept-Encoding token. Note that
the lack of Adler32 checksum makes it slightly faster.
Thanks to MSIE/IIS, the "deflate" name is ambiguous. According to the RFC
it's a zlib-wrapped deflate stream, but IIS used to send only a raw deflate
stream, which is the only format MSIE understands for "deflate". The other
widely used browsers do support both formats. For this reason some people
prefer to emit a raw deflate stream on "deflate" to serve more users even
if that means violating the standards. Since HAProxy only follows the
standard, they cannot do this with it.
This patch makes it possible to have one algorithm name in the configuration
and another one in the protocol. This will make it possible to have a new
configuration token to add a different algorithm so that users can decide if
they want a raw deflate or the standard one.
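A minimal sketch of the idea (field names are illustrative, not the exact
HAProxy ones): keeping the configuration keyword separate from the protocol
token lets a keyword such as "raw-deflate" still advertise "deflate" on the
wire while being selected differently in the configuration.

    struct algo_names {
        const char *cfg_name;  /* token used in the configuration      */
        const char *ua_name;   /* token matched in Accept-Encoding and
                                * sent in Content-Encoding             */
    };

    static const struct algo_names algos[] = {
        { "identity",    "identity" },
        { "gzip",        "gzip"     },
        { "deflate",     "deflate"  },  /* zlib-wrapped, per the RFC   */
        { "raw-deflate", "deflate"  },  /* raw stream, same wire token */
    };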
There's no reason for exporting identity_* nor deflate_*, they're only
used in the same file. Mark them static, it will make it easier to add
other algorithms.
When checking if the buffer is large enough, we used to rely on a fixed
size that was "apparently" enough. We need to consider the expansion
factor of deflate-encoded streams instead, which is 5 bytes per 32kB.
The previous value was OK till 128kB buffers but became wrong past that.
It's totally harmless since we always keep the reserve when compressing,
so there's 1kB or so available, which is enough for buffers as large as
6.5 MB, but better fix the check anyway.
This fix could be backported into 1.5 since compression was added there.
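For illustration only, the corrected bound following the 5-bytes-per-32kB
figure above; the constants here are a sketch and the exact margin used in
HAProxy may differ.

    #include <stddef.h>

    /* worst-case size of the deflate output for "len" input bytes:
     * one 5-byte block header per started 32kB of input */
    static size_t deflate_worst_case(size_t len)
    {
        return len + 5 * ((len + 32767) / 32768);
    }

    /* a fixed margin stops being safe once buffers grow past 128kB;
     * comparing against deflate_worst_case(len) scales with the buffer */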
Till now we used to rely on a fixed maximum chunk size. Thanks to last
commit we're now free to adjust the chunk's length before sending the
data, so we don't have to use 6 digits all the time anymore, and if
one wants buffers larger than 16 MB it is now possible.
Till now we used to copy the pending outgoing data into the new buffer,
then compute the chunk size, then compress, then fix the chunk size,
then copy the remaining data into the destination buffer. If the
compression would fail for whatever reason (eg: not enough input bytes
to push an extra block), this work still had to be performed for no
added value. It also presents the disadvantage of having to use a fixed
length to encode the chunk size.
Thanks to the body parser changes that went late into 1.5, the buffers
are not modified anymore during these operations. So this patch rearranges
operations so that they're more optimal :
1) init() prepares a new buffer and reserves space in it for pending
outgoing data (no copy) and for chunk size
2) data are compressed
3) only if data were added to the buffer, then the old data are copied
and the chunk size is set.
A few optimisations are still possible to go further :
- decide whether we prefer to copy pending outgoing data from the
old buffer to the new one, or pending incoming compressed data
from the new one to the old one, based on the amount of outgoing
data available. Given that pending outgoing data are rare and the
operation could be complex in the presence of extra input data,
it's probably better to ignore this one ;
- compute the needed length for the chunk size. This would avoid
sending lots of leading zeroes when not needed.
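For the second point, a small sketch of how the needed length could be
computed (illustrative only, not the actual HAProxy code):

    /* number of hex digits needed to encode a chunk size, so the chunk
     * header can be sized to fit instead of always using 6 digits
     * padded with leading zeroes */
    static int chunk_hex_digits(unsigned int len)
    {
        int digits = 1;

        while (len >= 16) {
            len >>= 4;
            digits++;
        }
        return digits;
    }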
This fixes an issue that occurs when backend servers run on different
addresses or ports and you wish to healthcheck them via a consistent
port. For example, if you allocate backends dynamically as containers
that expose different ports and you use an inetd based healthchecking
component that runs on a dedicated port.
By adding the server address and port to the send-state header, the
healthcheck component can deduce which address and port to check by
reading the X-Haproxy-Server-State header out of the healthcheck and
parsing out the address and port.
Lua supports a memory allocator. This is very important as it's the
only way we can control the amount of memory allocatable by Lua scripts.
That prevents bogus scripts from eating all of the system's memory.
The value can be enforced using tune.lua.maxmem in the global section.
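For illustration only, a minimal sketch of such a bounded allocator using
the standard lua_Alloc interface; the names and accounting below are
illustrative, not HAProxy's actual allocator.

    #include <stdlib.h>
    #include <lua.h>

    struct mem_budget {
        size_t allocated;
        size_t limit;      /* 0 = unlimited */
    };

    static void *bounded_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        struct mem_budget *b = ud;

        if (!ptr)
            osize = 0;     /* osize is only a type tag when ptr is NULL */

        if (!nsize) {      /* free request */
            free(ptr);
            b->allocated -= osize;
            return NULL;
        }

        /* refuse any growth that would exceed the configured budget */
        if (b->limit && b->allocated + nsize - osize > b->limit)
            return NULL;

        ptr = realloc(ptr, nsize);
        if (ptr)
            b->allocated += nsize - osize;
        return ptr;
    }

    /* usage: lua_State *L = lua_newstate(bounded_alloc, &budget); */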
This patch adds a global log function. Each log message is written to
stderr and is sent to the default syslog server. These two
actions are performed according to the configuration.
The class name before the function doesn't have the same
syntax as the declared class names.
The return code is not declared in the prototype of the function.
After the last API modification, the hello world example no
longer runs.
This class is accessible via the TXN object. It is created only if
the attached proxy is in HTTP mode. It contains all the HTTP
manipulation functions:
- req_get_headers
- req_del_header
- req_rep_header
- req_rep_value
- req_add_header
- req_set_header
- req_set_method
- req_set_path
- req_set_query
- req_set_uri
- res_get_headers
- res_del_header
- res_rep_header
- res_rep_value
- res_add_header
- res_set_header
This function is a callback for HTTP actions. It creates the
replacement string from a build_logline() format and transforms
the header.
This patch splits this function in two parts. With this modification,
the header transformation and the replacement string are separated.
We can now transform the header with a replacement string coming from
another source than a build_logline() format.
The first part is the replacement engine. It takes a replacement action
number and a replacement string and processes the action.
The second part is the function which is called by the 'http-request
action' to replace a request line part. This function builds the
string used as the replacement.
This split permits using the replacement engine in other parts of the
code than the request action. The Lua code uses it for its own HTTP action.
Last commit ecc9547 ("BUILD: lua: it miss the '-ldl' directive") broke the
build on systems without libdl (eg: FreeBSD). Since Lua requires libdl
on some systems, let's simplify this by adding a USE_DL build directive
to enable/disable use of libdl. It's set by default on all Linux flavors.
These functions used an invalid header parser.
- The trailing white-spaces were embedded in the replacement regex,
- Double-quoted values (") containing commas (,) were not respected.
This patch replaces this parser with the "official" parser http_find_header2().
The function http_replace_value uses the wrong variable to detect the end
of the input string.
This is a regression introduced by the patch "MEDIUM: regex: Remove null
terminated strings." (c9c2daf2)
We need to backport this patch into the 1.5 stable branch.
WT: there is no possibility to overwrite existing data as we only read
past the end of the request buffer, to copy into the trash. The copy
is bounded by buffer_replace2(), just like the replacement performed
by exp_replace(). However if a buffer happens to contain non-zero data
up to the next unmapped page boundary, there's a theoretical risk of
crashing the process despite this not being reproducible in tests.
The risk is low because "http-request replace-value" did not work due
to this bug, which probably means it's not used yet.
If the Lua code causes an infinite loop with no yield possible, the
clock is not updated. This patch checks the clock when the Lua control
code cannot yield.
This bug was introduced by the commit "MEDIUM: http/tcp: permit to
resume http and tcp custom actions" (bc4c1ac6ad).
Before this patch, the return code of the function was ignored.
After that patch, a return code of 0 is treated as a YIELD, and
http_action_set_req_line() returns 0 on success.
This patch changes the return code of this function.
This will be useful later to state that some listeners have to use
certain decoders (typically an HTTP/2 decoder) regardless of the
regular processing applied to other listeners. For now it simply
defaults to the frontend's default target, and it is used by the
session.
Listener->timeout is a vestigial thing going back to 2007 or so. It
used to only be used by stats and peers frontends to hold a pointer
to the proxy's client timeout. Now that we use regular frontends, we
don't use it anymore.
Some services such as peers and CLI pre-set the target applet immediately
during accept(), and for this reason they're forced to have a dedicated
accept() function which does not even properly follow everything the regular
one does (eg: sndbuf/rcvbuf/linger/nodelay are not set, etc).
Let's store the default target when known into the frontend's config so that
it's session_accept() which automatically sets it.
The peers frontend timeout was mistakenly set on timeout.connect instead
of timeout.client, resulting in no timeout being applied to the peers
connections. The impact is just that peers can establish connections and
remain connected until they speak. Once they start speaking, only one of
them will still be accepted, and old sessions will be killed, so the
problem is limited. This fix should however be backported to 1.5 since
it was introduced in 1.5-dev3 with peers.
When we explicitly write si[0] or si[1], we know whether we're working
with s->req or s->res, so better use that instead of si_ic/si_oc(), to
make the code simpler and more readable.
Some fetches use a proxy name as their first parameter. This is used to
identify a frontend, backend or table. This first argument is declared as
mandatory, but the documentation says it is optional.
The behavior of the function smp_resolve_args() is to use the current proxy
if it matches the required argument.
This patch implements the same behavior in the Lua argument checker for
sample-fetches and sample-converters.
The default value is stored in a special struct that describes the
map. This default value is parsed with a special parser. This is
useless because HAProxy provides a place to store the default
value (the args) and HAProxy also provides standard parsers for
the input types (args again).
This patch removes this special storage and replaces it with an argument.
In other words, the args of maps are now declared with their expected
types instead of as strings.
The first argument in the stack is 1 and not 0. So the analyzer
starts at the wrong argument number, and the error message also
reports the wrong argument number.
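For the record, a tiny sketch of the convention (illustrative, not the
actual checker): in the Lua C API the first function argument sits at
stack index 1.

    #include <lua.h>
    #include <lauxlib.h>

    static int check_args(lua_State *L)
    {
        int i, nargs = lua_gettop(L);

        for (i = 1; i <= nargs; i++)   /* the stack starts at 1, not 0 */
            luaL_checkany(L, i);
        return 0;
    }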
It was inappropriate to put this flag on every failed write into an
input buffer because it depends where it happens. When it's in the
context of an analyser (eg: hlua) it makes sense. When it's in the
context of an applet (eg: dumpstats), it does not make sense, and
it only happens to work because currently applets are scheduled by
the sessions. The proper solution for applets would be to add the
flag SI_FL_WAIT_ROOM on the stream interface.
Thus, we now don't set any flag anymore in bi_put* and it's up to the
caller to either set CF_WAKE_WRITE on the channel or SI_FL_WAIT_ROOM
on the stream interface. Changes were applied to hlua, peers and
dumpstats.
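For illustration only, a sketch of the two caller-side cases; the flags
exist in HAProxy but the structs and flag values below are reduced to
stand-ins for the sketch.

    #define CF_WAKE_WRITE   0x1   /* illustrative values only */
    #define SI_FL_WAIT_ROOM 0x2

    struct channel          { unsigned int flags; };  /* stand-in */
    struct stream_interface { unsigned int flags; };  /* stand-in */

    /* analyser-style caller (eg: hlua): ask to be woken up on write */
    static void analyser_buffer_full(struct channel *chn)
    {
        chn->flags |= CF_WAKE_WRITE;
    }

    /* applet-style caller (eg: dumpstats): wait for room on the SI */
    static void applet_buffer_full(struct stream_interface *si)
    {
        si->flags |= SI_FL_WAIT_ROOM;
    }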