Most of the keywords don't need their own entry in the appctx union,
they just need to reuse some generic pointers, as we have been doing in
the appctx with st{0,1,2}. This patch adds p0, p1, i0 and i1 and
initializes them to zero before calling the parser. This way some of
the simplest existing keywords will be able to disappear from the
union.
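For illustration, a hedged sketch of what such generic fields might look
like inside the appctx union (the exact layout in the tree may differ):
```
/* Illustrative sketch only: general purpose fields shared by simple
 * keywords, zeroed before the parser is called. A keyword either uses
 * these or its own dedicated struct in the union, never both. */
struct appctx_sketch {
	unsigned int st0, st1, st2;      /* generic states, as before */
	union {
		struct {
			void *p0, *p1;   /* general purpose pointers */
			int i0, i1;      /* general purpose integers */
		} cli;
		/* ... keyword-specific contexts ... */
	} ctx;
};
```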
It's worth noting that this is an extension to what was initially
attempted via the "private" member that I removed a few patches ago
after failing to understand how it was supposed to be used. Here the
fact that we share the same union will force us to be stricter: the
code either uses the general purpose variables or it uses its own
fields, but not both.
This is a leftover from the cleanup campaign: the stats scope was still
initialized by the CLI instead of being initialized by the stats keyword
parsers. This should probably be backported to 1.7 to make the code more
consistent.
The appctx storage became a real mess over the years. It now contains
mostly CLI-specific parts that share the same storage as the "cli" part,
which in fact only contains the fields needed to pass an error message
to the caller, and it also has room for a few other regular applets
which may become more and more common.
This first patch moves the parts around in the union so that all
standard applet parts are grouped together and the CLI-specific ones
are grouped together. It also adds a few comments to indicate what
certain parts are used for since it's sometimes a bit confusing.
Sometimes a registered keyword will not need any specific parsing nor
initialization, so it's annoying to have to write an empty parsing
function returning zero just for this.
This patch makes it possible to automatically call a keyword's I/O
handler when the parsing function is not defined, while still allowing
a parser to set the I/O handler itself.
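A minimal sketch of the dispatch this enables, with stand-in types and
simplified logic (not the exact CLI code):
```
#include <stddef.h>

/* Stand-in types for illustration only */
struct appctx { int (*io_handler)(struct appctx *); };
struct cli_kw {
	int (*parse)(char **args, struct appctx *appctx);
	int (*io_handler)(struct appctx *appctx);
};

/* When no parse function is defined, fall back to the keyword's I/O
 * handler; a parser that installs its own handler keeps precedence. */
static void cli_dispatch_sketch(struct cli_kw *kw, char **args, struct appctx *appctx)
{
	if (!kw->parse || kw->parse(args, appctx) == 0) {
		if (!appctx->io_handler)
			appctx->io_handler = kw->io_handler;
	}
}
```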
The signals system embedded in Lua can be turned into general purpose
signals code. To reach this goal, this patch removes the Lua part of
the signals.
This is an easy job, because Lua makes no real use of the signals; I
only change two prototypes.
The struct hlua size is 128 bytes. This size is the biggest of all the
elements of the union embedded in the appctx struct. With HTTP2, it is
possible that this appctx struct will be used many times for each
connection, so the 128 bytes are a bit heavy for the global memory
consumption.
This patch replaces the embedded hlua struct with a pointer and an
associated memory pool. Now, the memory for Lua is allocated only if it
is required.
[wt: the appctx is now down to 160 bytes]
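A rough sketch of the change, assuming on-demand allocation from a
dedicated pool (the real code uses HAProxy's pool allocator, not calloc):
```
#include <stdlib.h>

/* Illustration only: the embedded struct becomes a pointer, allocated
 * on demand, so non-Lua appctx no longer carry the ~128 bytes. */
struct hlua_sketch { char ctx[128]; };

struct appctx_lua_ctx {
	struct hlua_sketch *hlua;   /* was an embedded struct, now a pointer */
};

static int hlua_ctx_prepare_sketch(struct appctx_lua_ctx *ctx)
{
	if (!ctx->hlua) {
		/* something like pool_alloc2() in the real tree */
		ctx->hlua = calloc(1, sizeof(*ctx->hlua));
		if (!ctx->hlua)
			return 0;   /* allocation failure */
	}
	return 1;
}
```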
Another small bug in "show cli sockets" made the last fix always report
process 64 due to a signedness issue in the shift operation when building
the mask.
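For illustration, a small self-contained example of the class of bug,
using a hypothetical process number (not the actual haproxy code):
```
#include <stdio.h>

int main(void)
{
	int pid = 32;                       /* hypothetical process number */

	/* "1 << (pid - 1)" is a signed int shift: for pid = 32 the result
	 * is typically INT_MIN, and widening it to 64 bits sign-extends
	 * the value, setting all the upper bits of the mask. */
	unsigned long long buggy = 1 << (pid - 1);

	/* an unsigned 64-bit shift builds the intended single-bit mask */
	unsigned long long fixed = 1ULL << (pid - 1);

	printf("buggy=%llx fixed=%llx\n", buggy, fixed);
	return 0;
}
```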
Just like the previous patch, this was the only other user of the
"private" field of the applet. It used to store a copy of the keyword's
action. Let's just put it into ->table->action and use it from there.
It also slightly simplifies the code by removing a few
pointer-to-integer casts.
We have very few users of the appctx's private field, which was
introduced prior to the split of the CLI. Unfortunately it was not
removed afterwards. This commit simply introduces hlua_cli->fcn, which
is the pointer to the Lua function that the Lua code used to store in
this private pointer.
While developing an experimental applet performing only one read per full
line, it appeared that it would be woken up for the client's close, not
read all data (missing LF), then wait for a subsequent call, and would only
be woken up on client timeout to finish the read. The reason is that we
preset SI_FL_WAIT_DATA in the stream-interface's flags to avoid a fast loop,
but there's nothing which can remove this flag until there's a read operation.
We must definitely remove it in stream_int_notify() each time we're called
with CF_SHUTW_NOW because we know there will be no more subsequent read
and we don't want an applet which keeps the WANT_GET flag to block on this.
This fix should be backported to 1.7 and 1.6 though it's uncertain whether
cli, peers, lua or spoe really are affected there.
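A minimal sketch of the fix described above, with stand-in flag values
and types (the real names live in stream_interface.h and channel.h):
```
/* Stand-in definitions for illustration only */
#define SI_FL_WAIT_DATA  0x0001
#define CF_SHUTW_NOW     0x0002
struct stream_interface { unsigned int flags; };
struct channel          { unsigned int flags; };

/* In stream_int_notify(): once CF_SHUTW_NOW is set there will be no
 * more reads, so SI_FL_WAIT_DATA must be cleared or an applet keeping
 * WANT_GET would only be woken up again on client timeout. */
static void stream_int_notify_sketch(struct stream_interface *si, struct channel *ch)
{
	if (ch->flags & CF_SHUTW_NOW)
		si->flags &= ~SI_FL_WAIT_DATA;
}
```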
Sometimes certain commits don't contain useful tracking information but
we'd still like to be able to report them. Here we implement a hash on
the author's name, e-mail and date, the subject and the body before the
first s-o-b or cherry-picked line. These parts are supposed to be
reasonably invariant across backports and are usable to compute an
invariant hash of a given commit. When we don't find ancestry in a
commit, we try this method (if -H is specified) to compare commit
hashes and we can report a match. The equivalent commit is reported
as "XXXX+?" to indicate that it's an apparent backport but we don't
know how far it goes.
This problem is already detected here:
8dc7316a6f
Another case arises. Now HAProxy sends a final message (typically with
"http-request deny"). Once the message is sent, the response channel
flags are not modified.
HAProxy executes a Lua sample-fetch for building logs, and the result
is ignored because the response flag remains set to the value
HTTP_MSG_RPBEFORE. So the Lua function hlua_check_proto() wants to
guarantee the valid state of the buffer and asks for aborting the
request.
The function check_proto() is not the right way to ensure request
consistency. The real question is not "Is the message valid?", but "Is
the validity of the message unchanged?"
This patch memorizes the parser state before entering the Lua code, and
performs a check when it comes out of the Lua code. If the parser state
has regressed, the request is aborted because the HTTP message is
degraded.
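A minimal sketch of the approach, assuming the HTTP_MSG_* parser states
are ordered so that a lower value means a less-parsed (degraded) message:
```
/* Illustration only: memorize the parser state around the Lua call and
 * abort if it regressed, instead of requiring one fixed "valid" state. */
struct http_msg_sketch { int msg_state; };

static int hlua_run_with_state_check(struct http_msg_sketch *msg)
{
	int prev_state = msg->msg_state;     /* state before entering Lua */

	/* ... run the Lua code, which may consume or rewrite the buffer ... */

	if (msg->msg_state < prev_state)
		return 0;                    /* message degraded: abort */
	return 1;                            /* validity unchanged */
}
```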
This patch should be backported to versions 1.6 and 1.7.
Fix the build using LibreSSL as the OpenSSL implementation.
Currently, LibreSSL 2.4.4 provides the same API as OpenSSL 1.0.1x,
but it redefines the OpenSSL version number as 2.0.x, breaking all
checks against OpenSSL 1.1.x.
The patch solves the issue by checking the definition of the symbol
LIBRESSL_VERSION_NUMBER when OpenSSL 1.1.x features are requested.
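The kind of guard this implies looks like the following sketch (the
actual conditions in the tree may be spread across several checks):
```
#include <openssl/opensslv.h>

/* LibreSSL reports a 2.0.x OPENSSL_VERSION_NUMBER while only providing
 * the 1.0.1-level API, so OpenSSL 1.1.x code paths must exclude it. */
#if (OPENSSL_VERSION_NUMBER >= 0x1010000fL) && !defined(LIBRESSL_VERSION_NUMBER)
/* OpenSSL >= 1.1.0 only: new-style API available */
#else
/* OpenSSL 1.0.x or LibreSSL: keep the legacy code paths */
#endif
```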
When an entity tries to get a buffer, if it cannot be allocated, for
example because the number of buffers which may be allocated per process
is limited, this entity is added to a list (called <buffer_wq>) and
waits for an available buffer.
Historically, the <buffer_wq> list was logically attached to streams
because they were the only entities likely to be added to it. Now,
applets can also be waiting for a free buffer. And with filters, we can
imagine having even more entities waiting for a buffer. So it makes
sense to have a generic list.
Anyway, with the current design there is a bug. When an applet fails to
get a buffer, it will wait. But we add the stream attached to the
applet to <buffer_wq>, instead of the applet itself. So when a buffer is
available, we wake up the stream and not the waiting applet. So it is
possible to have waiting applets that are never woken up.
So, now, <buffer_wq> is independent from streams. And we really add the
waiting entity to <buffer_wq>. To be generic, the entity is responsible
for defining the callback used to wake it up.
In addition, applets will still request an input buffer when they
become active. But they will no longer be put to sleep if no buffer is
available. It is now the responsibility of the applet I/O handler to
check whether this buffer is allocated or not. This way, an applet can
decide if this buffer is required or not and can do additional
processing if not.
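A sketch of what such a generic wait-queue entry can look like (names
are illustrative):
```
/* Illustration only: the waiting entity registers itself in <buffer_wq>
 * together with the callback that must be used to wake it up once a
 * buffer becomes available. */
struct list_sketch { struct list_sketch *n, *p; };

struct buffer_wait_sketch {
	void *target;                   /* the waiting entity (stream, applet, ...) */
	int (*wakeup_cb)(void *target); /* how to wake that entity up */
	struct list_sketch list;        /* linkage in <buffer_wq> */
};
```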
[wt: backport to 1.7 and 1.6]
A stream can be awakened for different reasons. During its processing,
it can be stopped early if no buffer is available. In this situation,
the reason why the stream was awakened is lost, because we rely on the
task state, which is reset after each processing loop.
In many cases, that's not a big deal. But it can be useful to accumulate
the task states if the stream processing is interrupted, especially if
some filters need to be called.
To be clearer, here is a simple example:
1) A stream is awakened with the reason TASK_WOKEN_MSG.
2) Because no buffer is available, the processing is interrupted, the stream
is back to sleep. And the task state is reset.
3) Some buffers become available, so the stream is awakened with the reason
TASK_WOKEN_RES. At this step, the previous reason (TASK_WOKEN_MSG) is lost.
Now, the task states are saved for a stream and reset only when the stream
processing is not interrupted. The corresponding bitfield represents the pending
events for a stream. And we use this one instead of the task state during the
stream processing.
Note that TASK_WOKEN_TIMER and TASK_WOKEN_RES are always removed because these
events are always handled during the stream processing.
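A simplified sketch of the accumulation described above, with stand-in
names and values (the real flags are the TASK_WOKEN_* values):
```
/* Stand-in values for illustration only */
#define TASK_WOKEN_TIMER  0x01
#define TASK_WOKEN_RES    0x02
struct stream_sketch { unsigned int pending_events; };

static void process_stream_sketch(struct stream_sketch *s,
                                  unsigned int task_state, int interrupted)
{
	/* accumulate wake-up reasons instead of relying on the task state,
	 * which is reset after each processing loop */
	s->pending_events |= task_state;

	/* ... stream processing, possibly interrupted by a missing buffer ... */

	if (!interrupted)
		s->pending_events = 0;   /* everything was handled */
	else
		/* timers and resource events are always handled, so they are
		 * never carried over to the next wake-up */
		s->pending_events &= ~(TASK_WOKEN_TIMER | TASK_WOKEN_RES);
}
```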
[wt: backport to 1.7 and 1.6]
<run_queue> is used to track the number of tasks in the run queue and
<run_queue_cur> is a copy used for reporting purposes. These counters
have been renamed, respectively, to <tasks_run_queue> and
<tasks_run_queue_cur>, so the naming is consistent between tasks and
applets.
[wt: needed for next fixes, backport to 1.7 and 1.6]
As for tasks, 2 counters have been added to track :
* the total number of applets : nb_applets
* the number of active applets : applets_active_queue
[wt: needed for next fixes, to backport to 1.7 and 1.6]
[wt: while it could seem suspicious, the preceding call to
dump_servers_state() indeed flushes the trash in case anything is
emitted. No backport needed though.]
When the option "http-buffer-request" is set, HAProxy sends the
"HTTP/1.1 100 Continue" response itself in order to retrieve the POST
content. When HAProxy forwards the request, it sends the body directly
after the headers. The header "Expect: 100-continue" was sent with the
headers. This header is useless because the body will be sent in all
cases, and the server response is not removed by HAProxy.
This patch removes the header "Expect: 100-continue" if HAProxy sent it
itself.
The parameter "value" of the function TXN.set_var() was not documented.
This is a regression from the commit 85d79c94a9.
This patch must be backported to 1.7.
These 2 patches add the ability to fetch the frontend/backend name in
your logic, so they can be used later to make routing decisions
(fe_name) or to take some actions based on the backend which responded
to the request (be_name).
In our case we needed a fetcher to be able to extract information we
needed from frontend name.
When doing a parallel build on multiple CPUs it's common that at the end
a few CPUs only are busy compiling very large files while the other ones
have finished. By placing the largest files first, we can ensure that in
the worst case they are present from the beginning to the end, and that
other processes are free to take smaller files. This ordering was made
based on a measurement consisting of counting the number of times a
given file appears in the build. The top ten looks like this :
145 src/cfgparse.c
131 src/proto_http.c
83 src/ssl_sock.c
74 src/stats.c
73 src/stream.c
55 src/flt_spoe.c
48 src/server.c
46 src/pattern.c
43 src/checks.c
42 src/flt_http_comp.c
Only a few files were moved; ssl_sock would need to be moved as well,
but that would not be a convenient thing to do in the makefile. This new
order allows saving about 10-15% of build time on 4 CPUs, which is nice.
(http|tcp)-(request|response) actions cannot take arguments from the
configuration file. Arguments are useful for executing the action with
a specific context.
This patch adds the possibility of passing arguments to an action. It
works exactly like sample fetches and other Lua wrappers.
Note that this patch implements a 'TODO'.
The variable names are compared only as text; the final '\0' (or the
string length) is not checked. So, the variable name "txn.internal"
matches another one called "txn.int".
This patch fixes this behavior.
It must be backported to 1.6 and 1.7.
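For illustration, a sketch of the comparison bug described above and of
its fix (hypothetical helper names):
```
#include <string.h>

/* Prefix-only comparison: looking up "txn.int" (len 7) wrongly matches
 * a stored "txn.internal" because only the common prefix is compared. */
static int var_name_equal_buggy(const char *stored, const char *name, size_t len)
{
	return strncmp(stored, name, len) == 0;
}

/* Also checking the stored name's length (i.e. its final '\0') fixes it */
static int var_name_equal_fixed(const char *stored, const char *name, size_t len)
{
	return strlen(stored) == len && memcmp(stored, name, len) == 0;
}
```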
By investigating a keep-alive issue with CloudFlare, we[1] found that
when using the 'set-cookie' option in a redirect (302) HAProxy is adding
an extra `\r\n`.
Triggering rule :
`http-request redirect location / set-cookie Cookie=value if [...]`
Expected result :
```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;
Connection: close
```
Actual result :
```
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /
Set-Cookie: Cookie=value; path=/;

Connection: close
```
This extra `\r\n` seems to be harmless with another HAProxy instance in
front of it (sanitizing) or when using a browser. But we confirmed that
the CloudFlare NGINX implementation is not able to handle it. It seems
that both 'Content-length: 0' and the extra carriage return break the
RFC (to be confirmed).
When looking into the code, this carriage return was already present in
1.3.X versions, but just before closing the connection, which was OK I
think. Then, with 1.4.X, the keep-alive feature was added and this
piece of code remained unchanged.
[1] all credit for the bug finding goes to CloudFlare Support Team
[wt: the bug was indeed present since the Set-Cookie was introduced
in 1.3.16, by commit 0140f25 ("[MINOR] redirect: add support for
"set-cookie" and "clear-cookie"") so backporting to all supported
versions is desired]
Definitions and examples for 51d.single and 51d.all have been added to
configuration.txt so they now appear in the online documentation in
addition to the README. The 51degrees-property-name-list entry has also
been updated to make it clear that multiple properties can be added.
The recent CLI reorganization managed to break these two commands
by having their parser return 1 (indicating an end of processing)
instead of 0 to indicate new calls to the io handler were needed.
Namely the faulty commits are :
69e9644 ("REORG: cli: move show stat resolvers to dns.c")
32af203 ("REORG: cli: move ssl CLI functions to ssl_sock.c")
The fix is trivial and there is no other loss of functionality. Thanks
to Dragan Dosen for reporting the issue and the faulty commits. The
backport is needed in 1.7.
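A minimal sketch of the convention involved, with simplified types (the
real parsers live in dns.c and ssl_sock.c):
```
/* Stand-in type for illustration only */
struct appctx_sketch { unsigned int st2; };

/* CLI keyword parsers return 0 to request (further) calls to the I/O
 * handler and 1 to indicate that processing is finished. The reorg made
 * these two parsers return 1, so their dumps never ran. */
static int cli_parse_show_resolvers_sketch(char **args, struct appctx_sketch *appctx)
{
	(void)args;
	/* ... set up the dump context in the appctx ... */
	return 0;   /* let the io handler run to produce the output */
}
```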