This is convenient when processing large dumps: it allows copy-pasting
values to inspect from one window to another, or directly feeding a
"show fd"/"show stream" output through sed. In order to do this, simply
pass "-" alone instead of the value and all values will be read from
stdin, one line at a time. For example, in order to quickly print the
different sets of connection flags from a "show fd" output, this is
sufficient:

  sed -ne 's/^.* cflg=\([^ ]*\).*/\1/p' | contrib/debug/flags conn -
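The input typically comes from the CLI socket; for example, assuming the
stats socket is bound at /var/run/haproxy.sock (the path is illustrative):

  echo "show fd" | socat stdio /var/run/haproxy.sock | \
      sed -ne 's/^.* cflg=\([^ ]*\).*/\1/p' | contrib/debug/flags conn -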
Consider a configuration like this:
> acl t always_true
> acl or always_false
>
> http-response set-header Foo Bar if t or t
The 'or' within the condition will be treated as a logical disjunction
and the header will be set, despite the ACL 'or' always being false.

This patch makes it an error to declare such an ACL, since it could never
work. This patch may be backported to stable releases, turning the
error into a warning only (the code was written in a way to make this
trivial). It should not break anything and might improve users' lives.
If the longjmp() call is not flagged as "noreturn", for example because the
operating system doesn't target a gcc-compatible compiler, we may get this
warning when building Lua:
src/hlua.c: In function 'hlua_panic_ljmp':
src/hlua.c:128:1: warning: no return statement in function returning non-void [-Wreturn-type]
static int hlua_panic_ljmp(lua_State *L) { longjmp(safe_ljmp_env, 1); }
^~~~~~
The function's prototype cannot be changed because it must remain compatible
with Lua's callbacks. Let's simply enclose the call inside WILL_LJMP(),
which we created exactly to signal a call to longjmp(). It lets the compiler
know we won't come back from the function and that the return statement is
not needed.
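The resulting line then looks like this (reconstructed from the description
above; the exact code may differ slightly):

  static int hlua_panic_ljmp(lua_State *L) { WILL_LJMP(longjmp(safe_ljmp_env, 1)); }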
A reg test has been added to ensure the evaluation of http-after-response
rules is functional for all kinds of responses (server, applet and internal
responses).

Two reg tests have been added to ensure the HTTP return action is functional.
One is about returning error files; the other is about returning default
responses and responses based on string or file payloads.
It is now possible to intercept HTTP messages from a lua action and reply to
clients. To do so, a reply object must be provided to the function
txn:done(). It may contain a status code with a reason, a header list and a
body. By default, if an empty reply object is used, an empty 200 response is
returned. If no reply is passed when txn:done() is called, the previous
behaviour is preserved: the transaction is terminated and nothing is returned
to the client. The same is done for TCP streams. When txn:done() is called, the
action is terminated with the code ACT_RET_DONE on success and ACT_RET_ERR on
error, interrupting the message analysis.
The reply object may be created by hand, from Lua. Or txn:reply() may be
called. In this case, the object provides some methods to fill it:
* Reply:set_status(<status>[, <reason>]) : Sets the status and optionally
  the reason. If no reason is provided, the default one corresponding to the
  status code is used.
* Reply:add_header(<name>, <value>) : Adds a header. For a given name, the
  values are stored in an ordered list.
* Reply:del_header(<name>) : Removes all occurrences of a header name.
* Reply:set_body(<body>) : Sets the reply body.
Here are some examples, all doing the same thing:

  -- ex. 1
  txn:done{
      status  = 400,
      reason  = "Bad request",
      headers = {
          ["content-type"]  = { "text/html" },
          ["cache-control"] = { "no-cache", "no-store" },
      },
      body = "<html><body><h1>invalid request</h1></body></html>"
  }

  -- ex. 2
  local reply = txn:reply{
      status  = 400,
      reason  = "Bad request",
      headers = {
          ["content-type"]  = { "text/html" },
          ["cache-control"] = { "no-cache", "no-store" }
      },
      body = "<html><body><h1>invalid request</h1></body></html>"
  }
  txn:done(reply)

  -- ex. 3
  local reply = txn:reply()
  reply:set_status(400, "Bad request")
  reply:add_header("content-type", "text/html")
  reply:add_header("cache-control", "no-cache")
  reply:add_header("cache-control", "no-store")
  reply:set_body("<html><body><h1>invalid request</h1></body></html>")
  txn:done(reply)
This function may be used to define a timeout when a lua action returns
act.YIELD. It is a way to force the script to be re-executed after a short
time, defined in milliseconds.

Unlike core.sleep() or core.yield(), the script is fully re-executed if it
returns act.YIELD. With the core functions to yield, the script is interrupted
and restarts from the yield point. When a script returns act.YIELD, it is
finished, but the message analysis is blocked on the action, waiting for its
end.
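A hedged sketch of the intended usage (the set_wake_time() name below is
illustrative, since the excerpt doesn't name the exposed binding, and the
readiness check is hypothetical):

  core.register_action("wait-ready", { "http-req" }, function (txn)
      if txn:get_var("proc.ready") == nil then
          set_wake_time(10)   -- hypothetical binding: re-run in 10ms
          return act.YIELD    -- the whole script will be re-executed
      end
      return act.CONTINUE
  end)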
ACT_RET_* codes are now available from lua scripts. The global object "act"
is used to register these codes as constants. Now, lua actions can return any
of the following codes:
* act.CONTINUE for ACT_RET_CONT
* act.STOP for ACT_RET_STOP
* act.YIELD for ACT_RET_YIELD
* act.ERROR for ACT_RET_ERR
* act.DONE for ACT_RET_DONE
* act.DENY for ACT_RET_DENY
* act.ABORT for ACT_RET_ABRT
* act.INVALID for ACT_RET_INV
For instance, the following script denies all requests:

  core.register_action("deny", { "http-req" }, function (txn)
      return act.DENY
  end)

Thus "http-request lua.deny" does exactly the same as "http-request deny".
When an action successfully finishes, the action return code (ACT_RET_*) is
now retrieved from the stack, if the first element is an integer. In addition,
in hlua_txn_done(), the value ACT_RET_DONE is pushed onto the stack before
exiting. Thus, when a script uses this function, the corresponding action
still finishes with the right code. Thanks to this change, the flag HLUA_STOP
is now useless, so it has been removed.
It is a mandatory step to allow a lua action to return any action return code.
In the http_process_res_common() analyzer, when an invalid response is
reported, the failed_resp counters must be incremented.
No need to backport this patch, except if the commit b8a5371a ("MEDIUM:
http-ana: Properly handle internal processing errors") is backported too.
It is not possible anymore to alter the HTTP parser state from lua sample
fetches or lua actions. So there is no longer any reason to check the parser
state for consistency.
It is now possible to add extra headers to the responses generated by HTTP
return actions, as long as the response is not based on an errorfile. For
return actions based on errorfiles, these extra headers are ignored. To
define an extra header, a "hdr" argument must be used, with a name and a
value. The value is a log-format string. For instance:

  http-request return status 200 hdr "x-src" "%[src]" hdr "x-dst" "%[dst]"
Thanks to this new action, it is now possible to return any response from
HAProxy, with any status code, based on an errorfile, a file or a string.
Unlike the other internal messages generated by HAProxy, these ones are not
interpreted as errors. And it is not necessary to use a file containing a
full HTTP response, although it is still possible. In addition, using a
log-format string or a log-format file, it is possible to produce responses
with dynamic content. This action can be used on the request path or the
response path. The only constraint is that the response must be smaller than
a buffer. And to avoid any warning, the buffer space reserved for header
rewriting should also be left free.

When a response is returned with a file or a string as payload, it only
contains a content-length header, and a content-type header if applicable.
Here are examples:
  http-request return content-type image/x-icon file /var/www/favicon.ico \
      if { path /favicon.ico }

  http-request return status 403 content-type text/plain \
      lf-string "Access denied. IP %[src] is blacklisted." \
      if { src -f /etc/haproxy/blacklist.lst }
This patch introduces the 'http-after-response' rules. These rules are
evaluated at the end of the response analysis, just before data forwarding,
on ALL HTTP responses: the ones coming from servers, but also all responses
generated by HAProxy. Thanks to this ruleset, it is for instance now possible
to add headers to the responses generated by the stats applet. The following
actions are supported:
* allow
* add-header
* del-header
* replace-header
* replace-value
* set-header
* set-status
* set-var
* strict-mode
* unset-var
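For instance, a rule adding a header to every response, including those
generated by HAProxy itself (the header and its value are only an example):

  http-after-response set-header Strict-Transport-Security "max-age=31536000"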
Call the http_forward_proxy_resp() function when an internal response is
returned. This concerns redirect, auth and error responses, but also
100-Continue and 103-Early-Hints responses. For errors, there is a subtlety:
if the forwarding fails, an HTTP 500 error is generated, unless it is already
an internal error. For now http_forward_proxy_resp() cannot fail, but it will
become possible once the new ruleset applied to all responses is added.
The operations performed when internal responses (redirect/deny/auth/errors)
are returned are always the same. The http_forward_proxy_resp() function is
added to group all of them into a single function.
The http_server_error() function now relies on http_reply_and_close(). Both do
almost the same actions. In addition, http_server_error() sets the error flag
and the final state flag on the stream.
The rule direction must be tested to perform the processing specific to the
request path: the intercepted_req counter should be updated if the rule is
evaluated on the frontend, and the remaining request analyzers must be
removed, but only on the request path. The rule direction must also be tested
to set the right final stream state flag.
This patch depends on the commit "MINOR: http-rules: Add a flag on redirect
rules to know the rule direction". Both must be backported to all stable
versions.
HTTP redirect rules can be evaluated on the request or the response path. So
when a redirect rule is evaluated, it is important to have this information
because some specific processing may be performed depending on the direction. So
the REDIRECT_FLAG_FROM_REQ flag has been added. It is set when applicable on the
redirect rule during the parsing.
This patch is mandatory to fix a bug on redirect rules. It must be backported to
all stable versions.
It is important not to forget to specify that the HTX response was internally
generated when a server performs a redirect. This information is used by the
H1 multiplexer to choose the right connection mode when the response is sent
to the client.
This patch must be backported to 2.1.
The first index in an HTX message is the index of the HTX block from which
the HTTP analysis must be performed. When HAProxy sends an HTTP response, on
error or redirect, this index must be reset because all pending incoming data
are considered as forwarded. For now, it is only a bug for 103-Early-Hints
responses. For other responses, it is not a problem, but it will be once the
new ruleset applied to all responses is added. For 103 responses, if the
first index is not reset and there are rewriting rules on server responses,
the generated 103 responses, if any, are evaluated by these rules too.
This patch must be backported and probably adapted, at least for 103 responses,
as far as 1.9.
The <.arg.dns.dns_opts> field in the act_rule structure is now dynamically
allocated when a do-resolve rule is parsed. This drastically reduces the
structure's size.
When an error is returned to a client, the right message is injected into the
response buffer, by http_server_error() or http_reply_and_close(). Both
ignore any data already present in the channel's buffer. While it is
legitimate to remove all input data, it is important not to remove any
outgoing data.
So now, we try to append the error message to the response buffer, only removing
input data. We rely on the channel_htx_copy_msg() function to do so. So this
patch depends on the following two commits:
* MINOR: htx: Add a function to append an HTX message to another one
* MINOR: htx/channel: Add a function to copy an HTX message in a channel's buffer
This patch must be backported as far as 1.9. However, above patches must be
backported first.
The channel_htx_copy_msg() function can now be used to copy an HTX message in a
channel's buffer. This function takes care to not overwrite existing data.
This patch depends on the commit "MINOR: htx: Add a function to append an HTX
message to another one". Both are mandatory to fix a bug in the
http_reply_and_close() function. Be careful to backport both first.
The htx_append_msg() function can now be used to append an HTX message to
another one. Either the whole message is copied, or nothing at all. If an
error occurs during the copy, all changes are rolled back.

This patch is mandatory to fix a bug in the http_reply_and_close() function.
Be careful to backport it first.
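A generic sketch of the all-or-nothing pattern described above (the types
and names are illustrative, not HAProxy's HTX implementation):

  #include <stddef.h>

  struct buf_sketch { size_t used, size; char *area; };

  /* append src to dst: either everything is copied, or dst is restored
   * to its initial state on the first failure */
  static int append_msg_sketch(struct buf_sketch *dst,
                               const struct buf_sketch *src)
  {
      size_t snapshot = dst->used;         /* state to restore on error */
      size_t i;

      for (i = 0; i < src->used; i++) {
          if (dst->used >= dst->size) {
              dst->used = snapshot;        /* roll all changes back */
              return 0;
          }
          dst->area[dst->used++] = src->area[i];
      }
      return 1;                            /* the whole message was copied */
  }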
If an error file is too big and, once converted to HTX, overflows the buffer
space reserved for header rewriting, a warning is emitted. Because a new set
of rules will be added to allow header rewriting on all responses, including
HAProxy's own, it is important to always keep this space free for error
files.
When a header rewrite fails, an internal error is triggered. But
SF_ERR_INTERNAL is documented to be the consequence of a bug that must be
reported to the dev team. So, when this happens, the SF_ERR_PRXCOND
termination flag is now set instead.
When the global structure is initialized, instead of setting tune.maxrewrite
to -1, its default value can be set immediately. This way, it is always
defined during the configuration validity check. Otherwise, the only way to
have it defined at this stage is to explicitly set it in the global section.
Since the strict rewriting mode was introduced, actions manipulating headers
(set/add/replace) always rely on the request message to test whether the
HTTP_MSGF_SOFT_RW flag is set or not. But, of course, we must only rely on
the request for http-request rules. For http-response rules, we must use the
response message.

This patch must be backported if the strict rewriting is backported too.
It's often convenient, for example, to dump two channels or two stream-ints
at once. Now all input values are decoded, and each value is recalled before
its dump when there is more than one to display.
It's often confusing to have a whole dump on the screen while only checking
for a set of task or stream flags, and appending "|grep ^chn" isn't very
convenient to repeat the operation. Instead, let's add the ability to filter
the output to certain types only, by prepending their name(s) before the
value.
The function was added in commit 6c39198b57,
but was only used within a single function, `free_dcache`, which was itself
unused.
see issue #301
see commit 10ce0c2f31 which removed
`free_dcache`
The function was changed to be static in commit
6c39198b57, but even that commit
no longer uses it. The purpose of the change vs. outright removal
is unclear.
see issue #301
As reported in issue #485, the test for !len at the end of the loop in
get_var_int() is useless since it was already done inside the loop. Actually
the code is more readable if we remove the first one, so let's do this
instead. The resulting code is exactly the same since the compiler already
optimized the test away.
In ssl_sock_load_dh_params(), if haproxy fails to apply the dhparam with
SSL_CTX_set_tmp_dh(), it applies the DH with SSL_CTX_set_dh_auto().

The problem is that we don't clear the OpenSSL errors when leaving this
function, so the certificate loading could later fail, even though this issue
only deserves a warning.

Fixes bug #483.

Must be backported to 2.1.
Given that some OSes have bash in /usr/local/bin and in order not to
give too easy an excuse to Olivier for not backporting fixes, let's
make a few scripts rely on /usr/bin/env bash instead of /bin/bash :-)
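In other words, the shebang line of these scripts becomes:

  #!/usr/bin/env bash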
We have the ability, per bind option, to ignore certain errors (CA, crt, ...),
and for this we use a 64-bit field. In issue #479, coverity reports a risk of
too large a left shift. For now, as of OpenSSL 1.1.1, the highest error value
that may be reported by X509_STORE_CTX_get_error() seems to be around 50, so
there should be no risk yet, but it's enough of a warning to add a check so
that we don't accidentally hide random errors in the future.
This may be backported to relevant stable branches.
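A sketch of the kind of guard this describes (names are hypothetical, not
the actual code): only 64 distinct error codes fit the 64-bit field, so any
larger value is refused instead of being shifted out of range.

  /* refuse to mark error codes that don't fit the 64-bit mask as ignored,
   * rather than shifting out of range and hiding random errors */
  static int ssl_ignore_err_sketch(unsigned long long *mask, unsigned int err)
  {
      if (err > 63)
          return -1;               /* too large: reject instead of hide */
      *mask |= 1ULL << err;
      return 0;
  }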
The script is simply called from the repository holding the patch to
backport, with the last branch number and the commit(s) ID(s) to send
there and it then follows the chain of "down" repos to go down one step
until it meets the indicated last one. It basically automates what we do
by hand. Example:
./scripts/backport 1.9 1c7c0d6b97
Note that it does *not* push, which still has to be done by hand after
building and testing.
This new setting in the global section alters the way HAProxy will look
for unspecified files (.ocsp, .sctl, .issuer, bundles) during the loading
of the SSL certificates.

By default, HAProxy automatically discovers a lot of files not specified
in the configuration, and you may want to disable this behavior if you
want to optimize the startup time.

This patch sets flags in global_ssl.extra_files and then checks them
before trying to load an extra file.
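For instance, assuming the new keyword is named "ssl-load-extra-files" (the
excerpt doesn't spell it out), disabling the discovery entirely might look
like:

  global
      ssl-load-extra-files none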
In __pool_get_first(), don't forget to unlock the pool lock if the pool is
empty, otherwise no writer will be able to take the lock, and as it is done
when reloading, it leads to an infinite loop on reload.
This should be backported with commit 04f5fe87d3
When using lockless pools, add a new rwlock, flush_pool. Read-lock it when
getting memory from the pool, so that concurrent accesses are still
authorized, but write-lock it when we're about to free memory, in
pool_flush() and pool_gc().

The problem is, when removing an item from the pool, we dereference it to
get the next one; however, that pointer may have been freed in the meantime,
and that could provoke a crash if the pointer has been unmapped.

It should be OK to use a rwlock, as normal operations will still be able to
access the pool concurrently, and calls to pool_flush() and pool_gc() should
be pretty rare.
This should be backported to 2.1, 2.0 and 1.9.
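A minimal sketch of the scheme (simplified types and names, not HAProxy's
actual pool code): getters take the lock for reading so they stay concurrent
with each other, while the flushing side takes it for writing so no getter
can dereference an item that is being freed.

  #include <pthread.h>

  struct pool_sketch {
      void *free_list;                     /* intrusive list: each item's
                                            * first word links to the next */
  };

  static pthread_rwlock_t flush_pool = PTHREAD_RWLOCK_INITIALIZER;

  static void *pool_get_sketch(struct pool_sketch *p)
  {
      void *item;

      pthread_rwlock_rdlock(&flush_pool);  /* concurrent getters allowed */
      item = p->free_list;                 /* the real code pops atomically */
      if (item)
          p->free_list = *(void **)item;
      pthread_rwlock_unlock(&flush_pool);
      return item;
  }

  static void pool_flush_sketch(struct pool_sketch *p)
  {
      pthread_rwlock_wrlock(&flush_pool);  /* excludes every getter */
      p->free_list = NULL;                 /* the real code walks the list
                                            * and frees each item here */
      pthread_rwlock_unlock(&flush_pool);
  }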
In pool_create(), only initialize the pool spinlock if we just created the
pool, in the event we're reusing it, there's no need to initialize it again.
In pool_flush(), we can't just set the free_list to NULL, or we may suffer
the ABA problem. Instead, use a double-width CAS and update the sequence
number.
This should be backported to 2.1, 2.0 and 1.9.
This may, or may not, be related to github issue #476.
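A minimal sketch of the double-width CAS idea (hypothetical layout, not the
actual pool code; builds with gcc, possibly needing -mcx16 or -latomic):
pairing the list head with a sequence number bumped on every update means a
head that was freed and recycled to the same address no longer passes the
compare-exchange, which defeats ABA.

  #include <stdint.h>

  struct dw_head {
      void      *free_list;                /* head of the free list */
      uintptr_t  seq;                      /* bumped on every update */
  } __attribute__((aligned(2 * sizeof(void *))));

  static void dw_flush_sketch(struct dw_head *h)
  {
      struct dw_head cur, next;

      __atomic_load(h, &cur, __ATOMIC_ACQUIRE);
      do {
          next.free_list = NULL;           /* detach the whole list */
          next.seq       = cur.seq + 1;    /* invalidates a racing CAS even
                                            * if the head pointer matches */
      } while (!__atomic_compare_exchange(h, &cur, &next, 0,
                                          __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE));

      /* cur.free_list is now privately owned: safe to walk and free */
  }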
We can't clear flags on tasklets because we don't know if they're still
present upon return (they all return NULL; maybe that could change in the
future). As a side effect, once TASK_RUNNING is set, it's never cleared
anymore, which is misleading and resulted in some incorrect flagging of bulk
tasks in the recent scheduler changes. And the only reason for setting
TASK_RUNNING on tasklets was to detect self-wakers, which is now done using a
dedicated flag. So instead of setting this flag with no opportunity to clear
it, let's simply not set it.
Now that we can more accurately watch which connection is really
being woken up from itself, it was desirable to re-adjust the CPU BW
thresholds based on measurements. New tests with 60000 concurrent
connections were run at 100 Gbps with unbounded queues and showed
the following distribution:
  scenario             TC0  TC1  TC2  observation
  -------------------+----+----+----+---------------------------
  TCP conn rate      :  32,  51,  17
  HTTP conn rate     :  34,  41,  25
  TCP byte rate      :   2,   3,  95  (2 MB objects)
  splicing byte rate :  11,   6,  83  (2 MB objects)
  H2 10k object      :  44,  23,  33  client-limited
  mixed traffic      :  18,  10,  72  2*1m+1*0: 11kcps, 36 Gbps
The H2 test experienced a huge change since it uses a persistent connection
that was accidentally flagged in the previous test. The splicing test
exhibits a higher need for short tasklets, and so does the mixed traffic
test. Given that latency mainly matters for the connection rate and H2 here,
the ratios were readjusted to 33% for TC0, 50% for TC1 and 17% for TC2,
keeping in mind that whatever is not consumed by one class is automatically
shared in equal proportions by the next one(s). This setting immediately
provided a nice improvement: with the default settings (maxpollevents=200,
runqueue-depth=200), the same ratios as above are still reported, while the
time to request "show activity" on the CLI dropped to 30-50ms. The average
loop time is around 5.7ms on the mixed traffic.
In addition, one extra stress test at 90.5 Gbps with 5100 conn/s shows
70-100ms CLI request time, with an average loop time of 17 ms.
sched->current is used to know the current task/tasklet, and is currently
only used by the panic dump code. However it turns out it was not set for
tasklets, which prevents us from using it for more purposes, despite the
panic handling code already handling this case very well. Let's make sure it
is now set.
Commit a17664d829 ("MEDIUM: tasks: automatically requeue into the bulk
queue an already running tasklet") tried to inflict a penalty on
self-requeuing tasks/tasklets, which correspond to those involved in
large, high-latency data transfers, for the benefit of all other
processing which requires a low latency. However, it turns out that
while it ought to do this on a case-by-case basis, basing itself on
the RUNNING flag isn't accurate, because this flag is never cleared on
tasklets, so we'd rather need a distinct flag to tag such tasklets.

This commit introduces TASK_SELF_WAKING to mark tasklets acting like
this. For now it's still set when TASK_RUNNING is present, but this
will have to change. The flag is kept across wakeups.