This fetch method returns the length of the response buffer, similarly
to req.len for the request. Previously, the only way to check whether at
least a given amount of data was available was to rely on
"res.payload(0,size) -m found", which was a bit tricky.
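Example (a minimal sketch; the backend name and sizes are arbitrary),
accepting a response only once at least 512 bytes have been buffered :

    backend app
        tcp-response inspect-delay 3s
        tcp-response content accept if { res.len ge 512 }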
This new action immediately closes the connection with the server
when the condition is met. The first such rule executed ends the
rules evaluation. The main purpose of this action is to force a
connection between a client and a server to be closed after an
exchange, when the application protocol would otherwise expect long
timeouts to elapse first. The goal is to eliminate idle connections
which consume significant resources on servers with certain protocols.
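Example (a sketch, assuming a protocol where any buffered response data
means the exchange is complete; names and timers are arbitrary) :

    backend app
        tcp-response inspect-delay 5s
        tcp-response content close if { res.len gt 0 }

Here the server-side connection is torn down as soon as a response is
seen, instead of lingering until the protocol's idle timeout expires.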
When a process with large stick tables is replaced by a new one and
remains present until the last connection finishes, it keeps this data
in memory for nothing since it will never be used again by incoming
connections, except during syncing with the new process. This is
especially problematic when dealing with long-session protocols such as
WebSocket, as it becomes possible to stack many old processes and eat a
lot of memory.
So the idea here is to know whether a table still needs to be synced or
not, and to purge all unused entries once the sync is complete. This
means that a few hundred milliseconds after everything has been
synchronized with the new process, only a few entries will remain
allocated (only the ones held by sessions during the restart) and all
the remaining memory will be freed.
Note that we carefully do that only after the grace period has expired so as
not to impact a possible proxy that needs to accept a few more connections
before leaving.
Doing this required adding a sync counter to the stick tables to know how
many peer sync sessions are still in progress, so that the entries are not
flushed until all synchronizations are complete.
David Berard reported that send-proxy was broken on FreeBSD and tracked the
issue to be an error returned by send(). We already had the same issue in
the past in another area which was addressed by the following commit :
0ea0cf6 BUG: raw_sock: also consider ENOTCONN in addition to EAGAIN
In fact, on Linux send() returns EAGAIN when the connection is not yet
established, while other OSes return ENOTCONN. Let's treat ENOTCONN the
same as EAGAIN for send-proxy there.
David confirmed that this change properly fixed the issue.
Another place was affected as well (health checks with send-proxy), and
was fixed.
This fix does not need any backport since it only affects 1.5.
verifyhost allows you to specify a hostname that the remote server's
SSL certificate must match. Connections that don't match will be
closed with an SSL error.
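Example (a sketch; the address, CA file and hostname are placeholders) :

    backend secure
        server app1 192.168.0.10:443 ssl verify required ca-file /etc/ssl/ca.pem verifyhost app1.example.com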
With a facility of 1 or 2 digits, the send size was wrong and bytes of
unknown value were sent.
The size was calculated using the start of the buffer and not the start
of the data, which varies with the number of digits of the facility.
This bug was reported by Samuel Stoller and Lukas Tribus.
When a request failed, the unique_id was allocated but not generated.
The string was not initialized and junk was printed in the log with %ID.
This patch changes the behavior of the unique_id: it is now also
generated when a request fails.
This bug was reported by Patrick Hemmer.
The HTTP request counter is incremented non-atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.
This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
Mathew Levett reported an issue which is a bit nasty and hard to track
down. RDP cookies contain both the IP and the port, and haproxy matches
them exactly. So if a server has no port specified (or a remapped port),
it will never match a port specified in a cookie. Better warn the user
when this is detected.
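A setup which avoids the issue simply declares explicit, non-remapped
ports on the servers, e.g. (a sketch using the standard RDP port) :

    backend rdp_farm
        mode tcp
        balance leastconn
        persist rdp-cookie
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        server ts1 192.168.1.10:3389
        server ts2 192.168.1.11:3389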
Apollon Oikonomopoulos reported a build failure on Hurd where PATH_MAX
is not defined. The only place where it is referenced is ssl_sock.c,
all other places use MAXPATHLEN instead, with a fallback to 128 when
the OS does not define it. So let's switch to MAXPATHLEN as well.
Mark Brooks reported the following issue :
"My table looks like this -
0x24a8294: key=192.168.136.10 use=0 exp=1761492 server_id=3
0x24a8344: key=192.168.136.11 use=0 exp=1761506 server_id=2
0x24a83f4: key=192.168.136.12 use=0 exp=1761520 server_id=3
0x24a84a4: key=192.168.136.13 use=0 exp=1761534 server_id=2
0x24a8554: key=192.168.136.14 use=0 exp=1761548 server_id=3
0x24a8604: key=192.168.136.15 use=0 exp=1761563 server_id=2
0x24a86b4: key=192.168.136.16 use=0 exp=1761580 server_id=3
0x24a8764: key=192.168.136.17 use=0 exp=1761592 server_id=2
0x24a8814: key=192.168.136.18 use=0 exp=1761607 server_id=3
0x24a88c4: key=192.168.136.19 use=0 exp=1761622 server_id=2
0x24a8974: key=192.168.136.20 use=0 exp=1761636 server_id=3
0x24a8a24: key=192.168.136.21 use=0 exp=1761649 server_id=2
im running the command -
socat unix-connect:/var/run/haproxy.stat stdio <<< 'clear table VIP_Name-2 data.server_id eq 2'
Id assume that the entries with server_id = 2 would be removed but its
removing everything each time."
The cause of the issue is a missing test for skip_entry when deciding
whether to clear the key or not. The test was present when only the
last node was to be removed, so removing only the first node from a
list of two always did the right thing, which explains why the bug
remained unnoticed in basic unit tests.
The bug was introduced by commit 8fa52f4e which attempted to fix a
previous issue with this feature where only the last node was removed.
This bug is 1.5-specific and does not require any backport.
Load balancing algorithms such as roundrobin, leastconn and first check
the server, once selected, with the following condition:
    if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s)))
But static-rr uses a different one in map_get_server_rr(), as below:
    if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv))
Given this difference, it is a better choice for static-rr to use the same
check condition as the other algorithms.
This change will only affect static-rr. Though all map-based hash
algorithms use the same server map as static-rr, they call another
function, map_get_server_hash(), to get the server.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
When using req.payload and res.payload to look up specific content at an
arbitrary location, we often face the problem of not knowing the input
buffer length. If the length argument is larger than the buffer length,
the function does not match, and if it is smaller, there is a risk of
missing the expected content. This is especially true when looking for
data in SOAP requests.
So let's make some provisions for scanning the whole buffer by specifying a
length of 0 bytes. This greatly simplifies the processing of random-sized
input data.
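Example (a sketch, assuming the content to find is a plain-text token
anywhere in the request buffer) :

    tcp-request inspect-delay 5s
    tcp-request content accept if { req.payload(0,0) -m sub SOAPAction }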
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
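Example (a sketch; the table name, key and values are placeholders),
setting two data types for one key in a single statement :

    echo "set table front key 10.0.0.1 data.gpc0 5 data.conn_cnt 3" | \
        socat stdio /var/run/haproxy.stat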
Since commit 654694e1, it has been possible to feed some data into
stick tables from the CLI. That commit considered that frequency
counters would only have their previous value set, so that they
progressively fade out. But this does not match any real world
use case in fact. The only reason for feeding a freq counter is
to pass some data learned outside. We certainly don't want to see
such data start to vanish immediately, otherwise it will force the
external scripts to loop very frequently to limit the losses.
So let's set the current value instead in order to guarantee that
the data remains stable over the full period, then starts to fade
out between 1* and 2* the period.
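For example (a sketch; names and values are placeholders) :

    echo "set table front key 10.0.0.1 data.http_req_rate 100" | \
        socat stdio /var/run/haproxy.stat

The entry now reports a rate of 100 for a full period before starting
to decay, instead of fading out immediately.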
sc_* sample fetches now take an optional parameter which allows looking
up the key in an alternate table. This is convenient for passing multiple
pieces of information for the same key at once (eg: have multiple gpc0
counters for the same key, or support being fed complementary information
from the CLI).
Example :
listen front
bind :8000
tcp-request content track-sc0 src table local-ip
http-response set-header src-id %[sc0_get_gpc0]+%[sc0_get_gpc0(global-ip)]
server dummy 127.0.0.1:8001
backend local-ip
stick-table size 1k type ip store gpc0
backend global-ip
stick-table size 1k type ip store gpc0
One very annoying issue when trying to extend the sticky counters beyond
the current 3 counters is that it requires a massive copy-paste of fetch
functions (the code itself no longer needs to be duplicated since the
fetch functions were merged), just so that the fetch names exist.
So let's have an alternate form like "sc_*(num)" to allow passing the
counter number as an argument without having to redefine new fetch names.
The MAX_SESS_STKCTR macro defines the number of usable sticky counters,
which defaults to 3.
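Example (a sketch; the threshold is arbitrary), showing the two
equivalent forms :

    http-request deny if { sc0_get_gpc0 gt 100 }
    http-request deny if { sc_get_gpc0(0) gt 100 }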
In preparation for more flexibility in the stick counters, make their
number configurable. It still defaults to 3, which is the minimum
accepted value. Changing the value alone is not sufficient to get more
counters; some bitfields still need to be updated and the TCP actions
need to be updated as well, but this change makes such updates easier,
which is nice for experimentation purposes.
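For example (a sketch; the build target is only illustrative), building
with ten counters could look like :

    make TARGET=linux2628 DEFINE="-DMAX_SESS_STKCTR=10"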
smp_fetch_sc0_trackers, smp_fetch_sc1_trackers and smp_fetch_sc2_trackers
were merged into a single function which relies on the fetch name to decide
what to return.
This is also a bug fix for this feature, which has never worked since its
bogus introduction by commit "2406db4 MEDIUM: counters: add
sc1_trackers/sc2_trackers" (1.5-dev10).
Instead of returning the value in the sample, it was returned as the fetch
result!
There is no need to backport this fix anyway since it's 1.5-specific and
nobody uses the feature.
smp_fetch_sc0_bytes_out_rate, smp_fetch_sc1_bytes_out_rate, smp_fetch_sc2_bytes_out_rate,
smp_fetch_src_bytes_out_rate and smp_fetch_bytes_out_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_kbytes_out, smp_fetch_sc1_kbytes_out, smp_fetch_sc2_kbytes_out,
smp_fetch_src_kbytes_out and smp_fetch_kbytes_out were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_bytes_in_rate, smp_fetch_sc1_bytes_in_rate, smp_fetch_sc2_bytes_in_rate,
smp_fetch_src_bytes_in_rate and smp_fetch_bytes_in_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_kbytes_in, smp_fetch_sc1_kbytes_in, smp_fetch_sc2_kbytes_in,
smp_fetch_src_kbytes_in and smp_fetch_kbytes_in were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_err_rate, smp_fetch_sc1_http_err_rate, smp_fetch_sc2_http_err_rate,
smp_fetch_src_http_err_rate and smp_fetch_http_err_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_err_cnt, smp_fetch_sc1_http_err_cnt, smp_fetch_sc2_http_err_cnt,
smp_fetch_src_http_err_cnt and smp_fetch_http_err_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_req_rate, smp_fetch_sc1_http_req_rate, smp_fetch_sc2_http_req_rate,
smp_fetch_src_http_req_rate and smp_fetch_http_req_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_http_req_cnt, smp_fetch_sc1_http_req_cnt, smp_fetch_sc2_http_req_cnt,
smp_fetch_src_http_req_cnt and smp_fetch_http_req_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_sess_rate, smp_fetch_sc1_sess_rate, smp_fetch_sc2_sess_rate,
smp_fetch_src_sess_rate and smp_fetch_sess_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_sess_cnt, smp_fetch_sc1_sess_cnt, smp_fetch_sc2_sess_cnt,
smp_fetch_src_sess_cnt and smp_fetch_sess_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_cur, smp_fetch_sc1_conn_cur, smp_fetch_sc2_conn_cur,
smp_fetch_src_conn_cur and smp_fetch_conn_cur were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_rate, smp_fetch_sc1_conn_rate, smp_fetch_sc2_conn_rate,
smp_fetch_src_conn_rate and smp_fetch_conn_rate were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_conn_cnt, smp_fetch_sc1_conn_cnt, smp_fetch_sc2_conn_cnt,
smp_fetch_src_conn_cnt and smp_fetch_conn_cnt were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_clr_gpc0, smp_fetch_sc1_clr_gpc0, smp_fetch_sc2_clr_gpc0,
smp_fetch_src_clr_gpc0 and smp_fetch_clr_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_inc_gpc0, smp_fetch_sc1_inc_gpc0, smp_fetch_sc2_inc_gpc0,
smp_fetch_src_inc_gpc0 and smp_fetch_inc_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_gpc0, smp_fetch_sc1_gpc0, smp_fetch_sc2_gpc0,
smp_fetch_src_gpc0 and smp_fetch_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
smp_fetch_sc0_get_gpc0, smp_fetch_sc1_get_gpc0, smp_fetch_sc2_get_gpc0,
smp_fetch_src_get_gpc0 and smp_fetch_get_gpc0 were merged into a single
function which relies on the fetch name to decide what to return.
This function aims at simplifying the prefetching of the table and entry
when using any of the session counters fetches. The principle is that the
src_* variant produces a stkctr that is used instead of the one from the
session. That way we can call the same function from all session counter
fetch functions and always have a single function to support sc[0-9]_/src_.
This function is also called directly from backend.c, so let's stop
building fake args to call it as a sample fetch, and have a lower-layer,
more generic function instead.
We're having a lot of duplicate code just because of minor variants between
fetch functions that could be dealt with if the functions had the pointer to
the original keyword, so let's pass it as the last argument. An earlier
version used to pass a pointer to the sample_fetch element, but this is not
the best solution for two reasons :
- fetch functions will solely rely on the keyword string
- some other smp_fetch_* users do not have the pointer to the original
keyword and were forced to pass NULL.
So finally we're passing a pointer to the keyword as a const char *, which
perfectly fits the original purpose.
Converts an integer supposed to contain a date since epoch to
a string representing this date in a format suitable for use
in HTTP header fields. If an offset value is specified, then
it is a number of seconds that is added to the date before the
conversion is applied. This is particularly useful to emit
Date header fields, Expires values in responses when combined
with a positive offset, or Last-Modified values when the
offset is negative.
Returns the current date as the epoch (number of seconds since 01/01/1970).
If an offset value is specified, then it is a number of seconds that is added
to the current date before returning the value. This is particularly useful
to compute relative dates, as both positive and negative offsets are allowed.
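Example (a sketch combining this fetch with the http_date converter
above), emitting an Expires header set one hour in the future :

    http-response set-header Expires %[date(3600),http_date]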
There is no more reason for having "always_true", "always_false" and "env"
in acl.c while they're the most basic sample fetch keywords, so let's move
them to sample.c where it's easier to find them.
sample_process() used to return NULL on changing data, regardless of the
SMP_OPT_FINAL flag. Let's change this so that it is now possible to
include such data in logs or HTTP headers. Also, one inconvenient
thing was that it used to always set the sample flags to zero, making
it incompatible with ACLs, which may need to call it multiple times.
Only do this for locally-allocated samples.
We now support having a comma-delimited converter list, which can start
right after the fetch keyword. The immediate benefit is that it makes it
possible to use converters in log-format expressions, for example :
set-header source-net %[src,ipmask(24)]
The parser is also slightly improved and should be more resilient against
configuration errors. Also, optional arguments in converters were mistakenly
not allowed till now, so this was fixed.
Splicing is avoided for small transfers because it's generally cheaper
to perform a couple of recv+send calls than pipe+splice+splice. This
has the consequence that the last chunk of a large transfer may be
transferred using recv+send if it's less than 4 kB. But when the pipe
is already set up, it's better to use splice() to read the pending data,
since they will get merged with the data already in the pipe. This is
what now happens every time the reader is slower than the writer.
Note that this change alone could have fixed most of the CPU hog bug,
except at the end when only the close was pending.
As explained in previous patch, we incorrectly call chk_snd() when
performing a read even if the write event is already subscribed to
poll(). This is counter-productive because we're almost sure to get
an EAGAIN.
A quick test shows that this fix halves the number of failed splice()
calls without adding any extra work on other syscalls.
This could have been tagged as an improvement, but since this behaviour
made the analysis of previous bug more complex, it still qualifies as
a fix.
Mark Janssen reported an issue in 1.5-dev19 which was introduced
in 1.5-dev12 by commit 96199b10. From time to time, randomly, the
CPU usage spikes to 100% for seconds to minutes.
A deep analysis of the traces provided shows that it happens when
waiting for the response to a second pipelined HTTP request, or
when trying to handle the received shutdown advertised by epoll()
after the last block of data. Each time, splice() was involved with
data pending in the pipe.
The cause of this was that such events could not be taken into account
by splice nor by recv and were left pending :
- the transfer of the last block of data, optionally with a shutdown
was not handled by splice() because of the validation that to_forward
is higher than MIN_SPLICE_FORWARD ;
- the next recv() call was inhibited because of the test on presence
of data in the pipe. This is also what prevented the recv() call
from handling a response to a pipelined request until the client
had ACKed the previous response.
No fewer than four different methods were tried to fix this, and the
current one was finally chosen. The principle is that if an event is not
caught by splice(), then it MUST be caught by recv(). So we remove the
condition on the pipe's emptiness to perform a recv(), and in order to
prevent recv() from being used in the middle of a transfer, we mark
supposedly full pipes with CO_FL_WAIT_ROOM, which makes sense because
the reason for stopping a splice()-based receive is that the pipe is
supposed to be full.
The net effect is that we don't wake up and sleep in loops during these
transient states. This happened much more often than expected, sometimes
for a few cycles at end of transfers, but rarely long enough to be
noticed, unless a client timed out with data pending in the pipe. The
effect on CPU usage is visible even when transferring 1 MB objects in
pipeline, where the CPU usage drops from 10 to 6% on a small machine at
medium bandwidth.
Some further improvements are needed :
- the last chunk of a splice() transfer is never done using splice due
to the test on to_forward. This is wrong and should be performed with
splice if the pipe has not yet been emptied ;
- si_chk_snd() should not be called when the write event is already being
polled, otherwise we're almost certain to get EAGAIN.
Many thanks to Mark for all the traces he cared to provide; they were
essential for understanding this issue, which was not reproducible
without them.
Only 1.5-dev is affected, no backport is needed.