This feature will be required at some point, when the stick tables are
used to enforce security measures. For instance, some visitors may be
incorrectly flagged as abusers and would ask the site admins to remove
their entry from the table.
It is now possible to dump a table's contents, with keys, expiration,
use count and various stored data, using the "show table" command on
the stats socket.
"show table" alone only shows the main table stats, while "show table
<name>" dumps the table's contents, but only when the socket level is
admin.
This patch adds support for the following session counters :
- http_req_cnt : HTTP request count
- http_req_rate: HTTP request rate
- http_err_cnt : HTTP request error count
- http_err_rate: HTTP request error rate
The equivalent ACLs have been added to check the tracked counters
for the current session or the counters of the current source.
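As a sketch, assuming ACL names following the trk_* convention used
elsewhere in this series (thresholds and periods are arbitrary) :
stick-table type ip size 200k expire 10m store http_req_rate(10s),http_err_rate(10s)
tcp-request track-counters src
# reject sources issuing more than 100 requests or 20 errors per 10s
tcp-request reject if { trk_http_req_rate gt 100 }
tcp-request reject if { trk_http_err_rate gt 20 }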
This counter may be used to track anything. Two sets of ACLs are
available to manage it: one gets its value, and the other one
increments its value and returns it. In the second case, the entry is
created if it did not exist.
Thus it is possible, for example, to mark a source as an abuser and to
keep it marked until the source stays quiet long enough for the entry
to expire :
# The rules below use gpc0 to track abusers, and reject them if
# a source has been marked as such. The track-counters statement
# automatically refreshes the entry, which will thus not expire
# until the source stays silent for a full minute. The second
# reject rule only evaluates its second condition when the first
# one is true, so gpc0 is only increased once conn_rate exceeds 100/5s.
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request track-counters src
tcp-request reject if { trk_get_gpc0 gt 0 }
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0 }
Alternatively, it is possible to let the entry expire even in the
presence of traffic, by swapping the gpc0 check and the track-counters
statement :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request track-counters src
tcp-request reject if { trk_conn_rate gt 100 } { trk_inc_gpc0 gt 0 }
It is also possible not to track counters at all, but entry lookups will
then be performed more often :
stick-table type ip size 200k expire 1m store conn_rate(5s),gpc0
tcp-request reject if { src_get_gpc0 gt 0 }
tcp-request reject if { src_conn_rate gt 100 } { src_inc_gpc0 gt 0 }
The '0' at the end of the counter's name is there so that other
counters can be added later if more of them turn out to be useful.
This function looks up a key, updates its expiration date, or creates
it if it was not found. acl_fetch_src_updt_conn_cnt() was updated to
make use of it.
These counters maintain incoming and outgoing byte rates in a stick-table,
over a period which is defined in the configuration (2 ms to 24 days).
They can be used to detect service abuse and enforce bandwidth limits
per source address, for instance, and to block a source once its rate
exceeds the limit. Since 32-bit counters are used to compute the rates,
it is important
not to use too long periods so that we don't have to deal with rates above
4 GB per period.
Example :
# block if more than 5 Megs retrieved in 30 seconds from a source.
stick-table type ip size 200k expire 1m store bytes_out_rate(30s)
tcp-request track-counters src
tcp-request reject if { trk_bytes_out_rate gt 5000000 }
# apply a 15-second pause to requests from sources exceeding 2 MB per 30s
tcp-request inspect-delay 15s
tcp-request content accept if { trk_bytes_out_rate gt 2000000 } WAIT_END
These counters maintain incoming connection rates and session rates
in a stick-table, over a period which is defined in the configuration
(2 ms to 24 days). They can be used to detect service abuse and to
enforce an accept rate per source address, for instance, and to block
a source once its rate exceeds the limit.
Example :
# block if more than 50 connections per 5 seconds from a source.
stick-table type ip size 200k expire 1m store conn_rate(5s),sess_rate(5s)
tcp-request track-counters src
tcp-request reject if { trk_conn_rate gt 50 }
# apply a 3-second pause to requests from sources exceeding 20 sessions per 5s
tcp-request inspect-delay 3s
tcp-request content accept if { trk_sess_rate gt 20 } WAIT_END
We're now able to return errors based on the validity of an argument
passed to a stick-table store data type. We also support ARG_T_DELAY
to pass delays to stored data types (eg: for rate counters).
Some data types will require arguments (eg: period for a rate counter).
This patch adds support for such arguments between parentheses in the
"store" directive of the stick-table statement. Right now only integers
are supported.
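For example, such a delay argument could be passed like this (the
periods are arbitrary) :
stick-table type ip size 200k expire 5m store conn_rate(10s),bytes_out_rate(30s)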
When a session tracks a counter, automatically increase the cumulated
connection count. This makes src_updt_conn_cnt() almost useless. In
fact it might still be used to update different tables.
Most of the time we'll want to check the connection count of the
criterion we're currently tracking. So instead of duplicating the
src* tests, let's add trk_conn_cnt to report the total number of
connections from the stick table entry currently being tracked.
A good part of the code was factored out, and we should do the same
for the other criteria.
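As a sketch (the threshold is arbitrary) :
stick-table type ip size 200k expire 1h store conn_cnt
tcp-request track-counters src
# reject sources which already opened more than 1000 connections
tcp-request reject if { trk_conn_cnt gt 1000 }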
The new "bytes_in_cnt" and "bytes_out_cnt" session counters have been
added. They're automatically updated when session counters are updated.
They can be matched with the "src_kbytes_in" and "src_kbytes_out" ACLs
which apply to the volume per source address. This can be used to deny
access to service abusers.
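As a sketch (the threshold is expressed in kilobytes and arbitrary) :
stick-table type ip size 200k expire 1h store bytes_out_cnt
tcp-request track-counters src
# reject sources which already retrieved more than 100 MB
tcp-request reject if { src_kbytes_out gt 102400 }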
The new "conn_cur" session counter has been added. It is automatically
updated upon "track XXX" directives, and the entry is touched at the
moment we increment the value so that we don't consider further counter
updates as real updates, otherwise we would end up updating upon completion,
which may not be desired. Some other event counters (eg: HTTP
requests) will probably have to be updated upon each event though.
This counter can be matched against current session's source address using
the "src_conn_cur" ACL.
It was not normal to have counter fetches in proto_tcp.c. The only
reason was that the key based on the source address was fetched there,
but now that we have split the key extraction and the data processing,
we must move that code to a more appropriate place. The session seems
OK since the counters are all manipulated from there.
Also, since we're precisely counting the number of connections with
these ACLs, we rename them src_conn_cnt and src_updt_conn_cnt. This is
not a problem right now since no version was released with these
keywords.
The "_cnt" suffix is already used by ACLs to count various data,
so it makes sense to use the same one in "conn_cnt" instead of
"conn_cum" to count cumulated connections.
This is not a problem because no version was released with those
keywords.
Thus we'll try to stick to the following rules :
xxxx_cnt : cumulated event count for criterion xxxx
xxxx_cur : current number of concurrent entries for criterion xxxx
xxxx_rate: event rate for criterion xxxx
This patch adds the ability to set a pointer in the session to an
entry in a stick table which holds various counters related to a
specific pattern.
Right now the syntax matches the target syntax and only the "src"
pattern can be specified, to track counters related to the session's
IPv4 source address. There is a special function to extract it and
convert it to a key. But the goal is to be able to later support as
many patterns as for the stick rules, and get rid of the specific
function.
The "track-counters" directive may only be set in a "tcp-request"
statement right now. Only the first one applies. Later we'll probably
support multi-criteria tracking for a single session, and we'll then
have to name the tracking pointers.
No counter is updated right now, only the refcount is. Some subsequent
patches will have to bring that feature.
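In its current form, a minimal sketch of the directive looks like this
(no counter is stored yet, only the entry's refcount is taken) :
stick-table type ip size 200k expire 5m
tcp-request track-counters src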
This ACL's count can change over the session's lifetime because it
depends on other sessions' activity. Switch it to volatile since any
session could appear while the ACLs are being evaluated.
The buffer_feed* functions that are used to send data to buffers only
supported sending contiguous chunks since they rely on memcpy(). This
patch improves on this by making them able to write in two chunks if needed.
Thus, the buffer_almost_full() function has been improved to really consider
the remaining space and not just what can be written at once.
Now we stop relying on BF_READ_DONTWAIT, which is unrelated to the
wakeups, and only consider activity to decide whether to wake the task
up instead of considering the other side's activity. It is worth noting
that the local stream interface's flags were not updated after a
call to chk_snd(), which could occasionally result in hung tasks.
This fix will avoid possible loops and uncaught events.
Sometimes it's necessary to be able to perform some "layer 6" analysis
in the backend. TCP request rules were not available there until now,
although documented in the diagram. Enable them in backends now.
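A sketch of a backend-side inspection rule (the backend name and the
rule are only examples, using the req_len ACL to wait for contents) :
backend bk_app
    tcp-request inspect-delay 5s
    acl content_present req_len gt 0
    tcp-request content accept if content_present
    tcp-request content reject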
When resetting a session's request analysers, we must take them from the
listener, not from the frontend. At the moment there is no difference
but this might change.
Since the BF_READ_ATTACHED bug was fixed, a new issue surfaced. When
a connection closes on the return path in tunnel mode while the request
input is already closed, the request analyser which is waiting for a
state change never gets woken up so it never closes the request output.
This causes sessions to remain stuck indefinitely.
One way to reliably reproduce the issue is the following (note that the
client expects a keep-alive but not the server) :
server: printf "HTTP/1.0 303\r\n\r\n" | nc -lp8080
client: printf "GET / HTTP/1.1\r\n\r\n" | nc 127.1 2500
The reason for the issue is that we don't wake the analysers up on
stream interface state changes. So the least intrusive and most reliable
thing to do is to call the analysers upon stream interface state
changes.
We just need to remember what state each series of analysers has seen
and check for the differences. In practice, that works.
A later improvement could consist in letting analysers state what
they're interested in monitoring :
- left SI's state
- right SI's state
- request buffer flags
- response buffer flags
That could help in having only one set of analysers which get called
once the status changes.
After a read, there was a condition to mandatorily wake the task
up if the BF_READ_DONTWAIT flag was set. This was wrong because
the wakeup condition in this case can be deduced from the other
ones. Another condition was put on the other side not being in
SI_ST_EST state. It is not appropriate to do this because it
causes a useless wakeup at the beginning of every first request
in case of speculative polling, due to the fact that we don't
read anything and that the other side is still in SI_ST_INI.
Also, the wakeup was performed whenever to_forward was null,
which causes an unexpected wakeup upon the first read for the
same reason. However, those two conditions are valid if and
only if at least one read was performed.
Also, the BF_SHUTR flag was tested as part of the wakeup condition,
while this one can only be set if BF_READ_NULL is set too. So let's
simplify this ambiguous test by removing the BF_SHUTR part from the
condition to only process events.
Last, the BF_READ_DONTWAIT flag was unconditionally cleared,
while sometimes there would have been no I/O. Now we only clear
it once the I/O operation has been performed, which maintains
its validity until the I/O occurs.
Finally, those fixes saved approximately 16% of the per-session
wakeups and 20% of the epoll_ctl() calls. The savings are slightly
smaller under high load because the request is often already ready
when the read() occurs. A performance increase between 2 and 5% is
expected depending on the workload.
It does not seem necessary to backport this change to 1.4, even though
it fixes some performance issues. It may later be backported if
required to fix something else because the risk of regression seems
very low due to the fact that we're more in line with the documented
semantics.
Some config parsing functions need to return composite status codes
when they rely on other functions. Let's provide a few such codes
for general use and extend them later.
Some freq counters will have to work on periods different from 1 second.
The original freq counters rely on the period being exactly one second.
The new ones (freq_ctr_period) let the user define the period in ticks,
and all computations are performed over that period. When a value is
read, it reports the number of events over that period as well.
We'll need to divide 64 bits by 32 bits with new frequency counters.
Gcc does not know when it can safely do that, but the way we build
our operations lets us be sure. So let's provide an optimised version
for that purpose.
This will be used when an I/O handler running in a stream interface
needs to establish a connection somewhere. We want the session
processor to evaluate both I/O handlers, depending on which side has
one. Doing so also requires that stream_int_update_embedded() wakes
the session up only when the other side is established or has closed,
for instance in order to handle connection errors without looping
indefinitely during the connection setup time.
The session processor still relies on BF_READ_ATTACHED being set,
though we must do whatever is required to remove this dependency.
This member will be used later when frontends are created on the
fly by some tasks. It will also be usable later if we need to
support multiple config instances for example.
When a connection is closed on a stream interface, some iohandlers
will need to be informed in order to release some resources. This
normally happens upon a shutr+shutw. It is the equivalent of the
fd_delete() call which is done for real sockets, except that this
time we release internal resources.
It can also be used with real sockets because it costs nothing extra
and might one day be useful.
Till now when a server was configured with address 0.0.0.0, the
connection was forwarded to this address which generally is intercepted
by the system as a local address, so this was completely useless.
One sometimes useful feature for outgoing transparent proxies is to
be able to forward the connection to the same address the client
requested. This patch precisely defines the meaning of 0.0.0.0 as a
server address, ensuring that the connection will be forwarded to the
client's original destination address.
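A sketch of the intended use in an outgoing transparent proxy (names
and addresses are only examples) :
listen transparent_out
    bind 192.168.0.1:3128 transparent
    # forward each connection to the address the client initially requested
    server original_dst 0.0.0.0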
configuration.txt is thorough and accurate but lacked sample configurations
clarifying both the syntax and the relations between global, defaults,
frontend, backend and listen sections. Besides, almost all examples to be found
in haproxy-en.txt or online tutorials make use of the 'listen' syntax while
'frontend/backend' is really the one to know about.
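For instance, a minimal frontend/backend pair equivalent to a simple
'listen' section could look like this (names and addresses are only
examples) :
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend public
    bind :80
    default_backend apps

backend apps
    balance roundrobin
    server app1 192.168.0.11:8080 check
    server app2 192.168.0.12:8080 check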
(cherry picked from commit 01ac10ad189b11c563eeb835733fba58e6c5271d)
(cherry picked from commit 61ba936e6858dfcf9964d25870726621d8188fb9)
[ note: the bug was finally not present in 1.5-dev but at least we
have to reset store_count to be compatible with 1.4 ]
Commit d6e9e3b5e320b957e6c491bd92d91afad30ba638 caused recently created
entries to be removed as soon as they were created, breaking stickiness.
It is not clear whether a use-after-free was possible or not in this case.
This bug was reported by Ben Congleton and narrowed down by Hervé Commowick,
both of whom also tested the fix. Thanks to them !
The quote_arg() function can be used to quote an argument or indicate
"end of line" if it's null or empty. It should be useful to more precisely
report location of problems in the configuration.
When an entry already exists, we just need to update its expiration
timer. Let's have a dedicated function for that instead of open-coding
it everywhere.
This change also ensures that an update of an existing sticky session
really leads to an update of its expiration timer, which was apparently
not the case till now. This point needs to be checked in 1.4.