# commit 3e60b11
# BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
#
# When using table_* converters, ref_cnt was incremented
# and never decremented, causing entries to never expire.
#
# The root cause appears to be that stktable_lookup_key() was called
# within all sample_conv_table_* functions, which incremented ref_cnt
# without decrementing it after completion.
#
# Added stktable_release() to the end of each sample_conv_table_*
# function and reworked the end logic to ensure that ref_cnt is
# always decremented after use.
#
# This should be backported to 1.8
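#
# Before the fix, any configuration combining a short expiration with a
# table_* converter lookup would show the symptom; a minimal sketch
# (proxy and table names here are illustrative, not taken from the patch):
#
#     frontend fe
#         bind :8080
#         stick-table size 1k expire 1ms type ip store http_req_cnt
#         http-request track-sc0 src
#         http-request deny if { src,table_http_req_cnt(fe) -m int gt 100 }
#
# Every evaluation of table_http_req_cnt() looked up the entry and took a
# reference that was never dropped, so ref_cnt could not return to zero and
# the 1ms expiration never reaped the entry.
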
#REGTEST_TYPE=bug

# MINOR: stick-tables/counters: add http_fail_cnt and http_fail_rate data types
#
# Historically we've been counting lots of client-triggered events in stick
# tables to help detect misbehaving ones, but we've been missing the same on
# the server side, and there have been repeated requests for the ability to
# count server errors per URL in order to precisely monitor the quality of
# service, or even to avoid routing requests to certain dead services, which
# is also called "circuit breaking" nowadays.
#
# This commit introduces http_fail_cnt and http_fail_rate, which work like
# http_err_cnt and http_err_rate in that they respectively count events and
# their frequency, but they only consider server-side issues such as network
# errors, unparsable and truncated responses, and 5xx status codes other
# than 501 and 505 (since these ones are usually triggered by the client).
# Note that retryable errors are purposely not accounted for, so that only
# what the client really sees is considered.
#
# With this it becomes very simple to put some protective measures in place
# to perform a redirect or return an excuse page when the error rate goes
# beyond a certain threshold for a given URL, and give the server more
# chances to recover from this condition. Typically it could look like this
# to bypass a URL causing more than 10 failures per second:
#
#     stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m)
#     http-request track-sc0 base   # track host+path, ignore query string
#     http-request return status 503 content-type text/html \
#         lf-file excuse.html if { sc0_http_fail_rate gt 10 }
#
# A more advanced mechanism using gpt0 could even implement high/low rates
# to disable/enable the service, as sketched below.
#
# Reg-test converteers_ref_cnt_never_dec.vtc was updated to test it.
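#
# A possible shape for that gpt0-based high/low mechanism, sketched with
# illustrative thresholds (this snippet is not part of the patch; it assumes
# gpt0 is also stored in the table and relies on the standard sc-set-gpt0
# action and sc0_get_gpt0 fetch):
#
#     stick-table type string len 80 size 4k expire 1m store http_fail_rate(1m),gpt0
#     http-request track-sc0 base
#     # trip the breaker above the high-water mark...
#     http-request sc-set-gpt0(0) 1 if { sc0_http_fail_rate gt 20 }
#     # ...and re-arm it once the rate falls below the low-water mark
#     http-request sc-set-gpt0(0) 0 if { sc0_get_gpt0 eq 1 } { sc0_http_fail_rate lt 5 }
#     http-request return status 503 content-type text/html \
#         lf-file excuse.html if { sc0_get_gpt0 eq 1 }
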
#REQUIRE_VERSION=2.4
varnishtest "stick-tables: Test expirations when used with table_*"
# Since some HAProxy macros are used in this file, this line is mandatory.
feature ignore_unknown_macro
# Do nothing.
server s1 {
} -start
haproxy h1 -conf {
    # Configuration file of 'h1' haproxy instance.
    defaults
        mode http
        ${no-htx} option http-use-htx
        timeout connect 5s
        timeout server  30s
        timeout client  30s

    frontend http1
        bind "fd@${my_frontend_fd}"
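        # The tiny "expire 1ms" is the point of the test: once every converter
        # below releases its reference, the tracked entry must be able to expire.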
        stick-table size 1k expire 1ms type ip store conn_rate(10s),http_req_cnt,http_err_cnt,http_fail_cnt,http_req_rate(10s),http_err_rate(10s),http_fail_rate(10s),gpc0,gpc0_rate(10s),gpt0
        http-request track-sc0 req.hdr(X-Forwarded-For)
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),in_table(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_bytes_in_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_bytes_out_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_conn_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_conn_cur(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_conn_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_gpt0(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_gpc0(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_gpc0_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_err_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_err_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_fail_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_fail_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_req_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_req_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_kbytes_in(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_kbytes_out(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_server_id(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_sess_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_sess_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_trackers(http1) -m int lt 0 }
} -start
client c1 -connect ${h1_my_frontend_fd_sock} {
    txreq -url "/" -hdr "X-Forwarded-For: 127.0.0.1"
    rxresp
    expect resp.status == 503
} -run
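
# With the converters no longer leaking references, the entry for 127.0.0.1
# (table "expire 1ms") must either have expired already (used:0) or still be
# listed with use=0; the regex below accepts both outcomes.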
haproxy h1 -cli {
send "show table http1"
    expect ~ "table: http1, type: ip, size:1024, used:(0|1\\n0x[0-9a-f]*: key=127\\.0\\.0\\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 gpc0_rate\\(10000\\)=0 conn_rate\\(10000\\)=1 http_req_cnt=1 http_req_rate\\(10000\\)=1 http_err_cnt=0 http_err_rate\\(10000\\)=0 http_fail_cnt=0 http_fail_rate\\(10000\\)=0)\\n$"
} -wait