This adds 3 new commands to the CLI:
  "enable dynamic-cookie backend <backend>", which enables dynamic cookies
  for the specified backend
  "disable dynamic-cookie backend <backend>", which disables dynamic cookies
  for the specified backend
  "set dynamic-cookie-key backend <backend>", which lets one change the
  dynamic cookie secret key for the specified backend
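A hedged usage sketch over the stats socket (the socket path and backend name
reuse the examples elsewhere in this series; the trailing key argument of
"set dynamic-cookie-key" is assumed here):
  $ echo "enable dynamic-cookie backend bk_web" | socat /var/run/haproxy.stat stdio
  $ echo "set dynamic-cookie-key backend bk_web mynewkey" | socat /var/run/haproxy.stat stdio
  $ echo "disable dynamic-cookie backend bk_web" | socat /var/run/haproxy.stat stdio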
This adds a new "dynamic" keyword for the cookie option. If set, a cookie
will be generated for each server (assuming one isn't already provided on
the "server" line), from the IP of the server, the TCP port, and a secret
key provided. To provide the secret key, a new keyword has been added,
"dynamic-cookie-key", for backends.
Example:
  backend bk_web
    balance roundrobin
    dynamic-cookie-key "bla"
    cookie WEBSRV insert dynamic
    server s1 127.0.0.1:80 check
    server s2 192.168.56.1:80 check
This is a first step to be able to dynamically add and remove servers,
without modifying the configuration file, and still have all the load
balancers redirect the traffic to the right server.
Provide a way to generate session cookies, based on the IP address of the
server, the TCP port, and a secret key provided.
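As an illustration only, here is a conceptual sketch in Python of how such a
cookie value could be derived; the exact hash function, byte layout and
encoding used by haproxy are not specified here and may differ:

  import hashlib
  import socket
  import struct

  def dynamic_cookie(secret_key, server_ip, server_port):
      # hash the secret key together with the server's packed IPv4 address
      # and TCP port, and return a hex string usable as a cookie value
      packed = socket.inet_aton(server_ip) + struct.pack(">H", server_port)
      return hashlib.sha1(secret_key.encode() + packed).hexdigest()

  # servers from the backend example above
  print(dynamic_cookie("bla", "127.0.0.1", 80))
  print(dynamic_cookie("bla", "192.168.56.1", 80))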
Matthias Fechner reported a regression in 1.7.3 brought by the backport
of commit 819efbf ("BUG/MEDIUM: tcp: don't poll for write when connect()
succeeds"), causing some connections to fail to establish once in a while.
While this commit itself was a fix for a bad sequencing of connection
events, it in fact unveiled a much deeper bug going back to the connection
rework era in v1.5-dev12: 8f8c92f ("MAJOR: connection: add a new
CO_FL_CONNECTED flag").
It's worth noting that in a lab reproducing an environment similar to
Matthias', only about 1 in every 19000 connections exhibits this behaviour,
making the issue not so easy to observe. A trick to make the problem
more observable consists in disabling non-blocking mode on the socket
before calling connect() and re-enabling it later, so that connect()
always succeeds. Then it becomes 100% reproducible.
The problem is that this CO_FL_CONNECTED flag is tested after deciding to
call the data layer (typically the stream interface but might be a health
check as well), and that the decision to call the data layer relies on a
change of one of the flags covered by the CO_FL_CONN_STATE set, which is
made of CO_FL_CONNECTED among others.
Before the fix above, this bug couldn't appear with TCP but it could
appear with Unix sockets. Indeed, connect() was always considered
blocking so the CO_FL_WAIT_L4_CONN connection flag was always set, and
polling for write events was always enabled. This used to guarantee that
the conn_fd_handler() could detect a change among the CO_FL_CONN_STATE
flags.
Now with the fix above, if a connect() immediately succeeds for a non-SSL
connection with send-proxy enabled, and there are no data in the buffer
(thus TCP mode only), the CO_FL_WAIT_L4_CONN flag is not set, the lack of
data in the buffer doesn't enable polling flags for the data layer, the
CO_FL_CONNECTED flag is not set due to send-proxy still being pending,
and once send-proxy is done, its completion doesn't cause the data layer
to be woken up, because CO_FL_CONNECTED is still not present and the
CO_FL_SEND_PROXY flag is not watched in CO_FL_CONN_STATE.
Then no progress is made when data are received from the client (and
attempted to be forwarded), because a CF_WRITE_NULL (or CF_WRITE_PARTIAL)
flag is needed for the stream-interface state to turn from SI_ST_CON to
SI_ST_EST, allowing ->chk_snd() to be called when new data arrive. And
the only way to set this flag is to call the data layer of course.
After the connect timeout, the connection gets killed, and if in the
meantime some data have accumulated in the buffer, the retry will succeed.
This patch fixes this situation by simply placing the update of
CO_FL_CONNECTED where it should have been, before the check for a flag
change needed to wake up the data layer and not after.
This fix must be backported to 1.7, 1.6 and 1.5. Versions not having
the patch above are still affected for unix sockets.
Special thanks to Matthias Fechner who provided a very detailed bug
report with a bisection designating the faulty patch, and to Olivier
Houchard for providing full access to a pretty similar environment where
the issue could first be reproduced.
This may be used to output the JSON schema which describes the output of
show info json and show stats json.
The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.
e.g.:
$ echo "show schema json" | socat /var/run/haproxy.stat stdio | \
python -m json.tool
The implementation does not generate the schema. Some consideration could
be given to integrating the output of the schema with the output of
typed and json info and stats. In particular the types (u32, s64, etc...)
and tags.
A sample verification of show info json and show stats json using
the schema is as follows. It uses the jsonschema python module:
cat > jschema.py << __EOF__
import json
from jsonschema import validate
from jsonschema.validators import Draft3Validator
with open('schema.txt', 'r') as f:
    schema = json.load(f)
Draft3Validator.check_schema(schema)
with open('instance.txt', 'r') as f:
    instance = json.load(f)
validate(instance, schema, Draft3Validator)
__EOF__
$ echo "show schema json" | socat /var/run/haproxy.stat stdio > schema.txt
$ echo "show info json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
$ echo "show stats json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
Signed-off-by: Simon Horman <horms@verge.net.au>
Add a json parameter to show (info|stat) which will output information
in JSON format. A follow-up patch will add a JSON schema which describes
the format of the JSON output of these commands.
The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.
e.g.:
$ echo "show info json" | socat /var/run/haproxy.stat stdio | \
python -m json.tool
STAT_STARTED has been added in order to track if show output has begun or
not. This is used in order to allow the JSON output routines to only insert
a "," between elements when needed. I would value any feedback on how this
might be done better.
Signed-off-by: Simon Horman <horms@verge.net.au>
Given that all call places except one had to set txn->status prior to
calling http_server_error(), it's simpler to make this function rely
on txn->status than have it store it from an argument.
This commit removes the second argument (msgnum) from http_error_message and
changes http_error_message to use s->txn->status/http_get_status_idx for
mapping status codes from 200..504 to HTTP_ERR_200..HTTP_ERR_504 (enum).
This is needed for http-request tarpit deny_status commit.
It adds "hostname" as a new sample fetch. It does exactly the same as
"%H" in a log format except that it can be used outside of log formats.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
2 places were using an open-coded implementation of this function to count
available servers. Note that the avg_queue_size() fetch didn't check that
the proxy was in STOPPED state so it would possibly return a wrong server
count here but that wouldn't impact the returned value.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
This is like the nbsrv() sample fetch function except that it works as
a converter so it can count the number of available servers of a backend
name retrieved using a sample fetch or an environment variable.
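A hedged configuration sketch, assuming the backend name comes from an
environment variable as described above (variable name illustrative):
  http-request redirect location /maintenance if { env(ACTIVE_BACKEND),nbsrv eq 0 }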
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
The said form of the CLI command didn't return anything since commit
ad8be61c7.
This fix needs to be backported to 1.7.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
The memory is released by cli_release_mlook, which also properly sets the
pointer to NULL. This was introduced with a big code reorganization
involving moving to the new keyword registration form in commit ad8be61c7.
This fix needs to be backported to 1.7.
Signed-off-by: Nenad Merdanovic <nmerdan@haproxy.com>
An invalid OCSP file (for example an empty one, which can be used to allow
the OCSP response to be set dynamically later) causes errors that are
placed on the OpenSSL error stack. Those errors are not cleared, so
anything that checks this stack later will fail.
The following configuration:
  bind :443 ssl crt crt1.pem crt crt2.pem
with the following files:
  crt1.pem
  crt1.pem.ocsp (an empty one)
  crt2.pem.rsa
  crt2.pem.ecdsa
will fail to load.
This patch should be backported to 1.7.
Surprisingly, http-request, http-response, block, redirect, and capture
rules did not cause a warning to be emitted when used in a TCP proxy, so
let's fix this.
This patch may be backported to older versions as it helps spotting
configuration issues.
As its name says, this statement customizes the maximum allowed size for frames
exchanged between HAProxy and SPOAs. It should be greater than or equal to 256
and less than or equal to (tune.bufsize - 4) (4 bytes are reserved to the frame
length).
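A hedged example, assuming the statement is spelled "max-frame-size" inside the
spoe-agent section (with the default tune.bufsize of 16384, the highest
permitted value would be 16380):
  spoe-agent my-agent
      max-frame-size 16380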
This option can be used to enable or to disable (prefixing the option line with
the "no" keyword) the sending of fragmented payload to agents. By default, this
option is enabled.
These options can be used to enable or to disable (prefixing the option line
with the "no" keyword), respectively, pipelined and asynchronous exchanged
between HAproxy and agents. By default, pipelining and async options are
enabled.
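A hedged example, assuming the options are named "pipelining" and "async" in
the spoe-agent section:
  spoe-agent my-agent
      option pipelining
      no option async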
Now, when a payload is fragmented, the first frame must define the frame type
and the following ones must use the special type SPOE_FRM_T_UNSET. This way, it is
easy to know if a fragment is the first one or not. Of course, all frames must
still share the same stream-id and frame-id.
Update SPOA example accordingly.
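As an illustration of the framing described above, a payload split in three
fragments would be carried as (fields simplified):
  frame 1: type=NOTIFY, stream-id=S, frame-id=F, FIN=0   (first fragment)
  frame 2: type=UNSET,  stream-id=S, frame-id=F, FIN=0   (middle fragment)
  frame 3: type=UNSET,  stream-id=S, frame-id=F, FIN=1   (last fragment)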
If an agent wants to abort the processing of a fragmented NOTIFY frame before
receiving all fragments, it can send an ACK frame at any time with the ABORT bit
set (and of course, the FIN bit too).
Besides this change, the SPOE_FRM_ERR_FRAMEID_NOTFOUND error flag has been added.
It is set when an unknown ACK frame is received.
Now, agents can announce the support for the "fragmentation" capability during
the HELLO handshake. HAProxy will never announce it because fragmented frame
decoding is not implemented yet. But it can send such frames. So, if an
agent supports this capability, payloads exceeding the frame size will be
split. A fragmented payload consists of several frames with the FIN bit clear
and terminated by a single frame with the FIN bit set. All these frames must
share the same STREAM-ID and FRAME-ID.
Note that an unfragmented payload consists of a single frame with the FIN bit
set. And HELLO and DISCONNECT frames cannot be fragmented. This means that only
NOTIFY frames can transport fragmented payload for now.
The max_frame_size value is negotiated between HAProxy and SPOE agents during
the HELLO handshake. It is a per-connection value. Different SPOE agents can
choose to use different max_frame_size values. So, now, we keep the minimum of
all known max_frame_size. This minimum is updated when a new connection to a
SPOE agent is opened and when a connection is closed. We use this value as a
limit to encode messages in NOTIFY frames.
This happens when the buffer allocation failed. In the SPOE context, buffers are
allocated by streams and SPOE applets at different times. First, by streams,
when messages need to be encoded before sending them in a NOTIFY frame. Then,
by SPOE applets, when an ACK frame is received.
The first case works as expected, we wake up the stream. But for the second one,
we must wake up the waiting SPOE applet.
Now, when option "set-on-error" is enabled, we set a status code representing
the error that occurred instead of "true". Values under 256 represent an
error coming from the engine. Values of 256 and above report a SPOP error. In
that case, to retrieve the right SPOP status code, you must subtract 256 from
this value. Here are the possible values:
* 1: a timeout occurred during the event processing.
* 2: an error was triggered during the resources allocation.
* 255: an unknown error occurred during the event processing.
* 256+N: a SPOP error occurred during the event processing.
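A hedged configuration sketch, assuming "set-on-error" takes the name of the
variable to set in the spoe-agent section (variable name illustrative):
  spoe-agent my-agent
      option set-on-error err
The variable then holds one of the values listed above instead of "true".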
Now, as for peers, we use an opaque pointer to store information related to the
SPOE filter in the appctx structure. This information is now stored in a
dedicated structure (spoe_appctx) and allocated, using a pool, when the applet
is created. This removes the dependency between applets and the SPOE filter and
avoids needlessly inflating the appctx structure.
Now, we can use the option '-c' to enable the support of a capability. By
default, all capabilities are disabled. For example:
$> ./spoa -c async -c pipelining
In addition, it is also possible to set the maximum frame size supported by your
agent (-m) and to add a delay in frame processing (-t).
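A hedged invocation combining these options (the frame size and delay values,
and the unit expected by -t, are illustrative only):
  $> ./spoa -c async -c pipelining -m 16384 -t 10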
Now, HAProxy and agents can announce the support for "pipelining" and/or "async"
capabilities during the HELLO handshake. For now, HAProxy always announces the
support of both. In addition, in its HELLO frames, HAProxy adds the "engine-id"
key. It is a unique string that identifies a SPOE engine.
The "pipelining" capability is the ability for a peer to decouple NOTIFY and ACK
frames. This is a symmetrical capability. To be used, it must be supported by
HAProxy and agents. Unlike HTTP pipelining, the ACK frames can be sent in any
order, but always on the same TCP connection used for the corresponding NOTIFY
frame.
The "async" capability is similar to the pipelining, but here any TCP connection
established between HAProxy and the agent can be used to send ACK frames. if an
agent accepts connections from multiple HAProxy, it can use the "engine-id"
value to group TCP connections.
The array of pointers passed to sample_parse_expr was not really an array but a
pointer to a pointer, so it can easily lead to a segfault during the
configuration parsing.
During a soft stop, we need to wakeup all SPOE applets to stop them. So we loop
on all proxies, and for each proxy, on all filters. But we must be sure to only
handle SPOE filters here. To do so, we use a specific id.
Use SSL_set_ex_data/SSL_get_ex_data standard API call to store capture.
We need to avoid using internal structures and undocumented calls, to try to
keep the beast under control and limit painful compatibility issues.
Bug introduced with "removes SSL_CTX_set_ssl_version call and cleanup CTX
creation": ssl_sock_new_ctx is called before the whole bind line is parsed.
The fix consists in separating the use of default_ctx as the initialization
context of the SSL connection via bind_conf->initial_ctx. initial_ctx contains
all the necessary parameters before performing the selection of the CTX:
default_ctx is processed like the other ctxs, without unnecessary parameters.
These new sample fetches capture the cipher list offered by the client
SSL connection during the client-hello phase. This is useful for
fingerprinting the SSL connection.
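A hypothetical usage sketch, assuming one of the added fetches exposes the
cipher list as a hex string under a name such as ssl_fc_cipherlist_hex
(header name illustrative):
  http-request set-header X-SSL-Cipherlist %[ssl_fc_cipherlist_hex]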
BoringSSL doesn't support SSL_CTX_set_ssl_version. To remove this call, the
CTX creation is cleaned up to clarify what is happening. SSL_CTX_new is used to
match the original behavior, in order: force-<method> according to the method
version, then the default method with no-<method> options.
The OPENSSL_NO_SSL3 error message is now in force-sslv3 parsing (as for force-tls*).
For CTX creation in the bind environment, all CTX settings related to the
initial ctx are aggregated into the ssl_sock_new_ctx function for clarity.
Tests with crt-list have shown that server_method, options and mode are
linked to the initial CTX (default_ctx): all ssl-options are linked to each
bind line and must be removed from crt-list.
Extract from RFC 6066:
"If the server understood the ClientHello extension but does not recognize
the server name, the server SHOULD take one of two actions: either abort the
handshake by sending a fatal-level unrecognized_name(112) alert or continue the
handshake. It is NOT RECOMMENDED to send a warning-level unrecognized_name(112)
alert, because the client's behavior in response to warning-level alerts is
unpredictable. If there is a mismatch between the server name used by the
client application and the server name of the credential chosen by the server,
this mismatch will become apparent when the client application performs the
server endpoint identification, at which point the client application will have
to decide whether to proceed with the communication."
Thanks to Roberto Guimaraes for the bug report, spotted with openssl-1.1.0.
This fix must be backported.
SSL verify and client_CA inherits from the initial ctx (default_ctx).
When a certificate is found, the SSL connection environment must be replaced by
the certificate configuration (via SSL_set_verify and SSL_set_client_CA_list).
This patch uses BoringSSL's callback to analyse the ClientHello before any
handshake to extract key signature capabilities.
The certificate with the better signature (ECDSA before RSA) is chosen
transparently, if the client can support it. RSA and ECDSA certificates can
be declared in a row (in any order). This makes it possible to set
different ssl and filter parameters with crt-list.
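A hedged configuration sketch, reusing the multi-cert naming shown earlier in
this series (site.pem.rsa and site.pem.ecdsa present on disk, names
illustrative):
  bind :443 ssl crt site.pem
A client advertising ECDSA support would transparently get the ECDSA
certificate; the others would get the RSA one.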
In 1.4-dev5, when we started to implement keep-alive, commit a9679ac
("[MINOR] http: make the conditional redirect support keep-alive")
added a specific check to support keep-alive on redirect rules, but
only when the location would start with a "/", indicating the client
would come back to the same server.
But nowadays most applications put http:// or https:// in front of
each and every location, and continuing to perform a close there is
counter-productive, especially when multiple objects are fetched at
once from a same origin which redirects them to the correct origin
(eg: after an http to https forced upgrade).
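As an illustration, a typical rule that now keeps the connection alive
instead of closing it (rule shown as an example only):
  redirect prefix https://www.example.com code 301 if !{ ssl_fc }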
It's about time to get rid of this old trick as it causes more harm
than good at an era where persistent connections are omnipresent.
Special thanks to Ciprian Dorin Craciun for providing convincing
arguments with a pretty valid use case and proposing this draft
patch which addresses the issue he was facing.
This change, although not exactly a bug fix, should be backported
to 1.7 to adapt better to existing infrastructure.