With strict-sni, an SSL connection will fail if no certificate matches.
With no certificate on the bind line, all SSL connections fail, which is
consistent with the strict-sni behavior. When 'generate-certificates' is
set, 'strict-sni' is never used. When 'strict-sni' is set, default_ctx
is never used. So allow starting without a certificate, but only in this
case.

The use case is to start haproxy with SSL before the customer starts to
use certificates, typically with 'crt' pointing to an empty directory
and the 'strict-sni' parameter.
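Such a setup could look like this (a minimal sketch; the directory path
is hypothetical):

    frontend fe_ssl
        bind :443 ssl crt /etc/haproxy/certs/ strict-sni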
Since commit f6b37c67 ["BUG/MEDIUM: ssl: in bind line, ssl-options after
'crt' are ignored."], certificate generation is broken.

To generate a certificate, we retrieved the private key of the default
certificate using the SSL object. But since commit f6b37c67, the SSL
object is created with a dummy certificate (initial_ctx).

So to fix the bug, we directly use the default certificate stored in the
bind_conf structure, via the SSL_CTX_get0_privatekey function. Because
this function exists neither in OpenSSL < 1.0.2 nor in LibreSSL, it has
been added to openssl-compat.h with the appropriate #ifdef.
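One possible shape for such a shim (a hedged sketch, not necessarily the
exact code added; it borrows the key from a temporary SSL object, and
the key stays valid because the context keeps its own reference):

    #if (OPENSSL_VERSION_NUMBER < 0x10002000L) || defined(LIBRESSL_VERSION_NUMBER)
    /* emulate SSL_CTX_get0_privatekey(): get0 semantics, the returned
     * key remains owned by <ctx> */
    static inline EVP_PKEY *SSL_CTX_get0_privatekey(SSL_CTX *ctx)
    {
        SSL *tmp = SSL_new(ctx);
        EVP_PKEY *key = tmp ? SSL_get_privatekey(tmp) : NULL;

        SSL_free(tmp);
        return key;
    }
    #endif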
This one dumps the fdtab for all active FDs with some quickly interpretable
characters to read the flags (like upper case=set, lower case=unset). It
can probably be improved to report fdupdt[] and/or fdinfo[] but at least
it provides a good start and makes it possible to see how FDs are seen.
When the fd owner
is a connection, its flags are also reported as it can help compare with the
polling status, and the target (fe/px/sv) as well. When it's a listener, the
listener's state is reported as well as the frontend it belongs to.
The logical operations were inverted, so the enable/disable operations
did the opposite of what was intended.

The bug has been present since 1.7, so the fix should be backported
there.
Commits 2ab8867 ("MINOR: ssl: compare server certificate names to the
SNI on outgoing connections") and 96c7b8d ("BUG/MINOR: ssl: Fix check
against SNI during server certificate verification") made it possible
to check that the server's certificate matches the name presented in
the SNI field. While it solves a class of problems, it opens another
one which is that by failing such a connection, we'll retry it and put
more load on the server. It can be a real problem if a user can trigger
this issue, which is what will very often happen when the SNI is forwarded
from the client to the server.
This patch solves this by detecting that this very specific hostname
verification failed and that the hostname was provided using SNI, and
then it simply disables retries and the failure is immediate.
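The shape of the fix could look like this (a minimal sketch; the exact
location and field names in the stream-interface code are assumptions):

    /* the verify callback flagged an SNI mismatch: retrying against
     * the same server is pointless, so fail immediately */
    if (conn->err_code == CO_ER_SSL_MISMATCH_SNI)
        si->conn_retries = 0;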
At the time of writing this patch, the previous patches were not backported
(yet), so no backport is needed for this one unless the aforementioned
patches are backported as well. This patch requires previous patches
"BUG/MINOR: ssl: make use of the name in SNI before verifyhost" and
"MINOR: ssl: add a new error code for wrong server certificates".
If a server presents an unexpected certificate to haproxy, that is, a
certificate that doesn't match the expected name as configured in
verifyhost or as requested using SNI, we want to store that precious
information. Fortunately we have access to the connection in the
verification callback so it's possible to store an error code there.
For this purpose we use CO_ER_SSL_MISMATCH_SNI (for when the cert name
didn't match the one requested using SNI) and CO_ER_SSL_MISMATCH for
when it doesn't match verifyhost.
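A minimal self-contained sketch of the idea (stand-in types, not
haproxy's actual structures):

    /* which name the server's certificate failed to match */
    enum conn_err { CO_ER_NONE, CO_ER_SSL_MISMATCH, CO_ER_SSL_MISMATCH_SNI };

    struct conn_info { enum conn_err err_code; };

    /* called from the verify callback when a name mismatch is detected */
    static void record_mismatch(struct conn_info *conn, int name_from_sni)
    {
        conn->err_code = name_from_sni ? CO_ER_SSL_MISMATCH_SNI
                                       : CO_ER_SSL_MISMATCH;
    }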
Commit 2ab8867 ("MINOR: ssl: compare server certificate names to the SNI
on outgoing connections") introduced the ability to check server cert
names against the name provided in the SNI, but verifyhost was kept as a
way to force the name to check against. This was a mistake, because:
- if an SNI is used, any static hostname in verifyhost will be wrong;
  worse, if the certificate matches verifyhost but not the SNI, the
  server presented the wrong certificate;
- there's no way anymore to have a default name to check against for
  health checks, because the point above mandates the removal of the
  verifyhost directive.
This patch reverses the ordering of the check: whenever SNI is used, the
name it provides always has precedence (i.e. the server must always
present a certificate that matches the requested name). And if no SNI is
provided, then verifyhost is used, and will be configured to match the
server's default certificate name. This will work both when SNI is not
used and for health checks.
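The resulting precedence can be sketched as follows (hypothetical helper
names, for illustration only):

    /* the name from the SNI, when set, always wins; verifyhost is only
     * the fallback used when no SNI was sent (e.g. health checks) */
    const char *name = sni ? sni : verifyhost;

    if (name && !cert_matches_name(cert, name))
        return fail_handshake(conn);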
If commit 2ab8867 is backported to 1.7 and/or 1.6, this one must be
backported too.
This patch fixes commit 2ab8867 ("MINOR: ssl: compare server certificate
names to the SNI on outgoing connections").
When we check the certificate sent by a server, in the verify callback, we get
the SNI from the session (SSL_SESSION object). In OpenSSL, the
tlsext_hostname value for this session is copied from the SSL connection
(SSL object). But the copy is
done only if the "server_name" extension is found in the server hello
message. This means the server has found a certificate matching the client's
SNI.
When the server returns a default certificate not matching the client's SNI, it
doesn't set any "server_name" extension in the server hello message. So no SNI
is set on the SSL session and SSL_SESSION_get0_hostname always returns NULL.
To fix the problem, we get the SNI directly from the SSL connection. It
is always defined with the value set by the client.
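With OpenSSL this boils down to reading the name from the SSL object
itself (sketch):

    /* the client-side SSL object always carries the SNI we sent, even
     * when the server answered with a default certificate */
    const char *sni = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);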
If commit 2ab8867 is backported to 1.7 and/or 1.6, this one must be
backported too.
Note: it's worth mentioning that by making the SNI check work, we
introduce another problem by which failed SNI checks can cause
long connection retries on the server, and in certain cases the
SNI value used comes from the client. So this patch series must
not be backported until this issue is resolved.
Adis Nezirovic reports:
While playing with Lua API I've noticed that core.proxies attribute
doesn't return all the proxies, more precisely the ones with same names
(e.g. for frontend and backend with the same name it would only return
the latter one).
So, this patch fixes this problem without breaking the current behaviour.
We have two cases of proxies with frontend/backend capabilities:

The first case is the 'listen' proxy. This is not a problem because the
proxy object processes these two entities as a single one, which is the
expected behavior. For this case the "proxies" list works fine.

The second case is a frontend and a backend sharing the same name. I
think this case is allowed for compatibility with the 'listen'
declaration. These two proxies with the same name but different
capabilities must not be processed through the same object (they have
different statistics, different orders). In fact, one of the two objects
crushes the other one, which is no longer accessible.

To fix this problem, this patch adds two lists, "frontends" and
"backends". Each of these lists contains the specialized proxies, with
the caveat that 'listen' proxies are declared in both lists.
By Adis Nezirovic:
This is just for convenience and uniformity. Proxy.servers/listeners
returns a table/hash of objects with names as keys, but when I want to
pass such an object to some other Lua function, for example, I have to
manually copy the name (or wrap the object), since the object itself
doesn't expose name info.
This patch simply adds the proxy name as member of the proxy object.
The recent scheduler change broke the Lua co-sockets due to
hlua_applet_wakeup() returning NULL after waking the applet up. With the
previous scheduler, returning NULL was a way to do nothing on return.
With the new one it keeps TASK_RUNNING set, causing all new
notifications to end up in t->pending_state instead of t->state, and
preventing the task from being added to the run queue again, so it's
never woken up anymore.

The applet keeps waking up, causing hlua_socket_handler() to do nothing
new; then si_applet_wake_cb() calls stream_int_notify() to try to wake
the task up, which it can't do due to the TASK_RUNNING flag, and then
decides that, since the associated task is not in the run queue, it
needs to call stream_int_update_applet() to propagate the update. This
last one finds that the applet needs to be woken up to deal with the
last reported events and calls appctx_wakeup() again. Previously, this
situation didn't exist because the task was always added to the run
queue despite the TASK_RUNNING flag.
By returning the task instead in hlua_applet_wakeup(), we can ensure its
flag is properly cleared and the task is requeued if needed or just sits
waiting for new events to happen.
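The fix thus boils down to this shape (a simplified sketch, not the full
function):

    /* returning the task (instead of NULL) lets the scheduler clear
     * TASK_RUNNING and requeue the task if notifications arrived while
     * it was running */
    static struct task *hlua_applet_wakeup(struct task *t)
    {
        struct appctx *ctx = t->context;

        appctx_wakeup(ctx);
        return t;   /* was: return NULL */
    }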
This fix requires the previous ones ("BUG/MINOR: lua: always detach the
tcp/http tasks before freeing them" and "MINOR: task: always
preinitialize the task's timeout in task_init()").
Thanks to Thierry, Christopher and Emeric for the long head-scratching
session!
No backport is needed as the bug doesn't appear in older versions, and
it's unclear whether backporting it might break something.
task_init() is called exclusively by task_new() which is the only way
to create a task. Most callers set t->expire to TICK_ETERNITY, some set
it to another value and a few like Lua don't set it at all as they don't
need a timeout, causing random values to be used in case the task gets
queued.
Let's always set t->expire to TICK_ETERNITY in task_init() so that all
tasks are now initialized in a clean state.
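In a nutshell (sketch; TICK_ETERNITY is haproxy's "no timeout" tick
value):

    /* in task_init(), along with the other field initializations */
    t->expire = TICK_ETERNITY;   /* new: never a random expire date */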
This patch can be backported as it will definitely make the code more
robust (at least the Lua code, possibly other places).
In hlua_{http,tcp}_applet_release(), a call to task_free() is performed
to release the task, but no task_delete() is made on these tasks. Till
now it wasn't much of a problem because this was normally not done with
the task in the run queue, and the task was never put into the wait queue
since it doesn't have any timer. But with threading it will become an
issue. And not having this already prevents another bug from being fixed.
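The fix is essentially (sketch):

    /* in hlua_{http,tcp}_applet_release(): detach first, then free */
    task_delete(task);   /* new: remove from the run/wait queues */
    task_free(task);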
Thanks to Christopher for spotting this one. A backport to 1.7 and 1.6 is
preferred for safety.
For known methods (GET, POST...), in samples, an enum is used instead of
a chunk to reference the method, so there is no need to allocate memory
when a variable is stored with this kind of sample.

First, the type SMP_T_METH was not handled by the smp_dup function. It
was never called with this kind of sample, so it was not really a
problem, but this could be useful in the future.

For all known HTTP methods (GET, POST...), there is no extra space
allocated for a sample of type SMP_T_METH. But for unknown methods, a
chunk is used. So, like for strings, we duplicate the data using a trash
chunk.
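The corresponding smp_dup() case could look like this (a sketch based on
the description above, simplified):

    case SMP_T_METH:
        if (smp->data.u.meth.meth != HTTP_METH_OTHER)
            break;  /* known method: fully described by the enum */
        /* unknown method: its name lives in a chunk, duplicate it
         * into a trash chunk exactly like a string sample */
        trash = get_trash_chunk();
        trash->len = smp->data.u.meth.str.len;
        if (trash->len > trash->size - 1)
            trash->len = trash->size - 1;
        memcpy(trash->str, smp->data.u.meth.str.str, trash->len);
        trash->str[trash->len] = 0;
        smp->data.u.meth.str = *trash;
        break;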
The get_addr() method of the Lua Server class incorrectly used
INET_ADDRSTRLEN for IPv6 addresses resulting in failing to convert
longer IPv6 addresses to strings.
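The conversion buffer simply needs to be sized for the larger family
(sketch, assuming <in6> is the parsed sockaddr_in6):

    #include <arpa/inet.h>

    char buf[INET6_ADDRSTRLEN]; /* was INET_ADDRSTRLEN: too short for IPv6 */

    if (!inet_ntop(AF_INET6, &in6->sin6_addr, buf, sizeof(buf)))
        return; /* conversion failed */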
This fix should be backported to 1.7.
The get_addr() method of the Lua Server class was using the
'sockaddr_storage addr' member to get the port value. HAProxy does not
store ports in this member as it uses a separate member, called
'svc_port'.
This fix should be backported to 1.7.
In commit "MINOR: http: Switch requests/responses in TUNNEL mode only by
checking txn flags", it is possible to have an infinite loop on HTTP_MSG_CLOSING
state.
The previously documented location doesn't work anymore and must not be
used. Warning for backports: different branches are in use depending on
the version (v3.2.10 for 1.7, v3.2.5 for 1.6).
Akhnin Nikita reported that Lua doesn't build on Solaris 10 because
the code uses timegm() to parse a date, which is not provided there.
The way the man page recommends to implement timegm() is broken, as it
is based on changing the TZ environment variable at run time before
calling the function (which is obviously neither thread safe nor
efficient).

Here we rely instead on the new my_timegm() function; it should be
sufficient for all known use cases.
timegm() is not provided everywhere and the documentation on how to
replace it is bogus as it proposes an inefficient and non-thread safe
alternative.
Here we reimplement everything needed to compute the number of seconds
since Epoch based on the broken down fields in struct tm. It is only
guaranteed to return correct values for correct inputs. It was successfully
tested with all possible 32-bit values of time_t converted to struct tm
using gmtime() and back to time_t using the legacy timegm() and this
function, and both functions always produced the same result.
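A self-contained sketch of the approach (not necessarily the exact
my_timegm() code; only guaranteed for correct inputs):

    #include <time.h>

    static int is_leap(int y)
    {
        return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
    }

    /* seconds since the Epoch from broken-down UTC time, no TZ involved */
    static time_t my_timegm_sketch(const struct tm *tm)
    {
        /* days before the start of each month in a non-leap year */
        static const int cum[12] =
            { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };
        int year = tm->tm_year + 1900;
        time_t days = 0;
        int y;

        for (y = 1970; y < year; y++)
            days += 365 + is_leap(y);
        days += cum[tm->tm_mon] + (tm->tm_mon >= 2 && is_leap(year));
        days += tm->tm_mday - 1;
        return ((days * 24 + tm->tm_hour) * 60 + tm->tm_min) * 60
               + tm->tm_sec;
    }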
Thanks to Benoît Garnier for an instructive discussion and detailed
explanations of the various time functions, leading to this solution.
Commit 5db33cbd "MEDIUM: ssl: ssl_methods implementation is reworked and
factored for min/max tlsxx" dropped the case where the SSL library has
removed SSLv3. Commit 1e59fcc5 "BUG/MINOR: ssl: Be sure that SSLv3
connection methods exist for openssl < 1.1.0" fixed the build, but it is
wrong because haproxy then believes that the SSL library supports SSLv3.

SSL_OP_NO_* are flags to set in ssl_options and are the way haproxy
links the SSL library's capabilities to the haproxy configuration (the
mapping is done via the methodVersions table). SSL_OP_NO_* is 0 when the
SSL library doesn't support a new TLS version. Older versions (like
SSLv3) can be removed at build time or simply unsupported (as in
LibreSSL). In all these cases OPENSSL_NO_SSL3 is defined.

To keep the same logic, this patch redefines SSL_OP_NO_SSLv3 to 0 when
SSLv3 is not supported by the SSL library (i.e. when OPENSSL_NO_SSL3 is
defined).
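The guard presumably looks like this (a sketch matching the description
above):

    #if defined(OPENSSL_NO_SSL3)
    /* SSLv3 was removed from the lib: turn the flag into a no-op so the
     * methodVersions mapping reports the version as unsupported */
    #undef  SSL_OP_NO_SSLv3
    #define SSL_OP_NO_SSLv3 0
    #endif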
The previous patch ("MINOR: http: Rely on analyzers mask to end
processing in forward_body functions") contains a bug for keep-alive
transactions. For these transactions, the AN_REQ_FLT_END and
AN_RES_FLT_END analyzers must be removed only once all outgoing data has
been forwarded.

Instead of relying on the request or response state, we use the
"chn->analysers" mask, as all other analyzers do. So now
http_resync_states does not return anything anymore.
The debug message in http_resync_states has been improved.
When the body length of an HTTP response is undefined, the HTTP parser is blocked
in the body parsing. Before HAProxy 1.7, in this case, because
AN_RES_HTTP_XFER_BODY is never set, there is no visible effect. When the server
closes its connection to terminate the response, HAProxy catches it as a normal
closure. Since 1.7, we always set this analyzer to enter at least once in
http_response_forward_body. But, in the present case, when the server connection
is closed, http_response_forward_body is called one time too many. The
response is correctly sent to the client, but an error is caught and
logged with "SD--" flags.
To reproduce the bug, you can use the configuration "tests/test-fsm.cfg". The
tests 3 and 21 hit the bug.
The idea to fix the bug is to switch the response to TUNNEL mode without
switching the request. This is possible thanks to the previous patches.
First, we need to detect responses with an undefined body length during
state synchronization. Excluding tunnelled transactions, when the
response length is undefined, TX_CON_WANT_CLO is always set on the
transaction. So, when states are synchronized, if TX_CON_WANT_CLO is
set, the response is switched to TUNNEL mode and the request remains
unchanged.

Then, in http_msg_forward_body, we add a specific check to switch the
response to DONE mode if the body length is undefined and there is no
data filter.
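That last check could look like this (sketch; the flag and macro names
are assumptions based on this description):

    /* undefined body length and no data filter: nothing more to
     * analyze, the response is done once everything is forwarded */
    if (!(msg->flags & HTTP_MSGF_XFER_LEN) && !HAS_DATA_FILTERS(s, chn))
        msg->msg_state = HTTP_MSG_DONE;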
This patch depends on following previous commits:
* MINOR: http: Switch requests/responses in TUNNEL mode only by checking txn flags
* MINOR: http: Reorder/rewrite checks in http_resync_states
This patch must be backported to 1.7 with the 2 previous ones.
Today, the only way to have a request or a response in the
HTTP_MSG_TUNNEL state is to have the TX_CON_WANT_TUN flag set on the
transaction. So this is a symmetric state: both the request and the
response are switched to it at the same time. Hence it can be entered by
checking transaction flags only, instead of relying on the other side's
state. This is the purpose of this patch.

This way, if for any reason we need to switch only one side to TUNNEL
mode, it will be possible. And to prepare for asymmetric cases, we check
channel flags in the DONE _AND_ TUNNEL states.
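A sketch of the transaction-flag check (simplified):

    /* TUNNEL is now decided from the transaction alone, not from the
     * state of the other side */
    if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN)
        msg->msg_state = HTTP_MSG_TUNNEL;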
WARNING: This patch will be used to fix a bug. The fix will be committed
in one of the very next commits. So if the fix is backported, this one
must be backported too.
The previous patch removed the forced symmetry of the TUNNEL mode during
state synchronization. Here, we take care to remove the body analyzer
only on the channel in TUNNEL mode. In fact, today, this change has no
effect because both sides are switched at the same time. But this way,
with some changes, it will be possible to keep the body analyzer on one
side (to finish the state synchronization) while the other one is in
TUNNEL mode.
WARNING: This patch will be used to fix a bug. The fix will be committed
in one of the very next commits. So if the fix is backported, this one
must be backported too.
Make it clear that optional components must not break when disabled,
that openssl is the only officially supported library and its support
must not be broken, and that bug fixes must always be detailed.
We cannot perform garbage collection on an unreferenced thread: its
memory is already free and Lua can reuse it for other things. HAProxy is
single-threaded, so this bug doesn't cause a crash.

This patch must be backported to 1.6 and 1.7.
In some cases, the socket is misused. The user can open a socket and
never close it, or open it and close it without sending data. This
causes a leak of all the resources associated with the stream (buffer,
spoe, ...).

This is caused by the stream_shutdown function, which is called outside
of the stream execution process. Sometimes the shutdown is required
while the stream has not started yet, so the cleanup is ignored.

This patch changes the shutdown mode of the session. Now, if the session
is no longer used and Lua wants to destroy it, it just sets a destroy
flag and the session kills itself.

This patch should be backported to 1.6 and 1.7.
When we destroy the Lua session, we manipulate the Lua stack, so errors
can be raised. It is better to catch these errors.

This patch should be backported to 1.6 and 1.7.
We just forgot to reset the safe mode. This has no consequences: the
safe mode just sets a pointer to a function and initialises a longjmp.
Outside of Lua execution, this longjmp is never executed and the
function is never called.

This patch should be backported to 1.6 and 1.7.
Functions hdr_idx_first_idx() and hdr_idx_first_pos() were missing a
"const" qualifier on their arguments which are not modified, causing
a warning in some experimental H2 code.
When several stick-tables were configured with several peers sections,
only a part of them could be synchronized: the ones attached to the last
parsed 'peers' section. This was because the peer I/O handler referred
to the wrong peers section list, in fact always the same one: the last
one parsed.

The fact that the global peers section list was named "struct peers
*peers" led to this issue. This variable name is dangerous ;).

So this patch renames the global 'peers' variable to 'cfg_peers' to
ensure that no such wrong references are still in use, then all the
functions which used the old 'peers' variable have been modified to
refer to the correct peers list.
Must be backported to 1.6 and 1.7.