When a connection is marked as private, it is now added to the session's server
list. We no longer wait for a stream to be detached from the mux to do so. When
the connection is created, this happens right after the mux creation. Otherwise,
it is performed when the connection is marked as private.
To allow that, when a connection is created, the session is systematically set
as the connection owner. Thus a backend connection always has an owner during
its creation, and a private connection always has an owner until its death.
Note that outside the detach() callback, if the call to session_add_conn()
fails, the error is ignored. In this situation, we retry adding the connection
to the session's server list in the detach() callback. If it fails again at this
step, the multiplexer is destroyed and the connection is closed.
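For illustration, a minimal sketch of that last-chance handling in the
detach() path (names are taken from the description above; this is a sketch,
not the exact patch):

/* Sketch only: on stream detach, a private connection must end up in the
 * session's server list. If this last insertion attempt fails too, the
 * mux is destroyed and the connection closed rather than leaked. */
if (conn->flags & CO_FL_PRIVATE) {
        if (!session_add_conn(sess, conn, conn->target)) {
                conn->owner = NULL;
                conn->mux->destroy(conn->ctx); /* also closes the connection */
                return;
        }
}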
To set a connection as private, the conn_set_private() function must now be
called. It sets the CO_FL_PRIVATE flag, but it also removes the connection from
the available connection list, if necessary. For now, this never happens because
only HTTP/1 connections may be set as private after their creation, and these
connections are never inserted into the available connection list.
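As an illustration, conn_set_private() boils down to something like this
(a sketch; the helper withdrawing the connection from the available list is
hypothetical):

static inline void conn_set_private(struct connection *conn)
{
        if (!(conn->flags & CO_FL_PRIVATE)) {
                conn->flags |= CO_FL_PRIVATE;
                /* if it was advertised as available for reuse, withdraw it;
                 * hypothetical helper, named here for illustration only */
                conn_remove_from_avail_list(conn);
        }
}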
When a new connection is created, it may immediately be set as private if
http-reuse never is configured for the backend. There is no reason to wait for
the call to mux->detach() to do so.
If an expression is configured to set the SNI on a server connection, the
connection is marked as private. To avoid needlessly adding it to the available
connection list when the mux is installed, the SNI is now set on the connection
before installing the mux, just after the call to si_connect().
When a stream is detached from a backend private connection, we must not insert
it in the available connection list. In addition, we must be sure to remove it
from this list. To ensure it is properly performed, this part has been slightly
refactored to clearly split processing of private connections from the others.
This patch should probably be backported to 2.2.
We used to use the travis-ci brew plugin to install "socat". The travis-ci brew
plugin only behaves predictably in "all update" mode, which sometimes takes 12
minutes. Let us improve developer velocity by running brew from the command line
instead: it takes 2 minutes instead of 12.
The latest osx image seems to have a more stable brew, so let us switch to the
latest osx available.
osx images list: https://docs.travis-ci.com/user/reference/osx/#macos-version
The dummy function takes care of doing a bit of work using a malloc()
to avoid returning a constant but it doesn't free the tested pointer,
which coverity noticed in issue #741. Let's free it before testing it
for the return value.
This may be backported but is not important since this code is only present to
allow building the device detection code, not to actually run it.
A bug in task_kill() was fixed by commit 54d31170a ("BUG/MAJOR: sched:
make sure task_kill() always queues the task") which added a list
initialization before adding an element. But in fact an unconditional
addition would have done the same thing and been simpler than first
initializing the element and then checking that it was initialized. Let's use
MT_LIST_ADDQ() there to add the task to kill into the shared queue
and kill the dirty LIST_INIT().
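For illustration, the change amounts to something like this (a sketch, not
the exact diff; the shared queue name is a placeholder):

/* Before: MT_LIST_INIT() followed by a checked insertion.
 * After: the caller owns the element, so queue it unconditionally. */
MT_LIST_ADDQ(&shared_tasklet_queue, (struct mt_list *)&t->list);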
When a connection is added to an idle list, it's already detached and
cannot be seen by two threads at once, so there's no point using
TRY_ADDQ, there will never be any conflict. Let's just use the cheaper
ADDQ.
The TRY_ADDQ there was not needed since the wait list is exclusively
owned by the caller. There's a preliminary test on MT_LIST_ADDED()
that might have been eliminated by keeping MT_LIST_TRY_ADDQ() but
it would have required two more expensive writes before testing so
better keep the test the way it is.
Initially when mt_lists were added, their purpose was to be used with
the scheduler, where anyone may concurrently add the same tasklet, so
it sounded natural to implement a check in MT_LIST_ADD{,Q}. Later their
usage was extended and MT_LIST_ADD{,Q} started to be used in situations
where the element to be added was exclusively owned by the one performing
the operation, so a conflict was impossible. This became more obvious with
the idle connections and the new macro was called MT_LIST_ADDQ_NOCHECK.
But this remains confusing and at many places it's not expected that
an MT_LIST_ADD could possibly fail, and worse, at some places we start
by initializing it before adding (and the test is superfluous) so let's
rename them to something more conventional to denote the presence of the
check or not:
MT_LIST_ADD{,Q} : unconditional operation, the caller owns the
element, and doesn't care about the element's
current state (exactly like LIST_ADD)
MT_LIST_TRY_ADD{,Q}: only perform the operation if the element is not
already added or in the process of being added.
This means that the previously "safe" MT_LIST_ADD{,Q} are not "safe"
anymore. This also means that in case of backport mistakes in the
future causing this to be overlooked, the slower and safer functions
will still be used by default.
Note that the missing unchecked MT_LIST_ADD macro was added.
The rest of the code will have to be reviewed so that a number of
callers of MT_LIST_TRY_ADDQ are changed to MT_LIST_ADDQ to remove
the unneeded test.
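A short illustration of the two semantics after the rename (a sketch; the
list and element names are placeholders):

/* The caller exclusively owns 'elt' (e.g. an idle connection being
 * parked): no other thread can insert it concurrently, so the cheaper,
 * unconditional form is used. */
MT_LIST_ADDQ(&queue, &elt);

/* Several threads may race to wake up the same tasklet: the checked form
 * only inserts the element if it is not already added (or being added)
 * and reports whether this call performed the insertion. */
if (MT_LIST_TRY_ADDQ(&queue, &tasklet_elt))
        /* we are the thread that queued it */;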
Previous commit b24bc0d ("MINOR: tcp: Support TCP keepalive parameters
customization") broke non-Linux builds as TCP_KEEP{CNT,IDLE,INTVL} are
not necessarily defined elsewhere.
This patch adds the required #ifdefs to condition the visibility of the
keywords, and adds a mention in the doc about their dependency on Linux.
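For illustration, the guard looks roughly like this (a hedged sketch with a
hypothetical helper, not HAProxy's actual code):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Sketch only: apply configured keepalive parameters where the platform
 * defines the corresponding socket options; elsewhere they compile out. */
static void apply_tcp_keepalive(int fd, int cnt, int idle, int intvl)
{
#ifdef TCP_KEEPCNT
        if (cnt)
                setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
#endif
#ifdef TCP_KEEPIDLE
        if (idle)
                setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
#endif
#ifdef TCP_KEEPINTVL
        if (intvl)
                setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
#endif
}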
It is now possible to customize TCP keepalive parameters.
These correspond to the socket options TCP_KEEPCNT, TCP_KEEPIDLE, TCP_KEEPINTVL
and are valid for the defaults, listen, frontend and backend sections.
This patch fixes GitHub issue #670.
The torture test run for previous commit 787dc20 ("BUG/MEDIUM: lists: add
missing store barrier on MT_LIST_BEHEAD()") finally broke again after 34M
connections. It appeared that MT_LIST_ADD and MT_LIST_ADDQ were suffering
from the same missing barrier when restoring the original pointers before
giving up, when checking if the element was already added. This is indeed
something which seldom happens with the shared scheduler, in case two
threads simultaneously try to wake up the same tasklet.
With a store barrier there after reverting the pointers, the torture test
survived 750M connections on the NanoPI-Fire3, so it looks good this time.
MT_LIST_BEHEAD should probably be added to test-list.c since it seems
to be more sensitive to concurrent accesses with MT_LIST_ADDQ.
It's worth noting that there is no barrier between the last two pointer
updates, while there is one in MT_LIST_POP and MT_LIST_BEHEAD, the latter
having shown to be needed, but I cannot demonstrate why we would need
one here. Given that the code seems solid here, let's stick to what is
shown to work.
This fix should be backported to 2.1, just for the sake of safety since
the issue couldn't be triggered there, but it could change with the
compiler or when backporting a fix for example.
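As a generic illustration of the pattern (not the actual macro body, and
assuming HAProxy's struct mt_list and __ha_barrier_store()), the failure
path must publish its restoring store before another thread may observe the
element again:

/* Sketch only: on a weakly ordered CPU such as the Cortex-A53, the store
 * restoring the grabbed pointer must become visible before the element
 * can be seen as unlocked again by a concurrent MT_LIST_ADDQ()/DEL(). */
static inline void restore_before_retry(struct mt_list *elt,
                                        struct mt_list *saved_next)
{
        elt->next = saved_next;  /* undo the tentative update */
        __ha_barrier_store();    /* publish the restore before giving up */
}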
When running multi-threaded tests on my NanoPI-Fire3 (8 A53 cores), I
managed to occasionally get either a bus error or a segfault in the
scheduler, but only when running at a high connection rate (injecting
on a tcp-request connection reject rule). The bug is rare and happens
around once per million connections. I could never reproduce it with
less than 4 threads nor on A72 cores.
Haproxy 2.1.0 would also fail there but not 2.1.7.
Every time, the crash happened with the TL_URGENT task list corrupted,
though it was not immediately after the LIST_SPLICE() call, indicating
background activity survived the MT_LIST_BEHEAD() operation. This queue
is where the shared runqueue is transferred, and the shared runqueue
gets fast inter-thread tasklet wakeups from idle conn takeover and new
connections.
Comparing the MT_LIST_BEHEAD() and MT_LIST_DEL() implementations, it's
quite obvious that a few barriers are missing from the former, and these
will simply fail on weakly ordered caches.
Two store barriers were added before the break() on failure, to match
what is done on the normal path. Missing them almost always results in
a segfault which is quite rare but consistent (after ~3M connections).
The 3rd one before updating n->prev seems intuitively needed though I
couldn't make the code fail without it. It's present in MT_LIST_DEL so
better not be needlessly creative. The last one is the most important
one, and seems to be the one that helps a concurrent MT_LIST_ADDQ()
detect a late failure and try again. With this, the code survives at
least 30M connections.
Interestingly the exact same issue was addressed in 2.0-dev2 for MT_LIST_DEL
with commit 690d2ad4d ("BUG/MEDIUM: list: add missing store barriers when
updating elements and head").
This fix must be backported to 2.1 as MT_LIST_BEHEAD() is also used there.
It's only tagged as medium because it will only affect entry-level CPUs
like Cortex A53 (x86 are not affected), and requires load levels that are
very hard to achieve on such machines to trigger it. In practice it's
unlikely anyone will ever hit it.
Compiling HAProxy with USE_LUA=1 and running a configuration check within
valgrind with a very simple configuration such as:
listen foo
bind *:8080
Will report quite a few possible leaks afterwards:
==24048== LEAK SUMMARY:
==24048== definitely lost: 0 bytes in 0 blocks
==24048== indirectly lost: 0 bytes in 0 blocks
==24048== possibly lost: 95,513 bytes in 1,209 blocks
==24048== still reachable: 329,960 bytes in 71 blocks
==24048== suppressed: 0 bytes in 0 blocks
Printing these possible leaks shows that all of them are caused by Lua.
Luckily Lua makes it *very* easy to free all used memory, so let's do
this on shutdown.
After this patch is applied, the output looks much better:
==24199== LEAK SUMMARY:
==24199== definitely lost: 0 bytes in 0 blocks
==24199== indirectly lost: 0 bytes in 0 blocks
==24199== possibly lost: 0 bytes in 0 blocks
==24199== still reachable: 329,960 bytes in 71 blocks
==24199== suppressed: 0 bytes in 0 blocks
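Freeing everything boils down to closing the Lua state during deinit,
roughly like this (a sketch; the global variable and hook name are
assumptions for illustration):

#include <lua.h>

static lua_State *hlua_state;   /* hypothetical global holding the state */

/* Sketch only: lua_close() releases every object the Lua state ever
 * allocated, which is what makes the "possibly lost" blocks disappear. */
static void hlua_deinit(void)
{
        if (hlua_state) {
                lua_close(hlua_state);
                hlua_state = NULL;
        }
}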
Given the following example configuration:
listen foo
mode http
bind *:8080
http-request set-var(txn.leak) meth(GET)
server x example.com:80
Running a configuration check with valgrind reports:
==25992== 4 bytes in 1 blocks are definitely lost in loss record 1 of 344
==25992== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==25992== by 0x4E239D: my_strndup (tools.c:2261)
==25992== by 0x581E20: make_arg_list (arg.c:253)
==25992== by 0x4DE91D: sample_parse_expr (sample.c:890)
==25992== by 0x58E304: parse_store (vars.c:772)
==25992== by 0x566A3F: parse_http_req_cond (http_rules.c:95)
==25992== by 0x4A4CE6: cfg_parse_listen (cfgparse-listen.c:1339)
==25992== by 0x494C59: readcfgfile (cfgparse.c:2049)
==25992== by 0x545145: init (haproxy.c:2029)
==25992== by 0x421E42: main (haproxy.c:3175)
After this patch is applied the leak is gone as expected.
This is a fairly minor leak, but it can add up for many uses of the `bool()`
sample fetch. The bug most likely exists since the `bool()` sample fetch was
introduced in commit cc103299c7. The fix may
be backported to HAProxy 1.6+.
Given the following example configuration:
listen foo
mode http
bind *:8080
http-request set-var(txn.leak) bool(1)
server x example.com:80
Running a configuration check with valgrind reports:
==24233== 2 bytes in 1 blocks are definitely lost in loss record 1 of 345
==24233== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==24233== by 0x4E238D: my_strndup (tools.c:2261)
==24233== by 0x581E10: make_arg_list (arg.c:253)
==24233== by 0x4DE90D: sample_parse_expr (sample.c:890)
==24233== by 0x58E2F4: parse_store (vars.c:772)
==24233== by 0x566A2F: parse_http_req_cond (http_rules.c:95)
==24233== by 0x4A4CE6: cfg_parse_listen (cfgparse-listen.c:1339)
==24233== by 0x494C59: readcfgfile (cfgparse.c:2049)
==24233== by 0x545135: init (haproxy.c:2029)
==24233== by 0x421E42: main (haproxy.c:3175)
After this patch is applied the leak is gone as expected.
This is a fairly minor leak, but it can add up for many uses of the `bool()`
sample fetch. The bug most likely exists since the `bool()` sample fetch was
introduced in commit cc103299c7. The fix may
be backported to HAProxy 1.6+.
Given the following example configuration:
backend foo
mode http
use-server %[str(x)] if { always_true }
server x example.com:80
Running a configuration check with valgrind reports:
==19376== 170 (40 direct, 130 indirect) bytes in 1 blocks are definitely lost in loss record 281 of 347
==19376== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==19376== by 0x5091AC: add_sample_to_logformat_list (log.c:511)
==19376== by 0x50A5A6: parse_logformat_string (log.c:671)
==19376== by 0x4957F2: check_config_validity (cfgparse.c:2588)
==19376== by 0x54442D: init (haproxy.c:2129)
==19376== by 0x421E42: main (haproxy.c:3169)
After this patch is applied the leak is gone as expected.
This is a very minor leak that can only be observed if deinit() is called,
shortly before the OS will free all memory of the process anyway. No
backport needed.
Given the following example configuration:
backend foo
mode http
use-server x if { always_true }
server x example.com:80
Running a configuration check with valgrind reports:
==18650== 14 bytes in 1 blocks are definitely lost in loss record 3 of 345
==18650== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18650== by 0x649E489: strdup (strdup.c:42)
==18650== by 0x4A5438: cfg_parse_listen (cfgparse-listen.c:1548)
==18650== by 0x494C59: readcfgfile (cfgparse.c:2049)
==18650== by 0x5450B5: init (haproxy.c:2029)
==18650== by 0x421E42: main (haproxy.c:3168)
After this patch is applied the leak is gone as expected.
This is a very minor leak that can only be observed if deinit() is called,
shortly before the OS will free all memory of the process anyway. No
backport needed.
Given the following example configuration:
frontend foo
mode http
bind *:8080
unique-id-header x
Running a configuration check with valgrind reports:
==17621== 2 bytes in 1 blocks are definitely lost in loss record 1 of 341
==17621== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==17621== by 0x649E489: strdup (strdup.c:42)
==17621== by 0x4A87F1: cfg_parse_listen (cfgparse-listen.c:2747)
==17621== by 0x494C59: readcfgfile (cfgparse.c:2049)
==17621== by 0x545095: init (haproxy.c:2029)
==17621== by 0x421E42: main (haproxy.c:3167)
After this patch is applied the leak is gone as expected.
This is a very minor leak that can only be observed if deinit() is called,
shortly before the OS will free all memory of the process anyway. No
backport needed.
Given the following example configuration:
resolvers test
nameserver test 127.0.0.1:53
listen foo
bind *:8080
server foo example.com resolvers test
Running a configuration check within valgrind reports:
==21995== 5 bytes in 1 blocks are definitely lost in loss record 1 of 30
==21995== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==21995== by 0x5726489: strdup (strdup.c:42)
==21995== by 0x4B2CFB: parse_server (server.c:2163)
==21995== by 0x4680C1: cfg_parse_listen (cfgparse-listen.c:534)
==21995== by 0x459E33: readcfgfile (cfgparse.c:2167)
==21995== by 0x50778D: init (haproxy.c:2021)
==21995== by 0x418262: main (haproxy.c:3133)
==21995==
==21995== 12 bytes in 1 blocks are definitely lost in loss record 3 of 30
==21995== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==21995== by 0x5726489: strdup (strdup.c:42)
==21995== by 0x4AC666: srv_prepare_for_resolution (server.c:1606)
==21995== by 0x4B2EBD: parse_server (server.c:2081)
==21995== by 0x4680C1: cfg_parse_listen (cfgparse-listen.c:534)
==21995== by 0x459E33: readcfgfile (cfgparse.c:2167)
==21995== by 0x50778D: init (haproxy.c:2021)
==21995== by 0x418262: main (haproxy.c:3133)
with one more leak unrelated to `struct server`. After applying this
patch the leak is gone as expected.
This is a very minor leak that can only be observed if deinit() is called,
shortly before the OS will free all memory of the process anyway. No
backport needed.
Given the following example configuration:
frontend foo
mode http
bind *:8080
unique-id-format x
Running a configuration check with valgrind reports:
==30712== 42 (40 direct, 2 indirect) bytes in 1 blocks are definitely lost in loss record 18 of 39
==30712== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==30712== by 0x4ED7E9: add_to_logformat_list (log.c:462)
==30712== by 0x4EEE28: parse_logformat_string (log.c:720)
==30712== by 0x47B09A: check_config_validity (cfgparse.c:3046)
==30712== by 0x52881D: init (haproxy.c:2121)
==30712== by 0x41F382: main (haproxy.c:3126)
After this patch is applied the leak is gone as expected.
This is a very minor leak that can only be observed if deinit() is called,
shortly before the OS will free all memory of the process anyway. No
backport needed.
In addition to calling release_sample_arg(conv_expr->arg_p), we must also
free() the conv_expr itself (after removing it from the list).
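A minimal sketch of the resulting cleanup (member names follow HAProxy's
sample expression structures; illustrative only):

struct sample_conv_expr *conv_expr, *conv_expr_bck;

/* Sketch only: each converter entry is detached from the expression's
 * converter list, its arguments are released, then the entry itself. */
list_for_each_entry_safe(conv_expr, conv_expr_bck, &expr->conv_exprs, list) {
        LIST_DEL(&conv_expr->list);
        release_sample_arg(conv_expr->arg_p);
        free(conv_expr);
}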
Given the following example configuration:
frontend foo
bind *:8080
mode http
http-request set-var(txn.foo) str(bar)
acl is_match str(foo),strcmp(txn.hash) -m bool
Running a configuration check within valgrind reports:
==1431== 32 bytes in 1 blocks are definitely lost in loss record 20 of 43
==1431== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1431== by 0x4C39B5: sample_parse_expr (sample.c:982)
==1431== by 0x56B410: parse_acl_expr (acl.c:319)
==1431== by 0x56BA7F: parse_acl (acl.c:697)
==1431== by 0x48D225: cfg_parse_listen (cfgparse-listen.c:816)
==1431== by 0x4797C3: readcfgfile (cfgparse.c:2167)
==1431== by 0x52943D: init (haproxy.c:2021)
==1431== by 0x41F382: main (haproxy.c:3133)
After this patch is applied the leak is gone as expected.
This is a fairly minor leak that can only be observed if samples need to be
freed, which is not something that should occur during normal processing and
most likely only during shut down. Thus no backport should be needed.
Instead of simply calling free() on expr->smp->arg_p in certain cases,
properly free the sample using release_sample_expr().
Given the following example configuration:
frontend foo
bind *:8080
mode http
http-request set-var(txn.foo) str(bar)
acl is_match str(foo),strcmp(txn.hash) -m bool
Running a configuration check within valgrind reports:
==31371== 160 (48 direct, 112 indirect) bytes in 1 blocks are definitely lost in loss record 35 of 45
==31371== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==31371== by 0x4C3832: sample_parse_expr (sample.c:876)
==31371== by 0x56B3E0: parse_acl_expr (acl.c:319)
==31371== by 0x56BA4F: parse_acl (acl.c:697)
==31371== by 0x48D225: cfg_parse_listen (cfgparse-listen.c:816)
==31371== by 0x4797C3: readcfgfile (cfgparse.c:2167)
==31371== by 0x5293ED: init (haproxy.c:2021)
==31371== by 0x41F382: main (haproxy.c:3126)
After this patch this leak is reduced. It will be fully removed in a
follow-up patch:
==32503== 32 bytes in 1 blocks are definitely lost in loss record 20 of 43
==32503== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32503== by 0x4C39B5: sample_parse_expr (sample.c:982)
==32503== by 0x56B410: parse_acl_expr (acl.c:319)
==32503== by 0x56BA7F: parse_acl (acl.c:697)
==32503== by 0x48D225: cfg_parse_listen (cfgparse-listen.c:816)
==32503== by 0x4797C3: readcfgfile (cfgparse.c:2167)
==32503== by 0x52943D: init (haproxy.c:2021)
==32503== by 0x41F382: main (haproxy.c:3133)
This is a fairly minor leak that can only be observed if ACLs need to be
freed, which is not something that should occur during normal processing
and most likely only during shut down. Thus no backport should be needed.
Released version 2.3-dev0 with the following main changes :
- [RELEASE] Released version 2.3-dev0
- MINOR: version: back to development, update status message
Released version 2.2.0 with the following main changes :
- BUILD: mux-h2: fix typo breaking build when using DEBUG_LOCK
- CLEANUP: makefile: update the outdated list of DEBUG_xxx options
- BUILD: tools: make resolve_sym_name() return a const
- CLEANUP: auth: fix useless self-include of auth-t.h
- BUILD: tree-wide: cast arguments to tolower/toupper to unsigned char
- CLEANUP: assorted typo fixes in the code and comments
- WIP/MINOR: ssl: add sample fetches for keylog in frontend
- DOC: fix tune.ssl.keylog sample fetches array
- BUG/MINOR: ssl: check conn in keylog sample fetch
- DOC: configuration: various typo fixes
- MINOR: log: Remove unused case statement during the log-format string parsing
- BUG/MINOR: mux-h1: Fix the splicing in TUNNEL mode
- BUG/MINOR: mux-h1: Don't read data from a pipe if the mux is unable to receive
- BUG/MINOR: mux-h1: Disable splicing only if input data was processed
- BUG/MEDIUM: mux-h1: Disable splicing for the conn-stream if read0 is received
- MINOR: mux-h1: Improve traces about the splicing
- BUG/MINOR: backend: Remove CO_FL_SESS_IDLE if a client remains on the last server
- BUG/MEDIUM: connection: Don't consider new private connections as available
- BUG/MINOR: connection: See new connection as available only on reuse always
- DOC: configuration: remove obsolete mentions of H2 being converted to HTTP/1.x
- CLEANUP: ssl: remove unrelevant comment in smp_fetch_ssl_x_keylog()
- DOC: update INSTALL with new compiler versions
- DOC: minor update to coding style file
- MINOR: version: mention that it's an LTS release now
The first H2 implementation in version 1.8 used to turn HTTP/2 requests
into HTTP/1.1, causing many limitations. This is not true anymore and we
don't suffer from the lack of server-side H2 nor are we forced into close
mode anymore, so let's remove such obsolete mentions.
This could be backported to 2.0.
When the multiplexer creation is delayed until after the handshake phase, the
connection is added to the available connection list if http-reuse never is not
configured for the backend. But this is the wrong condition. At this step, the
connection is not safe because it is a new connection. So it must be added to
the available connection list only if http-reuse always is used.
No backport needed, this is 2.2-dev.
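For illustration, the corrected condition looks roughly like this (a sketch;
the helper adding the connection to the available list is hypothetical):

/* Sketch only: a brand new connection is not yet proven safe, so it may
 * only be advertised for sharing when "http-reuse always" is configured. */
if ((srv->proxy->options & PR_O_REUSE_MASK) == PR_O_REUSE_ALWS &&
    !(conn->flags & CO_FL_PRIVATE))
        add_conn_to_available_list(srv, conn);  /* hypothetical helper */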
When a connection is created and the multiplexer is installed, if the
connection is marked as private, don't consider it as available, regardless of
the number of available streams. This test is performed when the mux is
installed at connection creation, in connect_server(), and when the mux is
installed after the handshake stage.
No backport needed, this is 2.2-dev.
When a connection is picked from the session server list because the proxy or
the session is marked to use the last requested server, if it is idle, we must
mark it as used, removing the CO_FL_SESS_IDLE flag and decrementing the
session's idle_conns counter.
This patch must be backported as far as 1.9.
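For illustration, the fix amounts to something like this when the connection
is picked back (a sketch; field names as described above):

/* Sketch only: a connection reused from the session's server list must no
 * longer be accounted as a session-idle connection. */
if (conn->flags & CO_FL_SESS_IDLE) {
        conn->flags &= ~CO_FL_SESS_IDLE;
        sess->idle_conns--;
}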
Trace messages have been added when the CS_FL_MAY_SPLICE flag is set or unset
and when the splicing is really enabled for the H1 connection.
This patch may be backported to 2.1 to ease debugging.
The CS_FL_MAY_SPLICE flag must be unset for the conn-stream if a read0 is
received while reading on the kernel pipe. It is mandatory when some data was
also received. Otherwise, this flag prevents the call to the h1 rcv_buf()
callback. Thus the read0 will never be handled by the h1 multiplexer, leading
to a freeze of the session until a timeout is reached.
This patch must be backported to 2.1 and 2.0.
In h1_rcv_buf(), splicing is systematically disabled if it was previously
enabled. When this happens, if splicing is enabled it means the channel's
buffer was empty before calling h1_rcv_buf(). Thus, the only reason to disable
splicing at this step is when some input data has just been processed.
This patch may be backported to 2.1 and 2.0.
In h1_rcv_pipe(), if the mux is unable to receive data, for instance because
the multiplexer is blocked on input waiting for the other side (BUSY mode), no
receive must be performed.
This patch must be backported to 2.1 and 2.0.
In commit 17ccd1a35 ("BUG/MEDIUM: connection: add a mux flag to indicate
splice usability"), the CS_FL_MAY_SPLICE flag was added to notify the upper
layer that the mux is able to use splicing. But this was only done for the
payload of a message, in the HTTP_MSG_DATA state, whereas splicing is also
possible in TUNNEL mode, in the HTTP_MSG_TUNNEL state. In addition, the
splicing ability is always disabled for chunked messages.
This patch must be backported to 2.1 and 2.0.
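A minimal sketch of the intended logic (state names are taken from the
message above; the chunked-encoding flag name is an assumption):

/* Sketch only: advertise splicing for raw payload and for tunnelled data,
 * never for chunked messages. */
if ((h1m->state == HTTP_MSG_DATA || h1m->state == HTTP_MSG_TUNNEL) &&
    !(h1m->flags & H1_MF_CHNK))
        cs->flags |= CS_FL_MAY_SPLICE;
else
        cs->flags &= ~CS_FL_MAY_SPLICE;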