This way the directory structure remains the same as with the real lib and
one can apply the same build options regardless of where the lib is stored,
removing any possible confusion.
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
complains:
> src/debug.c: In function "ha_panic":
> src/debug.c:162:2: warning: ignoring return value of "write", declared with attribute warn_unused_result [-Wunused-result]
> (void) write(2, trash.area, trash.data);
> ^
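For reference, one common way to silence this class of warning, sketched here independently of the actual fix, is to consume the return value explicitly, since gcc ignores a bare (void) cast on functions tagged warn_unused_result :

#include <unistd.h>

/* Illustrative helper only: assigning the result to a variable that is then
 * explicitly discarded satisfies -Wunused-result without disabling it. */
static inline void write_ignoring_result(int fd, const void *buf, size_t len)
{
        ssize_t ret = write(fd, buf, len);
        (void)ret; /* result deliberately ignored */
}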
Change the formatting of the uptime field in "show proc" so it's easier
to parse. Remove the space between the day and the hour and align the
field on 15 characters.
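For illustration only (this is not the actual code), such a field can be produced with something like :

#include <stdio.h>

/* Illustrative sketch: format the uptime as "<days>d<HH>h<MM>m<SS>s" with no
 * space after the day count, then left-align the result on 15 characters. */
static void fmt_uptime(char *out, size_t outsz, unsigned int up)
{
        char tmp[32];

        snprintf(tmp, sizeof(tmp), "%ud%02uh%02um%02us",
                 up / 86400, (up % 86400) / 3600, (up % 3600) / 60, up % 60);
        snprintf(out, outsz, "%-15s", tmp);
}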
These are intended for use by HAProxy developers to ensure any changes
did not affect the 51Degrees implementation. The 51Degrees module can be
enabled and used with the source in contrib/51d. This will run
without breaking, but will not return any meaningful information.
This is ideal for testing HAProxy core code, and other modules alongside
51Degrees, but should never be used as an actual module as it does
nothing.
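For reference, building against the dummy sources typically looks like this (the exact paths below depend on the contrib/51d layout and are only illustrative) :

$ make TARGET=linux-glibc USE_51DEGREES=1 51DEGREES_SRC=contrib/51d/src/pattern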
The _51d_fetch method, and the two methods it calls to fetch HTTP
headers (_51d_set_device_offsets, and _51d_set_headers), now support
both legacy and HTX operation.
This should be backported to 1.9.
This action is particularly convenient to replace some deprecated uses
of "reqrep". It takes a match and a format string including back-
references. The reqrep warning was updated to suggest it as well.
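Assuming the action referred to here is "http-request replace-uri" (it is not named in this excerpt), a typical replacement for a reqrep rule would look like this :

frontend fe
    bind :8080
    http-request replace-uri ^/static/(.*) /\1
    default_backend be

backend be
    server s1 127.0.0.1:8000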
Released version 2.0-dev7 with the following main changes :
- BUG/MEDIUM: mux-h2: make sure the connection timeout is always set
- MINOR: tools: add new bitmap manipulation functions
- MINOR: logs: use the new bitmap functions instead of fd_sets for encoding maps
- MINOR: chunks: Make sure trash_size is only set once.
- Revert "MINOR: chunks: Make sure trash_size is only set once."
- MINOR: threads: serialize threads initialization
- MINOR: peers: data structure simplifications for server names dictionary cache.
- DOC: peers: Update for dictionary cache entries for peers protocol.
- MINOR: dict: Store the length of the dictionary entries.
- MINOR: peers: A bit of optimization when encoding cached server names.
- MINOR: peers: Optimization for dictionary cache lookup.
- MEDIUM: tools: improve time format error detection
- BUG/MEDIUM: H1: When upgrading, make sure we don't free the buffer too early.
- BUG/MEDIUM: stream_interface: Make sure we call si_cs_process() if CS_FL_EOI.
- MINOR: threads: avoid clearing harmless twice in thread_release()
- MEDIUM: threads: add thread_sync_release() to synchronize steps
- BUG/MEDIUM: init/threads: prevent initialized threads from starting before others
- OPTIM/MINOR: init/threads: only call protocol_enable_all() on first thread
- BUG/MINOR: dict: race condition fix when inserting dictionary entries.
- MEDIUM: init/threads: don't use spinlocks during the init phase
- BUG/MINOR: cache/htx: Fix the counting of data already sent by the cache applet
- BUG/MEDIUM: compression/htx: Fix the adding of the last data block
- MINOR: flt_trace: Don't scratch the original offset during the random forwarding
- MAJOR: htx: Rework how free rooms are tracked in an HTX message
- MINOR: htx: Add the function htx_move_blk_before()
- Revert "BUG/MEDIUM: H1: When upgrading, make sure we don't free the buffer too early."
- BUG/MINOR: http-rules: mention "deny_status" for "deny" in the error message
- MINOR: http: turn default error files to HTTP/1.1
- BUG/MEDIUM: h1: Don't try to subscribe if we had a connection error.
- BUG/MEDIUM: h1: Don't consider we're connected if the handshake isn't done.
- MINOR: contrib/spoa_server: Upgrade SPOP to 2.0
- BUG/MEDIUM: contrib/spoa_server: Set FIN flag on agent frames
- MINOR: contrib/spoa_server: Add random IP score
- DOC/MINOR: contrib/spoa_server: Fix typo in README
The example configuration uses sess.ip_score; however, this variable
is not referenced within the example scripts. This patch adds support
for sess.ip_score to the python + lua scripts and generates a
random number between 1 and 100.
In h1_process(), don't consider we're connected if we still have handshakes
pending. It used not to happen, because we would not be called if there
were any ongoing handshakes, but that changed now that the handshakes are
handled by an xprt, and not by conn_fd_handler() directly.
For quite a long time we've been saying that the default error files
should produce HTTP/1.1 responses and since it's of low importance, it
always gets forgotten.
So here it finally comes. Each status code now properly contains a
content-length header so that the output is clean and doesn't force
upstream proxies to switch to chunked encoding or to close the connection
immediately after the response, which is particularly annoying for 401
or 407 for example. It's worth noting that the 3xx codes had already
been turned to HTTP/1.1.
This patch will obviously not change anything for user-provided error files.
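For illustration, such a default response now looks roughly like the message below (the exact headers and wording of the built-in files may differ slightly; the Content-length shown matches this example body, trailing newline included) :

HTTP/1.1 503 Service Unavailable
Content-length: 107
Cache-Control: no-cache
Content-Type: text/html

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>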
The error message indicating an unknown keyword on an http-request rule
doesn't mention the "deny_status" option which comes with the "deny" rule,
this is particularly confusing.
This can be backported to all versions supporting this option.
The function htx_add_data_before() was removed because it was buggy. The
function htx_move_blk_before() may be used if necessary to do something
equivalent, except it just moves blocks. It doesn't handle the adding.
In an HTX message, there may be 2 available rooms to store a new block. The first
one is between the blocks and their payload. Blocks are added starting from the
end of the buffer and their payloads are added starting from the beginning. So
the first free room is between these 2 edges. The second one is at the beginning
of the buffer, when we start to wrap to add new payloads. Once we start to use
this one, the other one is ignored until the next defragmentation of the HTX
message.
In theory, there is no problem. But in practice, some shortcomings in the HTX
structure force us to defragment HTX messages too often to always be in a known
state. The second free room is not tracked as it should be, and the first one may
easily be corrupted when rewrites happen.
So to fix the problem and avoid unnecessary defragmentation, the HTX structure
has been refactored. The front (the block's position of the first payload before
the blocks) is no longer stored. Instead we keep the relative addresses of 3 edges:
* tail_addr : The start address of the free space in front of the blocks
table
* head_addr : The start address of the free space at the beginning
* end_addr : The end address of the free space at the beginning
Here is the general view of the HTX message now:
         head_addr    end_addr     tail_addr
             |            |            |
             V            V            V
+------------+------------+------------+------------+------------------+
| | | | | |
| PAYLOAD | Free space | PAYLOAD | Free space | Blocks area |
| ==> | 1 | ==> | 2 | <== |
+------------+------------+------------+------------+------------------+
<head_addr> is always lower or equal to <end_addr> and <tail_addr>. <end_addr>
is always lower or equal to <tail_addr>.
In addition, to simplify everything, the blocks area is now contiguous. It
doesn't wrap anymore. So the head is always the block with the lowest position,
and the tail is always the one with the highest position.
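As a simplified sketch (not the real struct htx), keeping these three relative addresses is enough to know both free rooms at any time without walking the blocks :

#include <stdint.h>

/* Simplified stand-in for the reworked bookkeeping described above. */
struct htx_edges {
        uint32_t head_addr;  /* start of the free space at the beginning       */
        uint32_t end_addr;   /* end of the free space at the beginning         */
        uint32_t tail_addr;  /* start of the free space in front of the blocks */
};

/* free room at the beginning of the buffer ("Free space 1" above) */
static inline uint32_t room_at_head(const struct htx_edges *e)
{
        return e->end_addr - e->head_addr;
}

/* free room in front of the blocks table ("Free space 2" above),
 * <blocks_start> being where the blocks area currently begins */
static inline uint32_t room_at_tail(const struct htx_edges *e, uint32_t blocks_start)
{
        return blocks_start - e->tail_addr;
}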
There is no bug here, but this patch improves the debug message reported during
the random forwarding. The original offset is kept untouched so its value may be
used to format the message. Before, 0 was always reported.
The function htx_add_data_before() is buggy and cannot work. It first adds a data
block and then moves it before another one, passed in argument. The problem
happens when a defragmentation is done to add the new block. In this case, the
reference is no longer valid, because the blocks are rearranged. So, instead of
moving the new block before the reference, it is moved to the head of the HTX
message.
So this function has been removed. It was only used by the compression filter to
add a last data block before a TLR, EOT or EOM block. Now, the new function
htx_add_last_data() is used. It adds a last data block, after all others and
before any TLR, EOT or EOM block. Then, the next block is retrieved. It is the first
non-data block after the data in the HTX message. The compression loop continues
with it.
This patch must be backported to 1.9.
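As a generic illustration of the hazard described above (unrelated to the real HTX API), any element pointer taken before a compaction step may designate a different element afterwards :

/* Simplified illustration: defrag() moves elements, so a pointer taken
 * before the call must not be used as a reference afterwards. */
struct blk_sketch { int tag; };

struct area_sketch {
        struct blk_sketch blks[16];
        int used;
};

static void defrag(struct area_sketch *a)
{
        int i, j = 0;

        for (i = 0; i < a->used; i++)
                if (a->blks[i].tag)
                        a->blks[j++] = a->blks[i];  /* elements move here */
        a->used = j;
}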
Since the commit 8f3c256f7 ("MEDIUM: cache/htx: Always store info about HTX
blocks in the cache"), it is possible to read info about a data block without
sending anything. It is possible because we rely on the function htx_add_data(),
which will try to add data without any defragmentation. In such a case, info about
the data block is skipped but doesn't count as data sent.
No need to backport this patch, except if the commit 8f3c256f7 is backported
too.
PiBa-NL found some pathological cases where starting threads can hinder
each other and cause a measurable slow down. This problem is reproducible
with the following config (haproxy must be built with -DDEBUG_DEV) :
global
    stats socket /tmp/sock1 mode 666 level admin
    nbthread 64

backend stopme
    timeout server 1s
    option tcp-check
    tcp-check send "debug dev exit\n"
    server cli unix@/tmp/sock1 check
This will cause the process to be stopped once the checks are ready to
start. Binding all these to just a few cores magnifies the problem.
Starting them in loops shows a significant time difference among the
commits :
# before startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e186161 -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m1.581s
user 0m0.621s
sys 0m5.339s
# after startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e4d7c9dd -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m2.366s
user 0m0.894s
sys 0m8.238s
In order to address this, let's use plain mutexes and cond_wait during
the init phase. With this done, waiting threads now sleep and the problem
completely disappeared :
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m0.161s
user 0m0.079s
sys 0m0.149s
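A minimal sketch of the pattern, using plain pthread primitives rather than the actual HAProxy code :

#include <pthread.h>

/* Minimal sketch: threads run their init step one at a time, sleeping on a
 * condition variable instead of spinning on shared flags. */
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  init_cond = PTHREAD_COND_INITIALIZER;
static int init_turn;   /* id of the thread currently allowed to initialize */

static void serialized_init(int tid, void (*init_step)(int))
{
        pthread_mutex_lock(&init_lock);
        while (init_turn != tid)
                pthread_cond_wait(&init_cond, &init_lock); /* sleep, don't spin */

        init_step(tid);                 /* run this thread's init alone */

        init_turn++;                    /* pass the turn to the next thread */
        pthread_cond_broadcast(&init_cond);
        pthread_mutex_unlock(&init_lock);
}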
When checking the result of an ebis_insert() call in an ebtree with unique keys,
if the key is already present, instead of freeing the old entry and returning the
new one, the correct way is to free the new one and return the old one. For
this, the __dict_insert() function was folded into dict_insert() as this
significantly simplifies the test of duplicates.
Thanks to Olivier for having reported this bug which came with this one:
"MINOR: dict: Add dictionary new data structure".
There's no point in calling this on each and every thread since the first
thread passing there will enable the listeners, and the next ones will
simply scan all of them in turn to discover that they are already
initialized. Let's only initialize them on the first thread. This could
slightly speed up startup on very large configurations, even though most
of the time is still spent in the main thread binding the sockets.
A few measurements have constantly shown that this decreases the startup
time by ~0.1s for 150k listeners. Starting all of them in parallel doesn't
provide better results and can still expose some undesired races.
Since commit 6ec902a ("MINOR: threads: serialize threads initialization")
we now serialize threads initialization. But doing so has emphasized another
race which is that some threads may actually start the loop before others
are done initializing.
As soon as all threads enter the first thread_release() call, their rdv
bit is cleared and they're all waiting for all others' rdv to be cleared
as well, with their harmless bit set. The first one to notice the cleared
mask will progress through thread_isolate(), take rdv again preventing
most others from noticing its short pass to zero, and this first one will
be able to run all the way through the initialization till the last call
to thread_release() which it happily crosses, being the only one with the
rdv bit, leaving the room for one or a few others to do the same. This
results in some threads entering the loop before others are done with
their initialization, which is particularly bad. PiBa-NL reported that
some regtests fail for him due to this (which was impossible to reproduce
here, but races are racy by definition). However placing some printf()
in the initialization code definitely shows this unsynchronized startup.
This patch takes a different approach in three steps :
- first, we don't start with thread_release() anymore and we don't
set the rdv mask anymore in the main call. This was initially done
to let all threads start together, which we don't want. Instead
we just start with thread_isolate(). Since all threads are harmful
by default, they all wait for each other's readiness before starting.
- second, we don't release with thread_release() but with
thread_sync_release(), meaning that we don't leave the function until
other ones have reached the point in the function where they decide
to leave it as well.
- third, it makes sure we don't start the listeners using
protocol_enable_all() before all threads have allocated their local
FD tables or have initialized their pollers, otherwise startup could
be racy as well. It's worth noting that it is even possible to limit
this call to thread #0 as it only needs to be performed once.
This now guarantees that all thread init calls start only after all threads
are ready, and that no thread enters the polling loop before all others have
completed their initialization.
Please check GH issues #111 and #117 for more context.
No backport is needed, though if some new init races are reported in
1.9 (or even 1.8) which do not affect 2.0, then it may make sense to
carefully backport this small series.
This function provides an alternate way to leave a critical section run
under thread_isolate(). Currently, a thread may remain in thread_release()
without having the time to notice that the rdv mask was released and taken
again by another thread entering thread_isolate() (often the same that just
released it). This is because threads wait in harmless mode in the loop,
which is compatible with the conditions to enter thread_isolate(). It's
not possible to make them wait with the harmless bit off or we cannot know
when the job is finished for the next thread to start in thread_isolate(),
and if we don't clear the rdv bit when going there, we create another
race on the start point of thread_isolate().
This new synchronous variant of thread_release() makes use of an extra
mask to indicate the threads that want to be synchronously released. In
this case, they will be marked harmless before releasing their sync bit,
and will wait for others to release their bit as well, guaranteeing that
thread_isolate() cannot be started by any of them before they all left
thread_sync_release(). This makes it possible to construct synchronized
blocks like this :
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
/* do something together here */
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
And so on. This is particularly useful during initialization where several
steps have to be respected and no thread must start a step before the
previous one is completed by other threads.
This one must not be placed after any call to thread_release() or it would
risk blocking an earlier call to thread_isolate() which the current thread
managed to leave without waiting for others to complete, and end up here
with the thread's harmless bit cleared, blocking others. This might be
improved in the future.
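A minimal sketch of the property thread_sync_release() provides, expressed as a plain counting barrier instead of the actual rdv/harmless masks :

#include <pthread.h>

/* Illustration only: nobody returns from sync_point() before every thread
 * has entered it; the barrier is reusable across successive rounds. */
static pthread_mutex_t sp_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  sp_cond = PTHREAD_COND_INITIALIZER;
static int sp_count, sp_round;

static void sync_point(int nbthread)
{
        int round;

        pthread_mutex_lock(&sp_lock);
        round = sp_round;
        if (++sp_count == nbthread) {      /* last one in: open the gate */
                sp_count = 0;
                sp_round++;
                pthread_cond_broadcast(&sp_cond);
        } else {
                while (round == sp_round)  /* sleep until the gate opens */
                        pthread_cond_wait(&sp_cond, &sp_lock);
        }
        pthread_mutex_unlock(&sp_lock);
}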
thread_release() is to be called after thread_isolate(), i.e. when the
thread already has its harmless bit cleared. No need to clear it twice,
thus avoid calling thread_harmless_end() and directly check the rdv
bits then loop on them.
In si_cs_recv(), if we got the CS_FL_EOI flag on the conn_stream, make sure
we return 1, so that si_cs_process() will be called, and wake
process_stream() up, otherwise if we're unlucky the flag will never be
noticed, and the stream won't be woken up.
In h1_release(), when we want to upgrade the mux to h2, make sure we set
h1c->ibuf to BUF_NULL before calling conn_upgrade_mux_fe().
If the upgrade is successful, the buffer will be provided to the new mux,
h1_release() will be called recursively, it will so try to free h1c->ibuf,
and freeing the buffer we just provided to the new mux would be unfortunate.
As reported in GH issue #109 and in discourse issue
https://discourse.haproxy.org/t/haproxy-returns-408-or-504-error-when-timeout-client-value-is-every-25d
the time parser doesn't report errors on overflows or underflows. This is a
recurring problem which additionally has the bad taste of taking a long
time before hitting the user.
This patch makes parse_time_err() return special error codes for overflows
and underflows, and adds the control in the call places to report suitable
errors depending on the requested unit. In practice, underflows are almost
never returned as the parsing function takes care of rounding values up,
though this might still happen on 64-bit overflows returning exactly zero
after rounding. It is not really possible to cut the patch into
pieces as it changes the function's API, hence all callers.
Tests were run on about every relevant part (cookie maxlife/maxidle,
server inter, stats timeout, timeout*, cli's set timeout command,
tcp-request/response inspect-delay).
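A sketch of the kind of check this implies (illustrative only; the real parse_time_err() also deals with units and rounding) :

#include <limits.h>

enum parse_rc { PARSE_OK, PARSE_OVERFLOW, PARSE_BAD_CHAR };

/* Accumulate a decimal value and report overflow instead of silently
 * wrapping, which is the class of problem addressed by the patch. */
static enum parse_rc parse_uint_nowrap(const char *s, unsigned int *out)
{
        unsigned int v = 0;

        for (; *s; s++) {
                if (*s < '0' || *s > '9')
                        return PARSE_BAD_CHAR;
                if (v > (UINT_MAX - (unsigned int)(*s - '0')) / 10)
                        return PARSE_OVERFLOW;   /* would wrap: report it */
                v = v * 10 + (unsigned int)(*s - '0');
        }
        *out = v;
        return PARSE_OK;
}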
When we look up a dictionary entry in the cache used upon transmission,
we store the last result in ->prev_lookup of struct dcache_tx, so that it
can be compared with subsequent lookups to save some processing.
When a server name is cached we only send its cache entry ID which has
an encoded length of 1 (because it is smaller than PEER_ENC_2BYTES_MIN).
So, in this case, we only have to encode 1, the already known encoded length
of this ID, before encoding the ID itself.
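Sketched very roughly (this is not the actual peers wire encoding), the optimization amounts to :

/* Illustrative only: when the ID is known to fit in a single encoded byte
 * (below the 2-byte threshold), the length prefix is the constant 1 and no
 * generic length computation is needed. */
static unsigned char *send_cached_id(unsigned char *p, unsigned int id)
{
        *p++ = 1;                  /* length of the encoded ID: always 1 here */
        *p++ = (unsigned char)id;  /* the ID itself, below PEER_ENC_2BYTES_MIN */
        return p;
}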
Furthermore we do not have to call strlen() to compute the lengths of server
name strings thanks to this commit: "MINOR: dict: Store the length of the
dictionary entries".
When allocating new dictionary entries we store the length of the strings.
This may be useful to avoid having to call strlen() too often at run
time.
Add information about how the peers protocol sends/receives entries of
LRU caches for literal dictionaries (e.g. server names in replacement
for server IDs).
We store pointers to server names dictionary entries in a pre-allocated array of
ebpt_node's (->entries member of struct dcache_tx) to cache those sent to remote
peers. Consequently the ID used to identify the server name dictionary entry is
also used as index for this array. There is no need to implement a lookup by key
for this dictionary cache.
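A simplified stand-in for these structures :

#include <stddef.h>

/* Simplified sketch: the entry ID doubles as the index into a pre-allocated
 * array, so no key-based lookup is needed on the TX side. */
struct dcache_tx_sketch {
        void **entries;      /* entries[id] -> cached dictionary entry   */
        size_t nb_entries;   /* size of the pre-allocated array          */
        void *prev_lookup;   /* last result, reused for repeated lookups */
};

static void *dcache_tx_get(const struct dcache_tx_sketch *c, size_t id)
{
        return id < c->nb_entries ? c->entries[id] : NULL;
}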
There is no point in initializing threads in parallel when we know that
it's the moment where some global variables are turned to thread-local
ones, and/or that some global variables are updated (like global_now or
trash_size). Some FDs might be created/destroyed/reallocated and could
be tricky to follow as well (think about epoll_fd for example).
Instead of having to be extremely careful about all these, and to trigger
false positives in thread sanitizers, let's simply initialize one thread
at a time. The init step is very fast so nobody should even notice, and
we won't have any more doubts about what might have happened when
analysing a dump.
See GH issues #111 and #117 for some background on this.
This reverts commit 1c3b83242d.
It was made only to silence the thread sanitizer but ends up creating a
bug. Indeed, if "tune.bufsize" is in the global section, the trash_size
value is not updated anymore and the trash becomes smaller than a buffer!
Let's stop trying to fix the thread sanitizer reports, they are invalid,
and trying to fix them actually introduces bugs where there were none.
See GH issue #117 for more context. No backport is needed.
The trash_size variable is shared by all threads, and is set by all threads,
when alloc_trash_buffers() is called. To make sure it's set only once,
to silence a harmless data race, use a CAS to set it, and only set it if it
was 0.
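The set-once pattern this (later reverted) patch relied on, sketched with a gcc builtin instead of HAProxy's atomic wrappers :

/* Illustration only: write the shared value only if it is still 0, so that
 * concurrent callers cannot set it several times. */
static int shared_size;

static void set_size_once(int size)
{
        int expected = 0;

        __atomic_compare_exchange_n(&shared_size, &expected, size, 0,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}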
The fd_sets we've been using in the log encoding functions are not portable
and were shown to break at least under Cygwin. This patch gets rid of them
in favor of the new bitmap functions. It was verified with the config below
that the log output was exactly the same before and after the change :
defaults
mode http
option httplog
log stdout local0
timeout client 1s
timeout server 1s
timeout connect 1s
frontend foo
bind :8001
capture request header chars len 255
backend bar
option httpchk "GET" "/" "HTTP/1.0\r\nchars: \x01\x02\x03\x04\x05\x06\x07\x08\x09\x0b\x0c\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
server foo 127.0.0.1:8001 check
We now have ha_bit_{set,clr,flip,test} to manipulate bitfields made
of arrays of longs. The goal is to get rid of the remaining non-portable
FD_{SET,CLR,ISSET} that still exist at a few places.
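Simplified illustrations of what such helpers boil down to (not the actual ha_bit_* implementations) :

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static inline void bit_set(unsigned long bit, unsigned long *map)
{
        map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

static inline void bit_clr(unsigned long bit, unsigned long *map)
{
        map[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
}

static inline int bit_test(unsigned long bit, const unsigned long *map)
{
        return !!(map[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)));
}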
There seems to be a tricky case in the H2 mux related to stream flow
control versus a buffer full situation : if a large response cannot
be entirely sent to the client due to the stream window being too
small, the stream is paused with the SFCTL flag. Then the upper
layer stream might get bored and expire this stream. It will then
shut it down first. But the shutdown operation might fail if the
mux buffer is full, resulting in the h2s being subscribed to the
deferred_shut event with the stream *not* added to the send_list
since it's blocked in SFCTL. In the mean time the upper layer completely
closes, calling h2_detach(). There we have a send_wait (the pending
shutw), the stream is marked with SFCTL so we orphan it.
Then if the client finally reads all the data that were clogging the
buffer, the send_list is run again, but our stream is not there. From
this point, the connection's stream list is not empty, the mux buffer
is empty, so the connection's timeout is not set. If the client
disappears without updating the stream's window, nothing will expire
the connection.
This patch makes sure we always keep the connection timeout updated.
There might be finer solutions, such as checking that there are still
living streams in the connection (i.e. streams not blocked in SFCTL
state), though this is not necessarily trivial nor useful, since the
client timeout is the same for the upper level stream and the connection
anyway.
This patch needs to be backported to 1.9 and 1.8 after some observation.
Released version 2.0-dev6 with the following main changes :
- BUG/MEDIUM: connection: fix multiple handshake polling issues
- MINOR: connection: also stop receiving after a SOCKS4 response
- MINOR: mux-h1: don't try to recv() before the connection is ready
- BUG/MEDIUM: mux-h1: only check input data for the current stream, not next one
- MEDIUM: mux-h1: don't use CS_FL_REOS anymore
- CLEANUP: connection: remove the now unused CS_FL_REOS flag
- CONTRIB: debug: add 4 missing connection/conn_stream flags
- MEDIUM: stream: make a full process_stream() loop when completing I/O on exit
- MINOR: server: increase the default pool-purge-delay to 5 seconds
- BUILD: tools: do not use the weak attribute for trace() on obsolete linkers
- BUG/MEDIUM: vars: make sure the scope is always valid when accessing vars
- BUG/MEDIUM: vars: make the tcp/http unset-var() action support conditions
- BUILD: task: fix a build warning when threads are disabled
- CLEANUP: peers: Remove tabs characters.
- CLEANUP: peers: Replace hard-coded values by macros.
- BUG/MINOR: peers: Wrong stick-table update message building.
- MINOR: dict: Add dictionary new data structure.
- MINOR: peers: Add a LRU cache implementation for dictionaries.
- MINOR: stick-table: Add "server_name" new data type.
- MINOR: cfgparse: Space allocation for "server_name" stick-table data type.
- MINOR: proxy: Add a "server by name" tree to proxy.
- MINOR: server: Add a dictionary for server names.
- MINOR: stream: Stickiness server lookup by name.
- MINOR: peers: Make peers protocol support new "server_name" data type.
- MINOR: stick-table: Make the CLI stick-table handler support dictionary entry data type.
- REGTEST: Add a basic server by name stickiness reg test.
- MINOR: peers: Add dictionary cache information to "show peers" CLI command.
- MINOR: peers: Replace hard-coded for peer protocol 64-bits value encoding by macros.
- MINOR: peers: Replace hard-coded values for peer protocol messaging by macros.
- CLEANUP: ssl: remove unneeded defined(OPENSSL_IS_BORINGSSL)
- BUILD: travis-ci improvements
- MINOR: SSL: add client/server random sample fetches
- BUG/MINOR: channel/htx: Don't alter channel during forward for empty HTX message
- BUG/MINOR: contrib/prometheus-exporter: Add HTX data block in one time
- BUG/MINOR: mux-h1: errflag must be set on H1S and not H1M during output processing
- MEDIUM: mux-h1: refactor output processing
- MINOR: mux-h1: Add the flag HAVE_O_CONN on h1s
- MINOR: mux-h1: Add h1_eval_htx_hdrs_size() to estimate size of the HTX headers
- MINOR: mux-h1: Don't count the EOM in the estimated size of headers
- MEDIUM: cache/htx: Always store info about HTX blocks in the cache
- MEDIUM: htx: Add the parsing of trailers of chunked messages
- MINOR: htx: Don't use end-of-data blocks anymore
- BUG/MINOR: mux-h1: Don't send more data than expected
- BUG/MINOR: flt_trace/htx: Only apply the random forwarding on the message body.
- BUG/MINOR: peers: Wrong "server_name" decoding.
- BUG/MEDIUM: servers: Don't attempt to destroy idle connections if disabled.
- MEDIUM: checks: Make sure we unsubscribe before calling cs_destroy().
- MEDIUM: connections: Wake the upper layer even if sending/receiving is disabled.
- MEDIUM: ssl: Handle subscribe by itself.
- MINOR: ssl: Make ssl_sock_handshake() static.
- MINOR: connections: Add a new xprt method, remove_xprt.
- MINOR: connections: Add a new xprt method, add_xprt().
- MEDIUM: connections: Introduce a handshake pseudo-XPRT.
- MEDIUM: connections: Remove CONN_FL_SOCK*
- BUG/MEDIUM: ssl: Don't forget to initialize ctx->send_recv and ctx->recv_wait.
- BUG/MINOR: peers: Wrong server name parsing.
- MINOR: server: really increase the pool-purge-delay default to 5 seconds
- BUG/MINOR: stream: don't emit a send-name-header in conn error or disconnect states
- MINOR: stream-int: use bit fields to match multiple stream-int states at once
- MEDIUM: stream-int: remove dangerous interval checks for stream-int states
- MEDIUM: stream-int: introduce a new state SI_ST_RDY
- MAJOR: stream-int: switch from SI_ST_CON to SI_ST_RDY on I/O
- MEDIUM: stream-int: make idle-conns switch to ST_RDY
- MEDIUM: stream: re-arrange the connection setup status reporting
- MINOR: stream-int: split si_update() into si_update_rx() and si_update_tx()
- MINOR: stream-int: make si_sync_send() from the send code of si_update_both()
- MEDIUM: stream: rearrange the events to remove the loop
- MEDIUM: stream: only loop on flags relevant to the analysers
- MEDIUM: stream: don't abusively loop back on changes on CF_SHUT*_NOW
- BUILD: stream-int: avoid a build warning in dev mode in si_state_bit()
- BUILD: peers: fix a build warning about an incorrect initialization
- BUG/MINOR: time: make sure only one thread sets global_now at boot
- BUG/MEDIUM: tcp: Make sure we keep the polling consistent in tcp_probe_connect.