This bug lies in an unexpected clause of the switch..case inside
h1_process_output(): the wrong structure was used to set the error flag.
This patch must be backported to 1.9.
Since the recent changes on the way HTX data blocks are added to an HTX
message, we must now be sure the prometheus service adds its own blocks all at
once. Indeed, the function htx_add_data() may now decide to copy only a part
of the data. So we must call htx_add_data_atonce() instead.
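As an illustration, here is a minimal hedged sketch of why the all-at-once
variant matters (promex_put_block() is a hypothetical name, not the actual
exporter code):

  /* htx_add_data() may now copy only a part of <chunk> (it returns the
   * number of bytes it actually copied), so a service which must emit its
   * block in one piece has to use htx_add_data_atonce(), which adds the
   * whole block or nothing.
   */
  static int promex_put_block(struct htx *htx, const struct ist chunk)
  {
      if (!htx_add_data_atonce(htx, chunk))
          return 0; /* not enough contiguous room: stop and retry later */
      return 1;
  }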
In channel_htx_forward() and channel_htx_forward_forever(), if the HTX message
is empty, the underlying buffer may really be empty too. And we have no
guarantee the caller will call htx_to_buf() later; in practice, it is almost
never done. So the channel's buffer must not be altered. Otherwise, the buffer
may be considered full (data == size) for an empty HTX message with no
outgoing data.
This patch must be backported to 1.9.
This adds 4 sample fetches:
- ssl_fc_client_random
- ssl_fc_server_random
- ssl_bc_client_random
- ssl_bc_server_random
These fetches retrieve the client or server random value sent during the
handshake.
They are used to decrypt traffic sent using ephemeral ciphers. Tools
like Wireshark expect a TLS log file with lines in a few known formats
(https://code.wireshark.org/review/gitweb?p=wireshark.git;a=blob;f=epan/dissectors/packet-tls-utils.c;h=28a51fb1fb029eae5cea52d37ff5b67d9b11950f;hb=HEAD#l5209).
Previously the only format supported using data retrievable from HAProxy state
was the one utilizing the Session-ID. However an SSL/TLS session ID is
optional, and thus cannot be relied upon for this purpose.
This change introduces the ability to extract the client random instead which
can be used for one of the other formats. The change also adds the ability to
extract the server random, just in case it might have some other use, as the
code change to support this was trivial.
full list:
update LibreSSL to 2.9.2
speed up build by using "make -j3"
cache BoringSSL checkout
build prometheus exporter
add basic cygwin build
add USE_TFO=1, USE_SYSTEMD=1 to linux builds
With this patch we define macros for the minimum values which are
encoded with 2 up to 10 bytes, the latter being big enough to encode
UINT64_MAX. We replaced in several places the hard-coded value 240 by
PEER_ENC_2BYTES_MIN, the minimum value encoded with 2 bytes. The peer protocol
encoding consists in encoding with a single byte any value lower than
PEER_ENC_2BYTES_MIN, and with at least 2 bytes any 64-bit value greater than
or equal to PEER_ENC_2BYTES_MIN.
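For the record, a minimal sketch of this encoding scheme (illustrative only,
not the exact encoding code from peers.c):

  #include <stdint.h>

  #define PEER_ENC_2BYTES_MIN  0xf0  /* 240: first value needing 2 bytes */

  /* Encode <v>: values below PEER_ENC_2BYTES_MIN fit in a single byte;
   * larger values start with a byte whose 4 high bits are set, then
   * continue with 7-bit groups whose MSB says another byte follows.
   * Returns the number of bytes written to <out>.
   */
  static int peer_encode_uint(uint64_t v, unsigned char *out)
  {
      int len = 0;

      if (v < PEER_ENC_2BYTES_MIN) {
          out[len++] = v;
          return len;
      }
      out[len++] = (unsigned char)v | PEER_ENC_2BYTES_MIN;
      v = (v - PEER_ENC_2BYTES_MIN) >> 4;
      while (v >= 0x80) {
          out[len++] = (unsigned char)v | 0x80;
          v = (v - 0x80) >> 7;
      }
      out[len++] = v;
      return len;
  }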
With this new reg test we ensure that stickiness by server name works between
backends whose servers are organized differently (identical names, but
different IDs) across two haproxy processes.
Make use of the APIs implemented for dictionaries (dict.c) and their LRU
caches (struct dcache) to send/receive the server names used for stickiness by
server name. These names are sent over the network as follows:
- in every case we send the encoded length of the data (STD_T_DICT), then
- if the server name is not present in the cache used upon transmission
(struct dcache_tx), we cache it and send the ID of this TX cache entry,
followed by the encoded length of the server name, and finally the server name
itself (a non NULL-terminated string),
- if the server name is already present in that cache, we only send the ID of
the TX cache entry.
Upon receipt, the (cache ID, server name) pair is stored in the LRU cache used
only upon receipt (struct dcache_rx). As the peers protocol is symmetrical,
the presence (or absence) of the server name in the received data tells the
receiver whether the entry is new (or already known).
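For illustration only, a rough sketch of the TX side logic; dcache_tx_lookup(),
dcache_tx_insert() and encode_len() are hypothetical names, not the actual
peers.c API:

  /* Append the server name <name> of <name_len> bytes to an update message.
   * The encoded length of the whole STD_T_DICT data is assumed to have been
   * emitted before calling this.
   */
  static unsigned char *send_srv_name(unsigned char *msg, struct dcache *dc,
                                      const char *name, size_t name_len)
  {
      unsigned int id;

      if (dcache_tx_lookup(dc, name, name_len, &id)) {
          /* already sent to this peer: only the TX cache entry ID */
          msg = encode_len(msg, id);
      }
      else {
          /* first transmission: cache it, then send the cache entry ID,
           * the encoded length of the name and the name itself
           * (not NUL-terminated)
           */
          id  = dcache_tx_insert(dc, name, name_len);
          msg = encode_len(msg, id);
          msg = encode_len(msg, name_len);
          memcpy(msg, name, name_len);
          msg += name_len;
      }
      return msg;
  }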
With this patch we modify the lookup behavior for stickiness server targets.
We first look up these server targets by name, then fall back to a lookup by
ID if not found. We also insert a dictionary entry for the name of each server
target and store the address of this entry in the underlying stick-table.
When parsing sticking rules, with this patch we reserve some room for the new
"server_name" stick-table data type, as is already done for "server_id",
setting the offset and space used (in bytes) in the stick-table entry thanks
to stktable_alloc_data_type().
This simple patch only adds definitions to create a new stick-table
data type ID and a new standard type to store information in relation
with dictionary entries (STD_T_DICT).
We want to send some stick-table data fields stored as strings in dictionaries
without consuming too much memory and CPU. To do so, this patch implements a
cache for sent/received dictionary entries. These dictionary-of-strings
entries are stored in the cache with an identifier as key (unsigned int) and a
pointer to the dictionary-of-strings entry as value.
This patch adds minimalistic definitions to implement a new dictionary data
structure, an ebtree of ebpt_node structs with strings as keys. Note that this
has nothing to do with a real dictionary data structure (a map of keys
associated with values).
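A hedged sketch of the shape of these definitions (field names are
approximate; this relies on the ebpttree.h and hathreads.h headers from the
tree):

  struct dict_entry {
      struct ebpt_node value;   /* node; value.key points to the string */
      unsigned int refcount;    /* number of users of this entry */
      size_t len;               /* length of the string */
  };

  struct dict {
      const char *name;         /* name of this dictionary */
      struct eb_root values;    /* tree of dict_entry, sorted by string */
      __decl_hathreads(HA_RWLOCK_T rwlock); /* protects lookups/insertions */
  };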
When creating the patch "CLEANUP: peers: Replace hard-coded values by macros",
we realized there was a remaining place in peer_prepare_updatemsg() where the
hard-coded value for the maximum encoded length could be replaced by the
PEER_MSG_ENCODED_LENGTH_MAXLEN macro. But in this case, the hard-coded value 1
for the header length is wrong: it should be 2, or PEER_MSG_HEADER_LEN. So one
byte was missing to encode the length of the data remaining after the header.
Note that the bug was never encountered because, even with a missing byte, we
could encode a maximum length of (1<<25) (32MB) according to the following
extract of the peers protocol documentation, which is by far a never-reached
limit I guess:
I) Encoded Integer and Bitfield.
0 <= X < 240 : 1 byte (7.875 bits) [ XXXX XXXX ]
240 <= X < 2288 : 2 bytes (11 bits) [ 1111 XXXX ] [ 0XXX XXXX ]
2288 <= X < 264432 : 3 bytes (18 bits) [ 1111 XXXX ] [ 1XXX XXXX ] [ 0XXX XXXX ]
264432 <= X < 33818864 : 4 bytes (25 bits) [ 1111 XXXX ] [ 1XXX XXXX ]*2 [ 0XXX XXXX ]
33818864 <= X < 4328786160 : 5 bytes (32 bits) [ 1111 XXXX ] [ 1XXX XXXX ]*3 [ 0XXX XXXX ]
All the peer stick-table messages are made of a 2-byte header
(PEER_MSG_HEADER_LEN) followed by the encoded length of the remaining data,
whose maximum size is hard-coded to 5 bytes (PEER_MSG_ENCODED_LENGTH_MAXLEN).
With such a length we can encode a maximum length of (1 << 32) - 1, which is
far more than enough.
This patch replaces both of these values by macros where applicable.
The __decl_hathreads() macro will leave a lone semi-colon at the end
of the variable declarations, resulting in a warning if threads are disabled.
Let's simply swap it with the last variable. Thanks to Ilya Shipitsin for
reporting this issue.
No backport is needed.
Patrick Hemmer reported that http-request unset-var(foo) if ... fails to
parse. The reason is that it reuses the same parser as "set-var(foo)" which
makes a special case of the arguments, supposed to be a sample expression
for set-var, but which must not exist for unset-var. Unfortunately the
parser finds "if" or "unless" and believes it's an expression. Let's simply
drop the test so that the outer rule parser deals with potential extraneous
keywords.
This should be backported to all versions supporting unset-var().
Patrick Hemmer reported that a simple tcp rule involving a variable like
this is enough to crash haproxy :
frontend foo
bind :8001
tcp-request session set-var(txn.foo) src
The tests on the variable scopes are not strict enough: we must always verify
that the stream is valid when accessing a req/res/txn variable. This patch
does so by adding a new get_vars() function which does the job instead of
open-coding all the lookups everywhere.
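A hedged sketch of what such a helper may look like (the process-scope
storage name is an assumption; the point is that req/res/txn scopes return
NULL when no stream exists instead of dereferencing it):

  static struct vars *get_vars(struct session *sess, struct stream *strm,
                               enum vars_scope scope)
  {
      switch (scope) {
      case SCOPE_PROC:
          return &proc_vars;                    /* assumed storage name */
      case SCOPE_SESS:
          return sess ? &sess->vars : NULL;
      case SCOPE_TXN:
          return strm ? &strm->vars_txn : NULL;
      case SCOPE_REQ:
      case SCOPE_RES:
      default:
          return strm ? &strm->vars_reqres : NULL;
      }
  }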
It must be backported to all versions supporting set-var and
"tcp-request session" so at least 1.9 and 1.8.
The default dummy trace() function is marked weak in order to be easily
replaced at link time. Some linkers are having issues with the weak
attribute, so let's not mark it on these linkers. They will simply not
be able to build with TRACE=1, which is no big deal since it's only used
by developers.
The default used to be a very aggressive delay of 1 second before starting
to purge idle connections, but tests show that with bursty traffic it's a
bit short. Let's increase this to 5 seconds.
During 1.9 development cycle a shortcut was made in process_stream() to
update the analysers immediately after an I/O event detected on the send()
path while leaving the function. In order to prevent this from being abused
by a single stream stealing all the CPU, the loop didn't cover the initial
recv() call, so that events ultimately converge.
This has caused a number of issues over time because the conditions to
decide to loop are a bit tricky. For example the CF_READ_PARTIAL flag is
not immediately removed from rqf_last and may appear for a long time at
this point, sometimes causing some loops to last a long time.
Another unexpected side effect is that all analysers are called again with
no data to process, just because CF_WRITE_PARTIAL is present. We cannot get
rid of this event even though it is of very rare use, because some analysers might
wait for some data to leave a buffer before proceeding. With a full loop,
this event would have been merged with a subsequent recv() allowing analysers
to do something more useful than just ack an event they don't care about.
While during early 1.9-dev it was very important to be kind with the
scheduler, nowadays it's lock-free for local tasks so this optimization
is much less interesting for I/Os, especially if we factor in
the trouble it causes.
This patch thus removes the use of the loop for regular I/Os and instead
performs a task_wakeup() with an I/O event so that the task will be
scheduled after all other ones and will have a chance to perform another
recv() and possibly to gather more I/O events to be processed at once.
Synchronous errors and transitions to SI_ST_DIS however are still handled
by the loop.
Doing so significantly reduces the average number of calls to analysers
(those are typically halved when compression is enabled in legacy mode),
and as a side benefit, has increased the H1 performance by about 1%.
This flag was already removed from other muxes and from the upper layers,
because it was misused. It indicates to the mux that the end of a stream
was already seen and is pending after existing data, but this should not
be on the conn_stream but internal to the mux.
This patch creates a new H1S flag H1S_F_REOS to replace it and uses it to
replace the last uses of CS_FL_REOS.
The mux-h1 doesn't properly propagate end of streams to the application
layer when requests are pipelined. This is visible by launching h2load
in h1 mode with -m greater than 1 : issuing Ctrl-C has no effect until
the client timeout expires.
The reason is that among the checks conditioning the reporting of the
end of stream status and waking up the streams, is a test on the presence
of remaining input data in the demux. But with pipelining, these data may
be present for another stream and should not prevent the end of stream
condition from being reported.
This patch addresses this issue by introducing a new function
"h1s_data_pending" which returns a boolean indicating if there are in
the demux buffer any data for the current stream. That is, if the stream
is in H1_MSG_DONE state, there are never any data for it. And if it's in
a different state, then the demux buffer is checked. This replaces the
tests on b_data(&h1c->ibuf) and correctly allows end of streams to be
reported at the end of requests.
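A sketch of the new helper (close to, but not necessarily identical to, the
committed code):

  static int h1s_data_pending(const struct h1s *h1s)
  {
      const struct h1m *h1m;

      h1m = conn_is_back(h1s->h1c->conn) ? &h1s->res : &h1s->req;
      if (h1m->state == H1_MSG_DONE)
          return 0;   /* remaining bytes belong to a pipelined message */

      return b_data(&h1s->h1c->ibuf);
  }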
It's worth noting that 1.9 doesn't suffer from this issue but it possibly
isn't completely immune either given that the same tests are present.
Just as we already do in h1_send(), if the connection is not yet ready,
do not proceed and instead subscribe. This avoids a needless recvfrom()
and subscription to polling for a case which will never work since the
request was not even sent.
Just as is done in previous patch for all handshake handlers,
also stop receiving after a SOCKS4 response was received. This
one escaped the previous cleanup but must be done to keep the
code safe.
Connection handshakes were rarely stacked on top of each other, but the
recent experiments consisting in sending PROXY over SOCKS4 revealed a
number of issues in these lower layers. First, each handler waiting for
data MUST subscribe to recv events with __conn_sock_want_recv() and MUST
unsubscribe from send events using __conn_sock_stop_send() to avoid any
wake-up loop in case a previous sender has set this. Second, each handler
waiting for sending MUST subscribe to send events with __conn_sock_want_send()
and MUST unsubscribe from recv events using __conn_sock_stop_recv() to
avoid any wake-up loop in case some data are available on the connection.
Till now this was done at various random places, and in particular the
cases where the FD was not ready for recv forgot to re-enable reading.
Second, while senders can happily use conn_sock_send() which automatically
handles EINTR, loops, and marks the FD as not ready with fd_cant_send(),
there is no equivalent for recv, so receivers facing EAGAIN MUST call
fd_cant_recv() to enable polling. It could be argued that implementing
an equivalent conn_sock_recv() function could be useful and more
future-proof than the current situation.
Third, both types of handlers MUST unsubscribe from their respective
events once they managed to do their job, and none may even play with
__conn_xprt_*(). Here again this was lacking, and one surprising call
to __conn_xprt_stop_recv() was present in the proxy protocol parser
for TCP6 messages!
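For the record, the expected pattern for a handler waiting for data looks
roughly like this (an illustrative sequence only, the send side being
symmetrical; hs_wait_for_reply() and its buffer are made up for the example):

  static int hs_wait_for_reply(struct connection *conn)
  {
      char buf[256];
      ssize_t ret;

      /* only wait for recv events; clear any send subscription left over */
      __conn_sock_want_recv(conn);
      __conn_sock_stop_send(conn);

      ret = recv(conn->handle.fd, buf, sizeof(buf), 0);
      if (ret < 0) {
          if (errno == EINTR)
              return 0;                      /* try again on next wake-up */
          if (errno == EAGAIN) {
              fd_cant_recv(conn->handle.fd); /* enable polling for reads */
              return 0;
          }
          conn->flags |= CO_FL_ERROR;
          return 0;
      }

      /* ... parse the reply ... */

      /* job done: unsubscribe from our own events, never touch
       * __conn_xprt_*() */
      __conn_sock_stop_recv(conn);
      return 1;
  }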
Thanks to Alexander Liu for his help on this issue.
This patch must be backported to 1.9 and possibly some older versions,
though the SOCKS parts should be dropped.
Released version 2.0-dev5 with the following main changes :
- BUILD: watchdog: use si_value.sival_int, not si_int for the timer's value
- BUILD: signals: FreeBSD has SI_LWP instead of SI_TKILL
- BUILD: watchdog: condition it to USE_RT
- MINOR: raw_sock: report global traffic statistics
- MINOR: stats: report the global output bit rate in human readable form
- BUG/MINOR: proto-htx: Try to keep connections alive on redirect
- BUG/MEDIUM: spoe: Don't use the SPOE applet after releasing it
- BUG/MINOR: lua: Set right direction and flags on new HTTP objects
- BUG/MINOR: mux-h2: Count EOM in bytes sent when a HEADERS frame is formatted
- BUG/MINOR: mux-h1: Report EOI instead EOS on parsing error or H2 upgrade
- BUG/MEDIUM: proto-htx: Not forward too much data when 1xx responses are handled
- BUG/MINOR: htx: Remove a forgotten while loop in htx_defrag()
- DOC: fix typos
- BUG/MINOR: ssl_sock: Fix memory leak when disabling compression
- OPTIM: freq-ctr: don't take the date lock for most updates
- MEDIUM: mux-h2: avoid doing expensive buffer realigns when not absolutely needed
- CLEANUP: debug: remove the TRACE() macro
- MINOR: buffer: introduce b_make() to make a buffer from its parameters
- MINOR: buffer: add a new buffer ring API to manipulate rings of buffers
- MEDIUM: mux-h2: replace all occurrences of mbuf with a buffer ring
- MEDIUM: mux-h2: make the conditions to send based on mbuf, not just its tail
- MINOR: mux-h2: introduce h2_release_mbuf() to release all buffers in the mbuf ring
- MEDIUM: mux-h2: make the send() function iterate over all mux buffers
- CLEANUP: mux-h2: consistently use a local variable for the mbuf
- MINOR: mux-h2: report the mbuf's head and tail in "show fd"
- MAJOR: mux-h2: switch to next mux buffer on buffer full condition.
- BUILD: connections: shut up gcc about impossible out-of-bounds warning
- BUILD: ssl: fix latest LibreSSL reg-test error
- MINOR: cli/activity: remove "fd_del" and "fd_skip" from show activity
- MINOR: cli/activity: add 3 general purpose counters in development mode
- BUG/MAJOR: lb/threads: make sure the avoided server is not full on second pass
- BUG/MEDIUM: queue: fix the tree walk in pendconn_redistribute.
- BUG/MEDIUM: threads: fix double-word CAS on non-optimized 32-bit platforms
- MEDIUM: config: now alert when two servers have the same name
- MINOR: htx: Remove the macro IS_HTX_SMP() and always use IS_HTX_STRM() instead
- MINOR: htx: Move the macro IS_HTX_STRM() in proto/stream.h
- MINOR: htx: Store the head position instead of the wrap one
- MINOR: htx: Store start-line block's position instead of address of its payload
- MINOR: htx: Add functions to get the first block of an HTX message
- MINOR: mux-h2/htx: Get the start-line from the head when HEADERS frame is built
- MINOR: htx: Replace the function http_find_stline() by http_get_stline()
- CLEANUP: htx: Remove unused function htx_get_stline()
- MINOR: http/htx: Use sl_pos directly to replace the start-line
- MEDIUM: http/htx: Perform analysis relatively to the first block
- MINOR: channel/htx: Call channel_htx_recv_max() from channel_recv_max()
- MINOR: htx: Add function htx_get_max_blksz()
- BUG/MINOR: htx: Change htx_xfer_blk() to also count metadata
- MEDIUM: mux-h1: Use the count value received from the SI in h1_rcv_buf()
- MINOR: mux-h2: Use the count value received from the SI in h2_rcv_buf()
- MINOR: stream-int: Don't use the flag CO_RFL_KEEP_RSV anymore in si_cs_recv()
- MINOR: connection: Remove the unused flag CO_RFL_KEEP_RSV
- MINOR: mux-h2/htx: Support zero-copy when possible in h2_rcv_buf()
- MINOR: htx: Add a field to set the memory used by headers in the HTX start-line
- MINOR: h2/htx: Set hdrs_bytes on the SL when an HTX message is produced
- MINOR: mux-h1: Set hdrs_bytes on the SL when an HTX message is produced
- MINOR: htx: Be sure to xfer all headers in one time in htx_xfer_blks()
- MEDIUM: htx: 1xx messages are now part of the final responses
- MINOR: channel/htx: Add function to forward headers of an HTX message
- MINOR: filters/htx: Use channel_htx_fwd_headers() after headers filtering
- MINOR: proto-htx: Use channel_htx_fwd_headers() to forward 1xx responses
- MEDIUM: htx: Store the first block position instead of the start-line one
- MINOR: stats/htx: don't use the first block position but the head one
- MINOR: channel/htx: Add functions to forward a part or all HTX payload
- MINOR: proto-htx: Use channel_htx_fwd_all() when unfiltered body are forwarded
- MEDIUM: filters/htx: Filter body relatively to the first block
- MINOR: htx: Optimize htx_drain() when all data are drained
- MINOR: htx: don't rely on htx_find_blk() anymore in the function htx_truncate()
- MINOR: htx: remove the unused function htx_find_blk()
- MINOR: htx: Remove support of pseudo headers because it is unused
- BUG/MEDIUM: http: fix "http-request reject" when not final
- MINOR: ssl: Make sure the underlying xprt's init method doesn't fail.
- MINOR: ssl: Don't forget to call the close method of the underlying xprt.
- MINOR: htx: rename htx_append_blk_value() to htx_add_data_atonce()
- MINOR: htx: make htx_add_data() return the transmitted byte count
- MEDIUM: htx: make htx_add_data() never defragment the buffer
- MINOR: activity: write totals on the "show activity" output
- MINOR: activity: report totals and average separately
- MEDIUM: poller: separate the wait time from the wake events
- MINOR: activity: report the number of failed pool/buffer allocations
- MEDIUM: buffers: relax the buffer lock a little bit
- MINOR: task: turn the WQ lock to an RW_LOCK
- MEDIUM: task: don't grab the WR lock just to check the WQ
- BUG/MEDIUM: mux-h1: Don't skip the TCP splicing when there is no more data to read
- MEDIUM: sessions: Introduce session flags.
- BUG/MEDIUM: h2: Don't forget to set h2s->cs to NULL after having free'd cs.
- BUG/MEDIUM: mux-h2: fix the conditions to end the h2_send() loop
- BUG/MEDIUM: mux-h2: don't refrain from offering oneself a used buffer
- BUG/MEDIUM: connection: Use the session to get the origin address if needed.
- MEDIUM: tasks: Get rid of active_tasks_mask.
- MEDIUM: connection: Upstream SOCKS4 proxy support
- BUILD: contrib/prometheus: fix build breakage caused by move of idle_pct
- BUG/MINOR: deinit/threads: make hard-stop-after perform a clean exit
As reported in GH issue #99, when hard-stop-after triggers and threads
are in use, the chance that any thread releases the resources in use by
the other ones is non-null. Thus no thread should be allowed to deinit()
nor exit by itself.
Here we take a different approach. We simply use a 3rd possible value
for the "killed" variable so that all threads know they must break out
of the run-poll-loop and immediately stop.
This patch was tested by commenting the stream_shutdown() calls in
hard_stop() to increase the chances to see a stream use released
resources. With this fix applied, it never crashes anymore.
This fix should be backported to 1.9 and 1.8.
The idle_pct thread-local variable was moved to struct thread_info by
commit 81036f2 ("MINOR: time: move the cpu, mono, and idle time to
thread_info") but not updated in service-prometheus.c, thus breaking
it.
No backport is needed. This fixes GH issue #110.
Have "socks4" and "check-via-socks4" server keyword added.
Implement handshake with SOCKS4 proxy server for tcp stream connection.
See issue #82.
I have the "SOCKS: A protocol for TCP proxy across firewalls" doc found
at "https://www.openssh.com/txt/socks4.protocol". Please reference to it.
[wt: for now connecting to the SOCKS4 proxy over unix sockets is not
supported, and mixing IPv4/IPv6 is discouraged; indeed, the control
layer is unique for a connection and will be used both for connecting
and for target address manipulation. As such it may for example report
incorrect destination addresses in logs if the proxy is reached over
IPv6]
Remove the active_tasks_mask variable: we can deduce whether we have work to
do by other means, and it is costly to maintain. Instead, introduce a new
function, thread_has_tasks(), that returns non-zero if there are tasks
scheduled for the thread, zero otherwise.
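A hedged sketch of the helper (field names approximate):

  static inline int thread_has_tasks(void)
  {
      return (!!(global_tasks_mask & tid_bit) |       /* shared run queue */
              !!task_per_thread[tid].rqueue_size |    /* private run queue */
              !!task_per_thread[tid].task_list_size); /* tasklet list */
  }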
In conn_si_send_proxy(), if we don't have a conn_stream yet, because the mux
won't be created until the SSL handshake is done, retrieve the opposite's
connection from the session. At this point, we know the session associated
with the connection is the one that initiated it, and we can thus just use
the session's origin.
This should be backported to 1.9.
Usually when calling offer_buffer(), we don't expect to offer it to
ourselves. But with h2 we have the same buffer_wait for the two directions
so we can unblock the recv path when completing a send(), or we can unblock
part of the mux buffer after sending the first few buffers that we managed
to collect. Thus it is important to always accept to wake up any requester.
A few parts of this patch could possibly be backported but earlier versions
already have other issues related to low-buffer condition so it's not sure
it's worth taking the risk to make things worse.
The test for the mux alloc failure in h2_send() right after an attempt
at h2_process_mux() used to make sense as it tried to detect that the
latter failed to produce data. But now that we have a list of buffers,
it is a perfectly valid situation where there can still be data in the
buffer(s).
So now when we see this flag we only declare it's the last run of the
loop. In addition we need to make sure we break out of the loop on
snd_buf failure, or we'll loop indefinitely, for example when the buf
is full and we can't send.
No backport is needed.
In h2c_frt_stream_new, if we failed to create the stream for some reason,
don't forget to set h2s->cs to NULL before calling h2s_destroy(), otherwise
h2s_destroy() will call h2s_close(), which will attempt to access
h2s->cs->flags if it's non-NULL.
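A simplified, hedged sketch of the fixed error path (names abbreviated, only
the ordering matters):

  if (!stream_new(sess, &cs->obj_type)) {
      h2s->cs = NULL;  /* <cs> is about to be freed: don't let h2s_close(),
                        * called from h2s_destroy(), dereference it */
      h2s_destroy(h2s);
      cs_free(cs);
      goto out;
  }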
This should be backported to 1.9.
Add session flags, and add a new flag, SESS_FL_PREFER_LAST, to be set when
we use NTLM authentication, and we should reuse the last connection. This
should fix using NTLM with HTX. This totally replaces TX_PREFER_LAST.
This should be backported to 1.9.
When there is no more data to read (h1m->curr_len == 0 in the state
H1_MSG_DATA), we still call the xprt->rcv_pipe() callback. It is important to
update the connection's flags, especially to remove the CO_FL_WAIT_ROOM flag.
Otherwise, the pipe remains marked as full, preventing the stream-interface
from falling back to rcv_buf(). So the connection may freeze because no more
data are received and the H1 mux remains blocked in the H1_MSG_DATA state.
This patch must be backported to 1.9.
When profiling locks, it appears that the WQ's lock has become the most
contended one, despite the WQ being split by thread. The reason is that
each thread takes the WQ lock before checking whether it has something
to do. In practice the WQ almost only contains health checks and rare tasks
that can be scheduled anywhere, so this is a real waste of resources.
This patch proceeds differently. Now that the WQ's lock was turned into an RW
lock, we proceed in 3 phases (sketched below):
1) locklessly check for the queue's emptiness
2) take an R lock to retrieve the first element and check if it is
expired. This way most visits are performed with an R lock to find
and return the next expiration date.
3) if one expiration is found, we perform the WR-locked lookup as
usual.
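A minimal sketch of these three phases (assumed surroundings, not the exact
expiration-handling code):

  static int check_wq(struct eb_root *wq, HA_RWLOCK_T *lk)
  {
      struct eb32_node *node;
      int exp;

      /* 1) lockless emptiness check */
      if (eb_is_empty(wq))
          return TICK_ETERNITY;

      /* 2) read lock: only peek at the first expiration date */
      HA_RWLOCK_RDLOCK(TASK_WQ_LOCK, lk);
      node = eb32_first(wq);
      exp  = node ? node->key : TICK_ETERNITY;
      HA_RWLOCK_RDUNLOCK(TASK_WQ_LOCK, lk);

      if (!node || !tick_is_expired(exp, now_ms))
          return exp;

      /* 3) something expired: take the write lock and dequeue as usual */
      HA_RWLOCK_WRLOCK(TASK_WQ_LOCK, lk);
      /* ... usual expired-task processing ... */
      HA_RWLOCK_WRUNLOCK(TASK_WQ_LOCK, lk);
      return exp;
  }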
As a result, on a one-minute test involving 8 threads and 64 streams at
1.3 million ctxsw/s, before this patch the lock profiler reported this :
Stats about Lock TASK_WQ:
# write lock : 1125496
# write unlock: 1125496 (0)
# wait time for write : 263.143 msec
# wait time for write/lock: 233.802 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
And after :
Stats about Lock TASK_WQ:
# write lock : 173
# write unlock: 173 (0)
# wait time for write : 0.018 msec
# wait time for write/lock: 103.988 nsec
# read lock : 1072706
# read unlock : 1072706 (0)
# wait time for read : 60.702 msec
# wait time for read/lock : 56.588 nsec
Thus the contention was divided by 4.3.
In lock profiles it's visible that there is a huge contention on the
buffer lock. The reason is that when offer_buffers() is called, it
systematically takes the lock before verifying if there is any
waiter. However doing so doesn't protect against races since a
waiter may appear just after we release the lock anyway. Similarly
in h2 we take the lock every time an h2c is going to be released,
even without checking that the h2c belongs to a wait list. These
two have now been addressed by verifying non-emptiness of the list
prior to taking the lock.
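The idea, in a hedged sketch (simplified from the offer_buffers() path):

  void offer_buffers(void *from, unsigned int threshold)
  {
      /* lockless check: if nobody is waiting for a buffer, don't even take
       * the lock; a waiter appearing right after this check could just as
       * well have appeared right after we released the lock.
       */
      if (LIST_ISEMPTY(&buffer_wq))
          return;

      HA_SPIN_LOCK(BUF_WQ_LOCK, &buffer_wq_lock);
      __offer_buffer(from, threshold);
      HA_SPIN_UNLOCK(BUF_WQ_LOCK, &buffer_wq_lock);
  }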
Haproxy is designed to be able to continue to run even under very low
memory conditions. However this can sometimes have a serious impact on
performance that is hard to diagnose. Let's report counters of failed
pool and buffer allocations per thread in show activity.