The first one, HTX_SL_F_HAS_SCHM, will be used to know whether the request has
an explicit scheme. So, in H2, it is always true because the pseudo-header
":scheme" is mandatory. In H1, it is only true when an absolute URI is found on
the start-line. The other flags, HTX_SL_F_SCHM_HTTP and HTX_SL_F_SCHM_HTTPS,
will be used to know which scheme the request has. For now, other protocols are
not handled.
The aim of these flags is to pass this information to the backend side in
general, and to the H2 mux in particular. So the multiplexer will have a chance
to use this information to send the right scheme to the server.
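As an illustration, the backend side can derive the scheme to send from these
flags. This is only a sketch with a hypothetical helper, not the actual mux
code:

    /* Hypothetical helper: pick the scheme to forward to the server
     * from the start-line flags described above.
     */
    static inline const char *sl_scheme(const struct htx_sl *sl)
    {
        if (!(sl->flags & HTX_SL_F_HAS_SCHM))
            return "http"; /* no explicit scheme (e.g. H1 origin-form) */
        return (sl->flags & HTX_SL_F_SCHM_HTTPS) ? "https" : "http";
    }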
This state is used in legacy HTTP mode when everything was received from an
endpoint but a filter doesn't forward all the data. It is used to avoid
reporting a client or a server abort, depending on the channel flags.
The same must be done on HTX streams, otherwise the message may be
truncated. For instance, it may happen with the trace filter when random
forwarding is enabled on the response channel.
This patch must be backported to 1.9.
In the HTX structure, the field <first> is used to know where to (re)start the
analysis. It may differ from the message's head. It is especially important to
update it to handle 1xx messages, to be sure to restart the analysis on the next
message (another 1xx message or the final one). It is also updated when some
data are forwarded (the headers or part of the body). But this update must
never be done at the analysis level: it is a bug, because some sample fetches
may be used after the data forwarding (but before the first send, of
course). At this stage, if the first block position does not point to the
start-line, most HTTP sample fetches fail.
So now, when something is forwarded by the HTX analyzers, the first block
position is not updated anymore.
This issue was reported on Github. See #119. No backport needed.
When a block's payload is moved during an expansion or when the whole block is
removed, the addresses of the free spaces are updated accordingly. We must be
careful to reset them when <tail_addr> becomes equal to <end_addr>. In this
situation, we can maximize the free space between the blocks and their payload
and reset the other one to 0. It is also important to make sure that
<end_addr> is never greater than <tail_addr>.
Instead of using the macro MAX_HTTP_HDR to limit the number of headers parsed
before throwing an error, we now use the custom global variable
global.tune.max_http_hdr.
This patch must be backported to 1.9.
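For reference, this limit can be raised from the configuration; a minimal
example (the global directive is tune.http.maxhdr, which defaults to 101):

    global
        # allow up to 200 headers per HTTP message instead of the default
        tune.http.maxhdr 200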
When channel_full() is called for an HTX stream, we fall back on the HTX
version. This function is called, among others, from tcp_inspect_request(). With
this patch, the inspect delay is respected again.
This patch must be backported to 1.9.
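A minimal sketch of the fallback (simplified; the real prototypes may differ
slightly):

    /* Dispatch to the HTX variant when the stream uses HTX so that
     * callers like tcp_inspect_request() behave the same in both modes.
     */
    static inline int channel_full(const struct channel *c, size_t reserve)
    {
        if (IS_HTX_STRM(chn_strm(c)))
            return channel_htx_full(c, htxbuf(&c->buf), reserve);
        return (ci_data(c) + reserve >= c_size(c)); /* simplified legacy check */
    }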
The previous fix about the random forwarding on the message body was not enough to
fix the bug in all cases. Among others, when there is no data but only the EOM,
we must forward everything.
This patch must be backported to 1.9 if the patch 0bdeeaacb ("BUG/MINOR:
flt_trace/htx: Only apply the random forwarding on the message body.") is also
backported.
With both I/O and tasks in the same tasklet list, we now have a very
smooth and responsive scheduler, providing a good fairness between I/O
activities. With the lower layers relying on tasklet a lot (I/O wakeup,
subscribe, etc), there may often be a large number of totally autonomous
tasklets doing their business such as forwarding data between two muxes.
But the task scheduler historically refrained from picking tasks from the
priority-ordered run queue to put them into the tasklet list until the
latter had fewer than max_runqueue_depth entries. This was to make sure that
low-latency, high-priority tasks would have an opportunity to be dequeued
before others even if they arrived late. But the counter used for this is
still the tasklet list size, which contains countless I/O events. This
causes an unfairness between unbounded I/Os and bounded tasks, resulting
for example in the CLI responding more slowly when forwarding 40 Gbps of HTTP
traffic spread over a thousand connections.
A good solution consists in sticking to the initial intent of
max_runqueue_depth which is to limit the number of tasks in the list
(to maintain fairness between them) and not to limit the number of these
tasks among tasklets. It just turns out that the task_list_size initially
was this task counter and changed over time to be a tasklet list size.
Let's simply refrain from updating it for pure tasklets so that it takes
back its original role of counting real tasks as its name implies. With
this change the CLI becomes instantly responsive under load again.
This patch may possibly be backported to 1.9 though it requires some
careful checks.
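A sketch of the resulting accounting, using simplified names (the exact
fields differ in the source):

    /* Only real tasks count towards task_list_size so that
     * max_runqueue_depth keeps bounding tasks rather than tasklets.
     */
    static void enqueue_in_tasklet_list(struct task *t)
    {
        if (!TASK_IS_TASKLET(t))
            task_per_thread[tid].task_list_size++;
        /* ... append <t> to the tasklet list as before ... */
    }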
In h1_init(), also add the H1C_F_CS_WAIT_CONN flag if the handshake didn't
complete, otherwise we may end up letting the upper layer send data too
soon.
When built with the dummy 51Degrees library for testing, the output will
include "(dummy library)" to make it clear that this is not the real API.
This way the directory structure remains the same as with the real lib and
one can apply the same build options regardless of where the lib is stored,
removing any possible confusion.
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
complains:
> src/debug.c: In function "ha_panic":
> src/debug.c:162:2: warning: ignoring return value of "write", declared with attribute warn_unused_result [-Wunused-result]
> (void) write(2, trash.area, trash.data);
> ^
Change the formatting of the uptime field in "show proc" so it's easier
to parse. Remove the space between the day and the hour and align the
field on 15 characters.
These are intended for use by HAProxy developers to ensure any changes
did not affect the 51Degrees implementation. The 51Degrees module can be
enabled and used by using the source in contrib/51d. This will run
without breaking, but will not return any meaningful information.
This is ideal for testing HAProxy core code, and other modules alongside
51Degrees, but should never be used as an actual module as it does
nothing.
The _51d_fetch method, and the two methods it calls to fetch HTTP
headers (_51d_set_device_offsets, and _51d_set_headers), now support
both legacy and HTX operation.
This should be backported to 1.9.
This action is particularly convenient to replace some deprecated uses
of "reqrep". It takes a match and a format string including back-
references. The reqrep warning was updated to suggest it as well.
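For illustration, assuming the action in question is "http-request
replace-uri", a deprecated "reqrep" rule could be converted like this:

    # deprecated:
    #    reqrep  ^([^\ :]*)\ /foo/(.*)  \1\ /bar/\2
    # with the new action (the regex applies to the URI only):
    http-request replace-uri  ^/foo/(.*)  /bar/\1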
Released version 2.0-dev7 with the following main changes :
- BUG/MEDIUM: mux-h2: make sure the connection timeout is always set
- MINOR: tools: add new bitmap manipulation functions
- MINOR: logs: use the new bitmap functions instead of fd_sets for encoding maps
- MINOR: chunks: Make sure trash_size is only set once.
- Revert "MINOR: chunks: Make sure trash_size is only set once."
- MINOR: threads: serialize threads initialization
- MINOR: peers: data structure simplifications for server names dictionary cache.
- DOC: peers: Update for dictionary cache entries for peers protocol.
- MINOR: dict: Store the length of the dictionary entries.
- MINOR: peers: A bit of optimization when encoding cached server names.
- MINOR: peers: Optimization for dictionary cache lookup.
- MEDIUM: tools: improve time format error detection
- BUG/MEDIUM: H1: When upgrading, make sure we don't free the buffer too early.
- BUG/MEDIUM: stream_interface: Make sure we call si_cs_process() if CS_FL_EOI.
- MINOR: threads: avoid clearing harmless twice in thread_release()
- MEDIUM: threads: add thread_sync_release() to synchronize steps
- BUG/MEDIUM: init/threads: prevent initialized threads from starting before others
- OPTIM/MINOR: init/threads: only call protocol_enable_all() on first thread
- BUG/MINOR: dict: race condition fix when inserting dictionary entries.
- MEDIUM: init/threads: don't use spinlocks during the init phase
- BUG/MINOR: cache/htx: Fix the counting of data already sent by the cache applet
- BUG/MEDIUM: compression/htx: Fix the adding of the last data block
- MINOR: flt_trace: Don't scrash the original offset during the random forwarding
- MAJOR: htx: Rework how free rooms are tracked in an HTX message
- MINOR: htx: Add the function htx_move_blk_before()
- Revert "BUG/MEDIUM: H1: When upgrading, make sure we don't free the buffer too early."
- BUG/MINOR: http-rules: mention "deny_status" for "deny" in the error message
- MINOR: http: turn default error files to HTTP/1.1
- BUG/MEDIUM: h1: Don't try to subscribe if we had a connection error.
- BUG/MEDIUM: h1: Don't consider we're connected if the handshake isn't done.
- MINOR: contrib/spoa_server: Upgrade SPOP to 2.0
- BUG/MEDIUM: contrib/spoa_server: Set FIN flag on agent frames
- MINOR: contrib/spoa_server: Add random IP score
- DOC/MINOR: contrib/spoa_server: Fix typo in README
The example configuration uses sess.ip_score; however, this variable
is not referenced within the example scripts. This patch adds support
for sess.ip_score to the python and lua scripts, generating a
random number between 1 and 100.
In h1_process(), don't consider we're connected if we still have handshakes
pending. It used not to happen, because we would not be called if there
were any ongoing handshakes, but that changed now that the handshakes are
handled by an xprt, and not by conn_fd_handler() directly.
For quite a long time we've been saying that the default error files
should produce HTTP/1.1 responses and since it's of low importance, it
always gets forgotten.
So here it finally comes. Each status code now properly contains a
content-length header so that the output is clean and doesn't force
upstream proxies to switch to chunked encoding or to close the connection
immediately after the response, which is particularly annoying for 401
or 407 for example. It's worth noting that the 3xx codes had already
been turned to HTTP/1.1.
This patch will obviously not change anything for user-provided error files.
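For illustration, the default pages now have the following general shape
(simplified body, not the exact built-in page):

    HTTP/1.1 403 Forbidden
    Content-length: 22
    Cache-Control: no-cache
    Content-Type: text/html

    <html>Forbidden</html>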
The error message indicating an unknown keyword on an http-request rule
doesn't mention the "deny_status" option which comes with the "deny" rule;
this is particularly confusing.
This can be backported to all versions supporting this option.
The function htx_add_data_before() was removed because it was buggy. The
function htx_move_blk_before() may be used if necessary to do something
equivalent, except that it only moves blocks; it doesn't handle the addition.
An HTX message may have 2 free spaces available to store a new block. The first
one is between the blocks and their payload. Blocks are added starting from the
end of the buffer and their payloads are added starting from the beginning. So
the first free space is between these 2 edges. The second one is at the
beginning of the buffer, when we start to wrap to add new payloads. Once we
start to use this one, the other one is ignored until the next defragmentation
of the HTX message.
In theory, there is no problem. But in practice, some shortcomings in the HTX
structure force us to defragment HTX messages too often in order to remain in a
known state. The second free space is not tracked as it should be, and the
first one may easily be corrupted when rewrites happen.
So to fix the problem and avoid unnecessary defragmentation, the HTX structure
has been refactored. The front (the block position of the first payload before
the blocks) is no longer stored. Instead we keep the relative addresses of 3 edges:
* tail_addr : the start address of the free space in front of the blocks
  table
* head_addr : the start address of the free space at the beginning
* end_addr : the end address of the free space at the beginning
Here is the general view of the HTX message now:
         head_addr    end_addr     tail_addr
             |            |            |
             V            V            V
+------------+------------+------------+------------+------------------+
| | | | | |
| PAYLOAD | Free space | PAYLOAD | Free space | Blocks area |
| ==> | 1 | ==> | 2 | <== |
+------------+------------+------------+------------+------------------+
<head_addr> is always lower than or equal to <end_addr> and <tail_addr>.
<end_addr> is always lower than or equal to <tail_addr>.
In addition, to simplify everything, the blocks area is now contiguous. It
doesn't wrap anymore. So the head is always the block with the lowest position,
and the tail is always the one with the highest position.
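A rough sketch of the resulting structure (field subset only; see htx.h for
the real definition):

    /* The three edges described above are kept as relative addresses
     * inside the storage area.
     * Invariant: head_addr <= end_addr <= tail_addr.
     */
    struct htx {
        uint32_t size;       /* size of the storage area */
        uint32_t tail_addr;  /* start of the free space in front of the blocks table */
        uint32_t head_addr;  /* start of the free space at the beginning */
        uint32_t end_addr;   /* end of the free space at the beginning */
        /* ... other fields; the blocks table grows downward from the end ... */
    };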
There is no bug here, but this patch improves the debug message reported during
the random forwarding. The original offset is kept untouched so its value may be
used to format the message. Before, 0 was always reported.
The function htx_add_data_before() is buggy and cannot work. It first adds a
data block and then moves it before another one, passed as argument. The
problem happens when a defragmentation is done to add the new block. In this
case, the reference is no longer valid, because the blocks are rearranged. So,
instead of moving the new block before the reference, it is moved to the head
of the HTX message.
So this function has been removed. It was only used by the compression filter to
add a last data block before a TLR, EOT or EOM block. Now, the new function
htx_add_last_data() is used. It adds a last data block, after all others and
before any TLR, EOT or EOM block. Then, the next block is retrieved; it is the
first non-data block after the data in the HTX message. The compression loop
continues with it.
This patch must be backported to 1.9.
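A sketch of the resulting pattern in the compression filter (simplified,
assuming the usual HTX helpers):

    /* Append the last (compressed) data block after all other data and
     * before any TLR, EOT or EOM block, then resume on the next block.
     */
    blk = htx_add_last_data(htx, ist2(out->area, out->data));
    if (!blk)
        goto error;
    blk = htx_get_next_blk(htx, blk); /* first non-data block, if any */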
Since the commit 8f3c256f7 ("MEDIUM: cache/htx: Always store info about HTX
blocks in the cache"), it is possible to read info about a data block without
sending anything. It is possible because we rely on the function htx_add_data(),
which will try to add data without any defragmentation. In such a case, the
info about the data block is skipped but doesn't count as sent data.
No need to backport this patch, except if the commit 8f3c256f7 is backported
too.
PiBa-NL found some pathological cases where starting threads can hinder
each other and cause a measurable slowdown. This problem is reproducible
with the following config (haproxy must be built with -DDEBUG_DEV):
global
    stats socket /tmp/sock1 mode 666 level admin
    nbthread 64

backend stopme
    timeout server 1s
    option tcp-check
    tcp-check send "debug dev exit\n"
    server cli unix@/tmp/sock1 check
This will cause the process to be stopped once the checks are ready to
start. Binding all these to just a few cores magnifies the problem.
Starting them in a loop shows a significant time difference between the
commits:
# before startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e186161 -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m1.581s
user 0m0.621s
sys 0m5.339s
# after startup serialization
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy-e4d7c9dd -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m2.366s
user 0m0.894s
sys 0m8.238s
In order to address this, let's use plain mutexes and cond_wait during
the init phase. With this done, waiting threads now sleep and the problem
completely disappeared :
$ time for i in {1..20}; do taskset -c 0,1,2,3 ./haproxy -db -f slow-init.cfg >/dev/null 2>&1; done
real 0m0.161s
user 0m0.079s
sys 0m0.149s
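A minimal sketch of the idea with POSIX primitives (hypothetical names; the
actual code uses HAProxy's own wrappers):

    #include <pthread.h>

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  init_cond = PTHREAD_COND_INITIALIZER;
    static int init_turn; /* id of the thread allowed to initialize next */

    /* Each thread initializes in turn; waiters sleep on the condvar
     * instead of spinning and disturbing the working thread.
     */
    static void serialize_init(int tid)
    {
        pthread_mutex_lock(&init_lock);
        while (init_turn != tid)
            pthread_cond_wait(&init_cond, &init_lock);
        /* ... per-thread initialization happens here ... */
        init_turn++;
        pthread_cond_broadcast(&init_cond);
        pthread_mutex_unlock(&init_lock);
    }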
When checking the result of an ebis_insert() call in an ebtree with unique
keys, if the key is already present, the correct way is to free the new node
and return the old one, not to free the old one and return the new one. For
this, the __dict_insert() function was folded into dict_insert() as this
significantly simplifies the test of duplicates.
Thanks to Olivier for having reported this bug, which came with this one:
"MINOR: dict: Add dictionary new data structure".
There's no point in calling this on each and every thread since the first
thread passing there will enable the listeners, and the next ones will
simply scan all of them in turn to discover that they are already
initialized. Let's only initialize them on the first thread. This could
slightly speed up startup on very large configurations, even though most
of the time is still spent in the main thread binding the sockets.
A few measurements have constantly shown that this decreases the startup
time by ~0.1s for 150k listeners. Starting all of them in parallel doesn't
provide better results and can still expose some undesired races.
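In practice this boils down to something like (simplified):

    /* only the first thread enables the listeners; others skip it */
    if (tid == 0)
        protocol_enable_all();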
Since commit 6ec902a ("MINOR: threads: serialize threads initialization")
we now serialize threads initialization. But doing so has emphasized another
race which is that some threads may actually start the loop before others
are done initializing.
As soon as all threads enter the first thread_release() call, their rdv
bit is cleared and they're all waiting for all others' rdv to be cleared
as well, with their harmless bit set. The first one to notice the cleared
mask will progress through thread_isolate(), take rdv again preventing
most others from noticing its short pass to zero, and this first one will
be able to run all the way through the initialization till the last call
to thread_release() which it happily crosses, being the only one with the
rdv bit, leaving the room for one or a few others to do the same. This
results in some threads entering the loop before others are done with
their initialization, which is particularly bad. PiBa-NL reported that
some regtests fail for him due to this (which was impossible to reproduce
here, but races are racy by definition). However, placing some printf()
in the initialization code definitely shows this unsynchronized startup.
This patch takes a different approach in three steps :
- first, we don't start with thread_release() anymore and we don't
set the rdv mask anymore in the main call. This was initially done
to let all threads start together, which we don't want. Instead
we just start with thread_isolate(). Since all threads are harmful
by default, they all wait for each other's readiness before starting.
- second, we don't release with thread_release() but with
thread_sync_release(), meaning that we don't leave the function until
other ones have reached the point in the function where they decide
to leave it as well.
- third, it makes sure we don't start the listeners using
protocol_enable_all() before all threads have allocated their local
FD tables or have initialized their pollers, otherwise startup could
be racy as well. It's worth noting that it is even possible to limit
this call to thread #0 as it only needs to be performed once.
This now guarantees that all thread init calls start only after all threads
are ready, and that no thread enters the polling loop before all others have
completed their initialization.
Please check GH issues #111 and #117 for more context.
No backport is needed, though if some new init races are reported in
1.9 (or even 1.8) which do not affect 2.0, then it may make sense to
carefully backport this small series.
This function provides an alternate way to leave a critical section run
under thread_isolate(). Currently, a thread may remain in thread_release()
without having the time to notice that the rdv mask was released and taken
again by another thread entering thread_isolate() (often the same that just
released it). This is because threads wait in harmless mode in the loop,
which is compatible with the conditions to enter thread_isolate(). It's
not possible to make them wait with the harmless bit off or we cannot know
when the job is finished for the next thread to start in thread_isolate(),
and if we don't clear the rdv bit when going there, we create another
race on the start point of thread_isolate().
This new synchronous variant of thread_release() makes use of an extra
mask to indicate the threads that want to be synchronously released. In
this case, they will be marked harmless before releasing their sync bit,
and will wait for others to release their bit as well, guaranteeing that
thread_isolate() cannot be started by any of them before they all left
thread_sync_release(). This allows one to construct synchronized blocks like
this:
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
/* do something together here */
thread_isolate()
/* optionally do something alone here */
thread_sync_release()
And so on. This is particularly useful during initialization where several
steps have to be respected and no thread must start a step before the
previous one is completed by other threads.
This one must not be placed after any call to thread_release() or it would
risk blocking an earlier call to thread_isolate() which the current thread
managed to leave without waiting for others to complete, ending up here
with the thread's harmless bit cleared and blocking others. This might be
improved in the future.
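To illustrate only the guarantee (everyone leaves the release together), here
is a generic two-phase barrier in C11 atomics; the real implementation uses a
thread bit mask and the harmless bits instead:

    #include <stdatomic.h>

    static _Atomic int arrived;
    static _Atomic int phase;

    /* Nobody returns until all <nbthreads> have arrived, so no thread
     * can start the next isolated section while another one is still
     * inside the previous release.
     */
    void sync_point(int nbthreads)
    {
        int my_phase = atomic_load(&phase);

        if (atomic_fetch_add(&arrived, 1) + 1 == nbthreads) {
            atomic_store(&arrived, 0);   /* last one resets the counter */
            atomic_fetch_add(&phase, 1); /* and lets everyone through */
        } else {
            while (atomic_load(&phase) == my_phase)
                ; /* spin until the phase flips */
        }
    }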
thread_release() is to be called after thread_isolate(), i.e. when the
thread already has its harmless bit cleared. There is no need to clear it
twice, so we avoid calling thread_harmless_end() and directly check the rdv
bits, then loop on them.
In si_cs_recv(), if we got the CS_FL_EOI flag on the conn_stream, make sure
we return 1, so that si_cs_process() will be called, and wake
process_stream() up, otherwise if we're unlucky the flag will never be
noticed, and the stream won't be woken up.
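The fix boils down to something like this at the end of si_cs_recv()
(simplified):

    /* report activity when EOI was seen so that si_cs_process() runs
     * and process_stream() is woken up */
    if (cs->flags & CS_FL_EOI)
        ret = 1;
    return ret;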
In h1_release(), when we want to upgrade the mux to h2, make sure we set
h1c->ibuf to BUF_NULL before calling conn_upgrade_mux_fe().
If the upgrade is successful, the buffer will be provided to the new mux and
h1_release() will be called recursively; it would then try to free h1c->ibuf,
and freeing the buffer we just provided to the new mux would be unfortunate.
As reported in GH issue #109 and in discourse issue
https://discourse.haproxy.org/t/haproxy-returns-408-or-504-error-when-timeout-client-value-is-every-25d
the time parser doesn't report errors on overflows or underflows. This is a
recurring problem which additionally has the bad taste of taking a long
time before hitting the user.
This patch makes parse_time_err() return special error codes for overflows
and underflows, and adds the control in the call places to report suitable
errors depending on the requested unit. In practice, underflows are almost
never returned as the parsing function takes care of rounding values up,
so this might possibly happen on 64-bit overflows returning exactly zero
after rounding though. It is not really possible to cut the patch into
pieces as it changes the function's API, hence all callers.
Tests were run on about every relevant part (cookie maxlife/maxidle,
server inter, stats timeout, timeout*, cli's set timeout command,
tcp-request/response inspect-delay).
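A sketch of a typical call place after this change, assuming the special
return values are named PARSE_TIME_OVER and PARSE_TIME_UNDER:

    const char *res = parse_time_err(arg, &timeout, TIME_UNIT_MS);

    if (res == PARSE_TIME_OVER)
        memprintf(err, "timer overflow in '%s'", arg);
    else if (res == PARSE_TIME_UNDER)
        memprintf(err, "timer underflow in '%s'", arg);
    else if (res)
        /* <res> points to the first invalid character */
        memprintf(err, "unexpected character '%c' in '%s'", *res, arg);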