When in HTX mode, the functions smp_fetch_body_len() and
smp_fetch_body_size() were subtracting the size of each header block
from the total size htx->data to compute the body size, which could
result in a wrong value.
To avoid this, we now loop on blocks to sum up the size of only those
that are of type HTX_BLK_DATA.
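For illustration, the fixed computation is a loop of this form over the
HTX blocks (a sketch against the 1.9 HTX API, not the exact patch):

    struct htx_blk *blk;
    int32_t pos;
    unsigned long long len = 0;

    for (pos = htx_get_head(htx); pos != -1; pos = htx_get_next(htx, pos)) {
        blk = htx_get_blk(htx, pos);
        if (htx_get_blk_type(blk) == HTX_BLK_DATA)
            len += htx_get_blksz(blk);
    }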
This patch must be backported to 1.9.
When client-side or server-side cookies are parsed, if the end of the cookie
line is removed, the HTX message must be updated. The length of the HTX block is
decreased and the data size of the HTX message is modified accordingly. The
update of the HTX block was ok but the update of the HTX message was wrong,
leading to undefined behaviour during data forwarding. One possible
effect was a frozen connection with no data forwarded.
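The consistent update looks like this (a sketch; <delta> is the number of
bytes removed from the cookie value and htx_set_blk_value_len() is the
assumed helper):

    /* shrink the block's value and the message total by the same amount */
    htx_set_blk_value_len(blk, vlen - delta);
    htx->data -= delta;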
This patch must be backported to 1.9.
The resulting value produced in functions smp_fetch_base() and
smp_fetch_base32() was wrong when in HTX mode.
This patch also adds a semicolon at the end of the for-loop line in
smp_fetch_path(), since the loop actually has no body.
This must be backported to 1.9.
Initialize ->srv peer field for all the peers, the local peer included.
Indeed, a haproxy process needs to connect to the local peer of a remote
process. Furthermore, when a "peer" or "server" line is parsed by parse_server()
the address must be copied to the ->addr field of the peer object only if
this address has also been parsed by parse_server(). This is not the case
if this address belongs to the local peer and is provided on a "server" line.
After having parsed the "peer" or "server" lines of a peers
section, the ->srv part of all the peers must be initialized for SSL, if
enabled. The same goes for the binding part.
Revert commit 1417f0b, which is no longer required.
No backport is needed, this is purely 2.0.
http_find_stline() carefully verifies that it finds a start line,
returning NULL when none is found. But a few calling functions ignore this
NULL return and dereference the pointer without checking. Let's add the
test where needed in the callers. If it turns out that over the long term
a start line is mandatory, then the test will be removed and the faulty
function will have to be simplified.
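The added guard in each caller is of this form (sketch):

    sl = http_find_stline(htx);
    if (!sl)
        return 0; /* no start line: do not dereference */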
This must be backported to 1.9.
intencode() tests for the nullity of the target pointer passed in
argument, but the code calling intencode() never does so and happily
dereferences it. gcc at -O3 detects this as a potential null deref.
Let's remove this incorrect and misleading test. If this pointer was
null, the code would already crash in the calling functions.
This must be backported to stable versions.
Some gcc versions emit potential null deref warnings at -O3 in
date2str_log(), gmt2str_log() and localdate2str_log() after utoa_pad()
because this function may return NULL if its size argument is too small
for the integer value. And it's true that we can't guarantee that the
input number is always valid.
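The fix consists in testing the return value before use; a minimal
sketch, with an illustrative field and size:

    dst = utoa_pad(tm->tm_mday, dst, 3); /* may return NULL if it doesn't fit */
    if (!dst)
        return NULL;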
This must be backported to all stable versions.
gcc 6+ complains about a possible null-deref here due to the test in
objt_server():
    if (objt_server(s->target))
        HA_ATOMIC_ADD(&objt_server(s->target)->counters.retries, 1);
Let's simply change it to __objt_server(). This can be backported to
1.9 and 1.8.
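For the record, the fixed statement thus reads:

    if (objt_server(s->target))
        HA_ATOMIC_ADD(&__objt_server(s->target)->counters.retries, 1);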
Commit 32211a1 ("BUG/MEDIUM: stream: Don't forget to free
s->unique_id in stream_free().") addressed a memory leak but in
exchange may cause double-free due to the fact that after freeing
s->unique_id it doesn't null it and then calls http_end_txn()
which frees it again. Thus the process quickly crashes at runtime.
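The fix is simply to clear the pointer once released, along these lines
(assuming the unique_id pool used by stream_free()):

    pool_free(pool_head_uniqueid, s->unique_id);
    s->unique_id = NULL; /* http_end_txn() must not free it again */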
This fix must be backported to all stable branches where the
aforementioned patch was backported.
The existing threading flag in the 51Degrees API
(FIFTYONEDEGREES_NO_THREADING) has now been mapped to the HAProxy
threading flag (USE_THREAD), and the 51Degrees module code has been made
thread safe.
In Pattern, the cache is now locked with a spin lock from hathreads.h
using a new label 'OTHER_LOCK'. The workset pool is now created with the
same size as the number of threads to avoid any time spent waiting on a
workset.
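The locking pattern is the usual one from hathreads.h (variable names
here are illustrative):

    static HA_SPINLOCK_T cache_lock;

    HA_SPIN_INIT(&cache_lock);                 /* once, at startup */

    HA_SPIN_LOCK(OTHER_LOCK, &cache_lock);
    /* ... look up or update the shared Pattern cache ... */
    HA_SPIN_UNLOCK(OTHER_LOCK, &cache_lock);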
In Hash Trie, the global device offsets structure is only used in single
threaded operation. Multi threaded operation creates a new offsets
structure in each thread.
The logic around multi-header matching in Hash Trie has been improved:
previously, only the name of the most important header was stored in the
global header_names structure. Now all headers are stored, so they are
used correctly in multi-header matching.
The H1 mux properly avoids setting "connection: keep-alive" when the response
is in HTTP/1.1 but forgot to check the request's version. Thus when the
client requests using HTTP/1.0 and connection: keep-alive (like ab does),
the response is in 1.1 with no keep-alive and ab waits for the close without
checking for the content-length. Response headers actually depend on the
recipient, thus on both request and response's version.
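In terms of logic, the decision becomes something like this (all names
here are illustrative; only the condition matters):

    /* keep-alive is implicit only when BOTH sides are HTTP/1.1; if either
     * side is 1.0, it must be advertised explicitly in the response */
    explicit_keepalive = keepalive_wanted && !(req_is_11 && res_is_11);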
This patch must be backported to 1.9.
It has been developed as a service applet. Internally, it is called
"promex". To build HAProxy with the promex service, you should use the Makefile
variable "EXTRA_OBJS". To be used, it must be enabled in the configuration with
an "http-request" rule and the corresponding HTTP proxy must enable the HTX
support. For instance:
frontend test
    mode http
    ...
    option http-use-htx
    http-request use-service prometheus-exporter if { path /metrics }
    ...
See contrib/prometheus-exporter/README for details.
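For example, the build is expected to look like this (check the README
for the exact target and object path):

    make TARGET=linux2628 EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"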
Commit 1055e687a ("MINOR: peers: Make outgoing connection to SSL/TLS
peers work.") introduced an "srv" field in the peers, which points to
the equivalent server to hold SSL settings. This one is not set when
the peer is local so we must always test it before testing p->srv->use_ssl
otherwise haproxy dies during reloads.
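The guard is thus of this form:

    if (p->srv && p->srv->use_ssl) {
        /* only then may the SSL settings be applied */
    }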
No backport is needed, this is purely 2.0.
Now, in the function parse_process_number(), when a process number or a set of
processes is parsed, an error is triggered if an invalid character is found. It
means the following syntaxes are now forbidden and will emit an alert during
HAProxy startup:
    1a
    1/2
    1-2-3
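A minimal sketch of the stricter parsing (the real function also handles
ranges, and the error reporting shown here is illustrative):

    unsigned long low, high;
    char *endptr;

    low = high = strtoul(arg, &endptr, 10);
    if (*endptr == '-')
        high = strtoul(endptr + 1, &endptr, 10);
    if (*endptr != '\0') {
        memprintf(err, "invalid character '%c' in '%s'", *endptr, arg);
        return -1;
    }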
This bug was reported on Github. See issue #36.
This patch may be backported to 1.9 and 1.8.
During SPOP healthchecks, a dummy appctx is used to create the HAPROXY-HELLO
frame and then to parse the AGENT-HELLO frame. No agent is attached to it. So
it is important to not rely on an agent during these stages. When HAPROXY-HELLO
frame is created, there is no problem, all accesses to an agent are
guarded. This is not true during the parsing of the AGENT-HELLO frame. Thus, it
is possible to crash HAProxy with a SPOA declaring the async or the pipelining
capability during a healthcheck.
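The parsing code must therefore guard every access, along these lines
(the flag name is illustrative):

    /* during a healthcheck the dummy appctx has no agent attached */
    if (agent && (agent->flags & SPOE_FL_PIPELINING)) {
        /* only then may the pipelining capability be acknowledged */
    }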
This patch must be backported to 1.9 and 1.8.
For some embedded systems, it's pointless to have arrays sized for 32 or
even 64 processes when it's known that far fewer processes will be
used in the worst case. Let's introduce this MAX_PROCS define which
contains the highest number of processes allowed to run at once. It
still defaults to LONGBITS but may be lowered.
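The define is expected to look like this, preserving the default:

    #ifndef MAX_PROCS
    /* highest number of processes allowed to run at once; may be
     * lowered at build time for small systems */
    #define MAX_PROCS LONGBITS
    #endif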
This also depends on the nbthread count, so it must only be performed after
parsing the whole config file. As a side effect, this removes some code
duplication between servers and server-templates.
This must be backported to 1.9.
The idle conns lists are sized according to the number of threads. As such
they cannot be initialized during the parsing since nbthread can be set
later, as revealed by this simple config which randomly crashes when used.
Let's do this at the end instead.
listen proxy
    bind :4445
    mode http
    timeout client 10s
    timeout server 10s
    timeout connect 10s
    http-reuse always
    server s1 127.0.0.1:8000

global
    nbthread 8
This fix must be backported to 1.9 and 1.8.
The agent used to be configured depending on global.nbthread while nbthread
may be set after the agent is parsed. Let's move this part to the spoe_check()
function to make sure nbthread is always correct and arrays are appropriately
sized.
This fix must be backported to 1.9 and 1.8.
Commit 40a007cf2 ("MEDIUM: threads/server: Make connection list
(priv/idle/safe) thread-safe") made a copy-paste error when initializing
the Lua sockets, as the TCP one was initialized twice. Fortunately it has
no impact because the pointers are set to NULL after a memset(0) and are
not changed in between.
This must be backported to 1.9 and 1.8.
As reported by Christopher, we may call spoe_release_agent() when leaving
after an allocation failure or a config parse error. We must not assume
agent->rt is valid there as the allocation could have failed.
This should be backported to 1.9 and 1.8.
Since TLS ciphers are not well understood, it is very common practice to
copy and paste parameters from documentation and use them as-is. Since RC4
should not be used anymore, it is wiser to link users to up-to-date
documentation from Mozilla to avoid unsafe configurations in the wild.
Clarify the location of man pages for OpenSSL when missing.
When commit 151e1ca98 ("BUG/MAJOR: config: verify that targets of track-sc
and stick rules are present") added a check for some process inconsistencies
between rules and their stick tables, some errors resulted in a "return 0"
statement, which is taken as "no error" in some cases. Let's fix this.
This must be backported to all versions using the above commit.
A piece of code about the HTX was lost this summer, after the "big merge"
(htx/http2/connection layer refactoring). I forgot to keep HTX changes in the
functions connect_server() and assign_server(). So, this patch fixes "uri",
"url_param" and "hdr" LB algorithms when the HTX is enabled. It also fixes
evaluation of the "sni" expression on server lines.
This issue was reported on github. See issue #32.
This patch must be backported to 1.9.
When a filter is installed on a proxy and references spoe, we must be
absolutely certain that the whole chain is valid on a given process
when running in multi-process mode. The problem here is that if a proxy
1 runs on process 1, referencing an SPOE agent itself based on a backend
running on process 2, this last one will be completely deinited on
process 1, and will thus cause random crashes when it gets messages
from this process.
This patch makes sure that the whole chain is valid on all of the caller's
processes.
This fix must be backported to all spoe-enabled maintained versions. It
may potentially disrupt configurations which already randomly crash.
There hardly is any intermediary solution though, such configurations
need to be fixed.
Stick and track-sc rules may optionally designate a table in a different
proxy. In this case, a number of verifications are made such as validating
that this proxy actually exists. However, in multi-process mode, the target
table might indeed exist but not be bound to the set of processes the rules
will execute on. This will definitely result in a random behaviour especially
if these tables do require peer synchronization, because some tasks will be
started to try to synchronize from uninitialized areas.
The typical issue looks like this :
peers my-peers
    peer foo ...

listen proxy
    bind-process 1
    stick on src table ip
    ...

backend ip
    bind-process 2
    stick-table type ip size 1k peers my-peers
While it appears obvious that the example above will not work, there are
less obvious situations, such as having bind-process in a defaults section
and having a larger set of processes for the referencing proxy than the
referenced one.
The present patch adds checks for such situations by verifying that all
processes from the referencing proxy are present on the other one in all
track-sc* and stick-* rules, and in sample fetch / converters referencing
another table so that sc_inc_gpc0() and similar are safe as well.
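The verification boils down to a bitmask subset test of this form (a
sketch; proc_mask() normalizes an unset mask to all processes):

    /* every process the rule runs on must also run the target's proxy */
    if (proc_mask(curproxy->bind_proc) & ~proc_mask(target->bind_proc)) {
        ha_alert("table '%s' is not usable from all the processes of proxy '%s'.\n",
                 target->id, curproxy->id);
        cfgerr++;
    }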
This fix must be backported to all maintained versions. It may potentially
disrupt configurations which already randomly crash. There hardly is any
intermediary solution though, such configurations need to be fixed.
__task_wakeup() takes care of a small race that exists between threads,
but it uses a store barrier that is not sufficient since apparently the
state read after clearing the leaf_p pointer sometimes is incorrect. This
results in missed wakeups between threads competing at a high rate. Let's
use a full barrier instead to serialize the operations.
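In code terms, the change is essentially (sketch):

    /* before: only orders stores; the state read back could be stale */
    __ha_barrier_store();

    /* after: serializes loads and stores around the leaf_p update */
    __ha_barrier_full();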
This may be backported to 1.9 though it's extremely unlikely that this
bug will ever manifest itself there.
When HTX support was added to HTTP compression, a set of counters was missed,
namely comp_in and comp_byp, resulting in no stats being available for compression.
This must be backported to 1.9.
At a number of places we used to have null tests on bind_proc for
listeners and proxies. Let's simplify all these tests by always
having the proper bits reported via proc_mask().
These two functions, proc_mask() and thread_mask(), return either
all_{proc,threads}_mask, or the argument.
This is used to default to all_proc_mask or all_threads_mask when not set
on bind_conf or proxies.
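These helpers are essentially one-liners (sketch consistent with the
description above):

    static inline unsigned long proc_mask(unsigned long mask)
    {
        return mask ? mask : all_proc_mask;
    }

    static inline unsigned long thread_mask(unsigned long mask)
    {
        return mask ? mask : all_threads_mask;
    }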
We'll call popcount() more often so better use a parallel method
than an iterative one. One optimal design is proposed at the site
below. It requires a fast multiplication though, but even without
it will still be faster than the iterative one, and all relevant
64 bit platforms do have a multiply unit.
https://graphics.stanford.edu/~seander/bithacks.html
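For a 64-bit unsigned long, the parallel (SWAR) version from that page
looks like this (a sketch):

    static inline unsigned int my_popcountl(unsigned long a)
    {
        a = a - ((a >> 1) & (~0UL / 3));
        a = (a & (~0UL / 15 * 3)) + ((a >> 2) & (~0UL / 15 * 3));
        a = (a + (a >> 4)) & (~0UL / 255 * 15);
        return (a * (~0UL / 255)) >> (sizeof(unsigned long) - 1) * 8;
    }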
Some unused fields were placed early and some important ones were on
the second cache line. Let's move the proto_list and name closer to
the end of the structure to bring accept() and default_target() into
the first cache line.
When no nbproc is specified, a computation leads to reading bind_thread[-1]
before checking if the thread mask is valid for a bind conf. It may either
report a false warning and compute a wrong mask, or miss some incorrect
configs.
This must be backported to 1.9 and possibly 1.8.
Commit 421f02e ("MINOR: threads: add a MAX_THREADS define instead of
LONGBITS") used a MAX_THREADS macros to fix threads limits. However,
one change was wrong as it affected the upper bound of the process
loop when setting threads masks. No backport is needed.
In stream_free(), free s->unique_id. We may still have one, because it's
allocated in log.c::strm_log() no matter what, even if it's a TCP connection
and thus it won't get free'd by http_end_txn().
Failure to do so leads to a memory leak.
This should probably be backported to all maintained branches.
PiBa-NL reported that some servers don't fall back to the Host header when
:authority is absent. After studying all the combinations of Host and
:authority, it appears that we always have to send the latter, hence we
never need the former. In case of CONNECT method, the authority is retrieved
from the URI part, otherwise it's extracted from the Host field.
The tricky part is that we have to scan all headers for the Host header
before dumping other headers. This is due to the fact that we must emit
pseudo headers before other ones. One improvement could possibly be made
later in the request parser to search and emit the Host header immediately
if authority was not found. This would cost nothing on the vast majority
of requests and make the lookup faster on output since Host would appear
first.
This fix must be backported to 1.9.
Commit 3c4e19f42 ("BUG/MEDIUM: backend: always release the previous
connection into its own target srv_list") introduced a valid warning
about a null-deref risk since we didn't check conn_new()'s return value.
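The added check is of the obvious form (sketch):

    srv_conn = conn_new();
    if (!srv_conn)
        return SF_ERR_RESOURCE;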
This patch must be backported to 1.9 with the patch above.
In mem_should_fail(), if we don't want to fail the allocation, either
because mem_fail_rate is 0, or because we're still initializing, don't
forget to initialize ret, or we may return a non-zero value, and fail
an allocation we didn't want to fail.
This should only affect users that compile with DEBUG_FAIL_ALLOC.
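In other words (a sketch; the exact rate test may differ):

    static int mem_should_fail(const struct pool_head *pool)
    {
        int ret = 0; /* default: do not fail the allocation */

        if (mem_fail_rate > 0 && !(global.mode & MODE_STARTING))
            ret = (random() % 100) < mem_fail_rate;
        return ret;
    }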
I would have sworn it was done; we probably lost it during the refactoring.
If a frontend is in HTX and the backend is not (or conversely), this is
normally detected at config parsing time unless the rule is dynamic. In
this case we must abort with an error 500. The logs will report "RR"
(resource issue while processing request) with the frontend and the
backend assigned, so that it's possible to figure what was attempted.
This must be backported to 1.9.
There was a bug reported in issue #19 regarding the fact that haproxy
could mis-route requests to the wrong server. It turns out that when
switching to another server, the old connection was put back into the
srv_list corresponding to the stream's target instead of this connection's
target. Thus if this connection was later picked, it was pointing to the
wrong server.
The patch fixes this and also clarifies the assignment to srv_conn->target
so that it's clear we don't change it when picking it from the srv_list.
This must be backported to 1.9 only.
When compiling with DEBUG_FAIL_ALLOC, add a new option, tune.fail-alloc,
that gives the percentage of chances an allocation fails.
This is useful to check that allocation failures are always handled
gracefully.
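For instance, to make roughly 10% of allocations fail (illustrative value):

    global
        tune.fail-alloc 10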