When an outgoing HTX message is formatted to a raw message, DATA blocks may be
split so as not to transfer more data than expected. But if the buffer is almost
full, the formatting is interrupted, leaving some unused free space in the
buffer, because the data are too large to be copied in one go.
Now, we transfer as much data as possible. When the message is chunked, we also
count the size used to encode the data.
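For the chunked case, the room that must be accounted for is the standard
HTTP/1.1 chunk envelope around the payload: the size in hexadecimal, a CRLF,
the data, and a trailing CRLF. A rough sketch of such an estimate (hypothetical
helper, not an actual haproxy function):

  #include <stddef.h>

  /* room needed to emit <len> bytes as a single HTTP/1.1 chunk:
   * hex-encoded size + CRLF + payload + CRLF
   */
  static size_t h1_chunk_room(size_t len)
  {
          size_t hexdigits = 0;
          size_t l = len;

          do {
                  hexdigits++;
                  l >>= 4;
          } while (l);

          return hexdigits + 2 + len + 2;
  }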
When an outgoing HTX message is formatted to a raw message, if we fail to copy
the data of an HTX block into the output buffer, we mark it as full. Before,
this was only done by calling the function buf_room_for_htx_data(). But this
function is designed to optimize input processing.
This patch must be backported to 2.0 and 1.9.
When raw data are copied or appended to a chunk, the result must not exceed the
chunk size, but it may reach it. Unlike the functions that copy or append a
string, no terminating null byte is added.
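As an illustration, the corrected bound check looks like the sketch below. The
field names follow HAProxy's struct buffer (area/data/size), but this is not
the exact upstream code:

  #include <string.h>

  struct buffer {          /* minimal stand-in for the real struct buffer */
          size_t size;     /* total allocated size */
          char  *area;     /* start of the storage */
          size_t data;     /* amount of data currently stored */
  };

  /* A copy may fill the chunk completely, so the error condition is
   * strictly exceeding the size. No terminating null byte is added,
   * unlike the string variants.
   */
  static int chunk_memcat_sketch(struct buffer *chk, const char *src, size_t len)
  {
          if (chk->data + len > chk->size)
                  return 0;
          memcpy(chk->area + chk->data, src, len);
          chk->data += len;
          return 1;
  }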
This patch must be backported as far as 1.8. Note that in 1.8 the functions
chunk_cpy() and chunk_cat() don't exist.
In the htx_*_to_h1() functions, several calls to chunk_memcat() are chained
most of the time. The expected size is always compared to the available room in
the buffer to make sure the full copy will succeed. But this is a bit risky
because it relies on the fact that chunk_memcat() evaluates the available room
in the buffer the same way the HTX functions do. And, unfortunately, it does
not: a bug in chunk_memcat() always leaves one byte unused in the buffer. So,
for instance, when a chunk is copied into an almost full buffer, the last CRLF
may be skipped.
To fix the issue, we now rely only on the result of chunk_memcat().
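In practice the pattern becomes something like the sketch below (purely
illustrative fragment, assuming chunk_memcat() returns non-zero on success):

  size_t prev_data = tmp->data;

  if (!chunk_memcat(tmp, "\r\n", 2) ||
      !chunk_memcat(tmp, start, len) ||
      !chunk_memcat(tmp, "\r\n", 2)) {
          tmp->data = prev_data;  /* drop the partial copy and report "full" */
          goto full;
  }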
This patch must be backported to 2.0 and 1.9.
The SSL engines code was written below the OCSP #ifdef, which means the
engines code can't be built if OCSP is deactivated in the SSL library.
Could be backported to every version since 1.8.
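Illustrative layout of the fix, using the standard OpenSSL guards (the
surrounding code is only sketched):

  #ifndef OPENSSL_NO_OCSP
  /* ... OCSP stapling support ... */
  #endif

  #ifndef OPENSSL_NO_ENGINE
  /* SSL engine support now lives under its own guard, so it still
   * builds when the SSL library is compiled without OCSP.
   */
  #endif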
A NULL dereference can occur when inserting SNIs, in the duplicate check, when
several sni_ctx already exist with the same key.
Fixes issue #321.
Don't try to load the files containing the issuer and the OCSP response
each time we generate an SSL_CTX.
The .ocsp and the .issuer are now loaded into the struct
cert_key_and_chain only once and then read from this structure when
creating an SSL_CTX.
Don't try to load the file containing the sctl each time we generate an
SSL_CTX.
The .sctl is now loaded into the struct cert_key_and_chain only once and
then read from this structure when creating an SSL_CTX.
Note that this now makes it possible to use sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with an HA_SPINLOCK_T, which
prevents the ckch architecture (ckch_store, ckch_inst..) from being modified
at the same time. So you can't perform a certificate update from multiple
CLI connections at the same time.
SNI trees are also locked with an HA_RWLOCK_T so that read operations are
blocked only during a certificate update.
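Conceptually the locking follows the pattern below (a sketch only: the lock
labels and field names are assumptions, not the real ones):

  HA_SPIN_LOCK(CKCH_LOCK, &ckch_lock);      /* one CLI update at a time */
  /* build the new ckch_inst and its sni_ctx here, no tree is touched yet */
  HA_RWLOCK_WRLOCK(SNI_LOCK, &bind_conf->sni_lock);
  /* swap the new sni_ctx into the SNI trees */
  HA_RWLOCK_WRUNLOCK(SNI_LOCK, &bind_conf->sni_lock);
  HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);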
Bundles are supported but you need to update each file (.rsa, .ecdsa, .dsa)
independently. If a file is used in the configuration both as a bundle AND
as a unique certificate, both will be updated.
Bundles, directories and crt-lists are supported; however, filters in
crt-lists are currently unsupported.
The code tries to allocate every SNI and certificate instance first, so it
can roll back the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20000 in my
tests on my laptop), the function can take too long and be killed by the
watchdog. This will be fixed later. Also, with too many certificates it's
possible that socat exits before the end of the generation without
displaying a message; consider increasing the socat timeout in this case
(-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, that must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- allocate a ckch_inst and load the SNIs into it
- return a ckch_inst or NULL
- the sni_ctx are no longer added to the SNI trees from there
- renamed to ckch_inst_new_load_{multi}_store()
- a new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and adds the sni_ctx to the SNI trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees or leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees are done once everything has
been allocated correctly.
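For illustration, the resulting call flow looks roughly like this (only the
function name ckch_inst_new_load_store() comes from this commit; the
prototypes are simplified guesses and commit_sni_ctxs() is a hypothetical
name for the final insertion step):

  ckch_inst = ckch_inst_new_load_store(path, ckchs, bind_conf, ssl_conf,
                                       sni_filter, fcount, &err);
  if (!ckch_inst)
          return ERR_ALERT | ERR_FATAL;  /* nothing was published, nothing to undo */

  /* only after everything succeeded are the sni_ctx committed to the
   * bind_conf SNI trees
   */
  commit_sni_ctxs(bind_conf, ckch_inst);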
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees or leaking memory.
It is a prerequisite to load certificate on-the-fly with the CLI.
The insertion of the sni_ctxs in the trees are done once everything has
been allocated correctly.
In order to allow the creation of sni_ctx at runtime, we need to split
the function to allow rollback.
We need to be able to allocate all the required sni_ctxs before inserting
them, in case we need to roll back if an allocation fails.
The function was split into two parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx, fills
it with the right data and inserts it into the ckch_inst's list of sni_ctx.
The second one takes every sni_ctx in the ckch_inst and inserts them into
the bind_conf's SNI tree.
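The resulting shape is the classic allocate-then-commit split; a minimal
sketch fragment (the function name comes from this commit, the arguments and
field names are assumptions):

  /* phase 1: allocation only, private to the ckch_inst, easy to roll back */
  for (i = 0; i < fcount; i++) {
          if (ckch_inst_add_cert_sni(ckch_inst, sni_filter[i]) < 0)
                  goto error;  /* free the ckch_inst, nothing was published */
  }

  /* phase 2: publication into the shared SNI trees; every sni_ctx already
   * exists, so this step cannot fail
   */
  list_for_each_entry(sc, &ckch_inst->sni_ctx, by_ckch_inst)
          ebst_insert(&bind_conf->sni_ctx, &sc->name);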
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates a ckch_inst for each bind_conf and inserts the
sni_ctx into its linked list.
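A simplified view of the structure (only the relationships described above
are certain; the field names are assumptions):

  /* one certificate (ckch_node) instantiated in one bind_conf */
  struct ckch_inst {
          struct ckch_node *ckch_node;   /* the stored certificate it was built from */
          struct bind_conf *bind_conf;   /* the bind line using it */
          struct list       sni_ctx;     /* every sni_ctx generated for this pair */
  };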
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during configuration parsing if the
allocation fails, since it tries to copy some data into the allocated
memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported to every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
trying to allocate the other SNIs.
It is unlikely that an SNI allocation will succeed after one failure, and
starting a configuration without all the SNIs is not desirable. To avoid
any problem we now return -1 upon an SNI allocation error and stop the
configuration parsing.
This patch must be backported to every version supporting the crt-list
SNI filters (as far as 1.5).
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n, ckchn to ckchs.
As using an mt_list for the tasklet list is costly, instead use a regular list,
but add an mt_list for tasklets woken up by other threads, to be run on the
current thread. At the beginning of process_runnable_tasks(), we just take
the new list and merge it into the task_list.
This should give us performance comparable to before we started using an
mt_list, but allows us to use tasklet_wakeup() from other threads.
This macro atomically cuts the head of a list and returns the list
of elements as a detached list, meaning that they're all linked
together without any head. If the list was empty, NULL is returned.
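The principle can be illustrated with a plain atomic pointer swap on a
singly-linked list; the real mt_list is a doubly-linked lock-free structure,
so this is only a conceptual sketch:

  #include <stdatomic.h>
  #include <stddef.h>

  struct node {
          struct node *next;
  };

  /* Atomically detach every queued element: the shared head becomes empty
   * and the caller gets the whole chain, without any head. Returns NULL
   * if the list was empty.
   */
  static struct node *list_behead(_Atomic(struct node *) *head)
  {
          return atomic_exchange(head, NULL);
  }

This is how the scheduler can grab, in one operation, the tasklets woken up
by other threads and append them to its local task_list.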
I introduced this mistake when adding the description for the stats
metrics; it's even amazing it built and worked at all! This was
reported by Travis CI on non-GNU platforms :
src/stats.c:92:39: warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[INF_NAME] { .name = "Name", .desc = "Product name" },
^
=
No backport is needed.
Issue #277 reports a strange problem related to a fast-spinning
applet which seems to show valid progress being made. It's uncertain how
this can happen; maybe some very specific timing patterns manage to place
just a few bytes in each buffer and result in the peers applet being called
a lot. But it appears possible to artificially cross the spinning threshold
by asking for a monster stats page (500 MB) and limiting the send() size to
1 MSS (1460 bytes), causing the stats applet to be called for very small
blocks which most often do not leave enough room to place a new chunk.
The idea developed in this patch consists in not crashing for an applet
which reaches a very high call rate if it shows some indication of
progress. Detecting progress on applets is not trivial, but in our case
we know that they must at least not claim to be waiting for a buffer
allocation if this buffer is present, not wait for room if the buffer is
empty, not ask for more data without polling if such data are still
present, and not leave with an empty input buffer without having written
anything or read anything from the other side while a shutw is pending.
Doing so doesn't affect normal behaviors or abuses of our existing
applets and does at least protect against an applet performing an
early return without processing events, or one causing an endless
loop by asking for impossible conditions.
This must be backported to 2.0.
Now "show info desc", "show info typed desc" and "show stat typed desc"
will report (hopefully) accurate descriptions of each field. These ones
were verified in the code. When some metrics are specific to the process
or the thread, they are indicated. Sometimes a config option is known
for a setting and it is reported as well. The purpose mainly is to help
sysadmins in field more easily sort out issues vs non-issues. In part
inspired by this very informative talk :
https://kernel-recipes.org/en/2019/metrics-are-money/
Example:
$ socat - /var/run/haproxy.sock <<< "show info desc"
Name: HAProxy:"Product name"
Version: 2.1-dev2-991035-31:"Product version"
Release_date: 2019/10/09:"Date of latest source code update"
Nbthread: 1:"Number of started threads (global.nbthread)"
Nbproc: 1:"Number of started worker processes (global.nbproc)"
Process_num: 1:"Relative process number (1..Nbproc)"
Pid: 11975:"This worker process identifier for the system"
Uptime: 0d 0h00m10s:"How long ago this worker process was started (days+hours+minutes+seconds)"
Uptime_sec: 10:"How long ago this worker process was started (seconds)"
Memmax_MB: 0:"Worker process's hard limit on memory usage in MB (-m on command line)"
PoolAlloc_MB: 0:"Amount of memory allocated in pools (in MB)"
PoolUsed_MB: 0:"Amount of pool memory currently used (in MB)"
PoolFailed: 0:"Number of failed pool allocations since this worker was started"
Ulimit-n: 300000:"Hard limit on the number of per-process file descriptors"
Maxsock: 300000:"Hard limit on the number of per-process sockets"
Maxconn: 149982:"Hard limit on the number of per-process connections (configured or imposed by Ulimit-n)"
Hard_maxconn: 149982:"Hard limit on the number of per-process connections (imposed by Memmax_MB or Ulimit-n)"
CurrConns: 0:"Current number of connections on this worker process"
CumConns: 1:"Total number of connections on this worker process since started"
CumReq: 1:"Total number of requests on this worker process since started"
MaxSslConns: 0:"Hard limit on the number of per-process SSL endpoints (front+back), 0=unlimited"
CurrSslConns: 0:"Current number of SSL endpoints on this worker process (front+back)"
CumSslConns: 0:"Total number of SSL endpoints on this worker process since started (front+back)"
Maxpipes: 0:"Hard limit on the number of pipes for splicing, 0=unlimited"
PipesUsed: 0:"Current number of pipes in use in this worker process"
PipesFree: 0:"Current number of allocated and available pipes in this worker process"
ConnRate: 0:"Number of front connections created on this worker process over the last second"
ConnRateLimit: 0:"Hard limit for ConnRate (global.maxconnrate)"
MaxConnRate: 0:"Highest ConnRate reached on this worker process since started (in connections per second)"
SessRate: 0:"Number of sessions created on this worker process over the last second"
SessRateLimit: 0:"Hard limit for SessRate (global.maxsessrate)"
MaxSessRate: 0:"Highest SessRate reached on this worker process since started (in sessions per second)"
SslRate: 0:"Number of SSL connections created on this worker process over the last second"
SslRateLimit: 0:"Hard limit for SslRate (global.maxsslrate)"
MaxSslRate: 0:"Highest SslRate reached on this worker process since started (in connections per second)"
SslFrontendKeyRate: 0:"Number of SSL keys created on frontends in this worker process over the last second"
SslFrontendMaxKeyRate: 0:"Highest SslFrontendKeyRate reached on this worker process since started (in SSL keys per second)"
SslFrontendSessionReuse_pct: 0:"Percent of frontend SSL connections which did not require a new key"
SslBackendKeyRate: 0:"Number of SSL keys created on backends in this worker process over the last second"
SslBackendMaxKeyRate: 0:"Highest SslBackendKeyRate reached on this worker process since started (in SSL keys per second)"
SslCacheLookups: 0:"Total number of SSL session ID lookups in the SSL session cache on this worker since started"
SslCacheMisses: 0:"Total number of SSL session ID lookups that didn't find a session in the SSL session cache on this worker since started"
CompressBpsIn: 0:"Number of bytes submitted to HTTP compression in this worker process over the last second"
CompressBpsOut: 0:"Number of bytes out of HTTP compression in this worker process over the last second"
CompressBpsRateLim: 0:"Limit of CompressBpsOut beyond which HTTP compression is automatically disabled"
Tasks: 10:"Total number of tasks in the current worker process (active + sleeping)"
Run_queue: 1:"Total number of active tasks+tasklets in the current worker process"
Idle_pct: 100:"Percentage of last second spent waiting in the current worker thread"
node: wtap.local:"Node name (global.node)"
Stopping: 0:"1 if the worker process is currently stopping, otherwise zero"
Jobs: 14:"Current number of active jobs on the current worker process (frontend connections, master connections, listeners)"
Unstoppable Jobs: 0:"Current number of unstoppable jobs on the current worker process (master connections)"
Listeners: 13:"Current number of active listeners on the current worker process"
ActivePeers: 0:"Current number of verified active peers connections on the current worker process"
ConnectedPeers: 0:"Current number of peers having passed the connection step on the current worker process"
DroppedLogs: 0:"Total number of dropped logs for current worker process since started"
BusyPolling: 0:"1 if busy-polling is currently in use on the worker process, otherwise zero (config.busy-polling)"
FailedResolutions: 0:"Total number of failed DNS resolutions in current worker process since started"
TotalBytesOut: 0:"Total number of bytes emitted by current worker process since started"
BytesOutRate: 0:"Number of bytes emitted by current worker process over the last second"
Now "show info" supports "desc" after the default and "typed" formats,
and "show stat" supports this after the typed format. In both cases
this appends the description for the represented metric between double
quotes. The same could be done for the JSON output but would possibly
require updating the schema first.
Several users have pointed out the non-intuitive nature of some of our
stat/info metrics and suggested adding some help. This patch replaces the
char* arrays with an array of name_desc so that we now have some reserved
room to store a description with each stat or info field. These
descriptions are currently empty and not reported yet.
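The change amounts to the following sketch (INF_NAME comes from the warning
quoted earlier, INF_VERSION is assumed; in this commit the desc fields are
still left empty):

  struct name_desc {
          const char *name;
          const char *desc;
  };

  enum { INF_NAME, INF_VERSION };  /* the real enum is much longer */

  /* previously a plain array of const char * names */
  static const struct name_desc info_fields[] = {
          [INF_NAME]    = { .name = "Name",    .desc = "" },
          [INF_VERSION] = { .name = "Version", .desc = "" },
  };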
Now "show info" and "show stat" can parse "desc" as an output format
modifier that will be passed down the chain to add some descriptions
to the fields depending on the format in use. For now it is not
exploited.
Some functions used to take flags + appctx with flags==appctx.flags,
others neither, others just one of them. Some functions used to have
the flags before the object being dumped (server) while others had
it after (listener). This patch aims at cleaning this up a little bit
by following this principle:
- low-level functions which do not need the appctx take flags only
- medium-level functions which already use the appctx for other
reasons do not keep the flags
- top-level functions which already have the stream-int don't need
the flags nor the appctx.
This flag is used to decide to show the check box in front of a proxy
on the HTML stat page. It is always equal to STAT_ADMIN except when the
proxy has no backend capability (i.e. a pure frontend) or has no server,
in which case it's only used to avoid leaving an empty column at the
beginning of the table. Not only is this pretty useless, but it also
causes the columns not to align well when mixing multiple proxies with
or without servers.
Let's simply always use STAT_ADMIN and get rid of this flag.
Now we only use the appctx flags everywhere in the code, and the uri_auth
flags are read only by the HTTP analyser which presets the appctx ones.
This will make it possible to simplify access to the flags everywhere.
We used to rely on some config flags defined in uri_auth.h set during
parsing, and another set of STAT_* flags defined in stats.h set at run
time, with a somewhat gray area between the two sets. This is confusing
in the stats code as both are called "flags" in various functions and
it's quite hard to know which one describes what.
This patch cleans this up by replacing all ST_* by a newly assigned
value from the STAT_* set so that we can now use unified flags to
describe both the configuration and the current state. There is no
functional change at all.
This flag was added in 1.4-rc1 by commit 329f74d463 ("[BUG] uri_auth: do
not attemp to convert uri_auth -> http-request more than once") to
address the case where two proxies inherit the stats settings from
the defaults instance, and the first one compiles the expression while
the second one uses it. In this case since they use the exact same
uri_auth pointer, only the first one should compile and the second one
must not fail the check. This was addressed by adding an ST_CONVDONE
flag indicating that the expression conversion was completed and didn't
need to be done again. But this is a hack and it becomes cumbersome in
the middle of the other flags which are all relevant to the stats
applet. Let's instead fix it by checking if we're dealing with an
alias of the defaults instance and refrain from compiling this twice.
This allows us to remove the ST_CONVDONE flag.
A typical config requiring this check is :
defaults
mode http
stats auth foo:bar
listen l1
bind :8080
listen l2
bind :8181
Without this (or the previous) check it would complain when checking l2's
validity since the rule was already built.
Both "show info" and "show stat" support the "typed" output format and
the "json" output format. I just never can remind them, which is an
indication that some help is missing.
H2 strongly recommends that clients exclusively use the absolute form
for requests, which contains a scheme, an authority and a path, instead
of the old format involving the Host header and a path. Thus there is
no way to distinguish between a request intended for a proxy and an
origin request, and as such proxied requests are lost.
This patch makes sure to keep the encoding of all absolute form requests
so that the URI is kept end-to-end. If the scheme is http or https, there
is an uncertainty so the request is tagged as a normalized URI so that
the other end (H1) can decide to emit it in origin form as this is by far
the most commonly expected one, and it's certain that quite a number of
H1 setups are not ready to cope with absolute URIs.
There is a direct visible impact of this change, which is that the uri
sample fetch will now return absolute URIs (as they really come on the
wire) whenever these are used. It also means that default http logs will
report absolute URIs.
If a situation is ever met where a client uses H2 to join an H1 proxy
with haproxy in the middle, it will be trivial to add an option asking
the H1 output to use absolute encoding for such requests.
Later we may be able to consider that the normalized URI is the default
output format and stop sending them in origin form unless an option is
set.
Now chaining multiple instances keeps the semantics as far as possible
along the whole chain :
1) H1 to H1
H1:"GET /" --> H1:"GET /" # log: /
H1:"GET http://" --> H1:"GET http://" # log: http://
H1:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
2) H2 to H1
H2:"GET /" --> H1:"GET /" # log: /
H2:"GET http://" --> H1:"GET /" # log: http://
H2:"GET ftp://" --> H1:"GET ftp://" # log: ftp://
3) H1 to H2 to H2 to H1
H1:"GET /" --> H2:"GET /" --> H2:"GET /" --> H1:"GET /"
H1:"GET http://" --> H2:"GET http://" --> H2:"GET http://" --> H1:"GET /"
H1:"GET ftp://" --> H2:"GET ftp://" --> H2:"GET ftp://" --> H1:"GET ftp://"
Thus there is zero loss on H1->H1, H1->H2 or H2->H2, and H2->H1 is
normalized to origin form when ambiguous.
Instead of mapping the Host header field to :authority, we now act
differently if the request is in origin form or in absolute form.
If it's absolute, we extract the scheme and the authority from the
request, fix the path if it's empty, and drop the Host header.
Otherwise we take the scheme from the http/https flags in the HTX
layer, make the URI be the path only, and emit the Host header,
as indicated in RFC7540#8.1.2.3. This makes it possible to distinguish
between absolute and origin requests for H1 to H2 conversions.
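As an illustration of that rule (made-up requests, not actual haproxy output):

  Origin form received on H1:              Emitted on the H2 side:
      GET /img/logo.png HTTP/1.1               :method = GET
      Host: www.example.com                    :scheme = http   (from the HTX http/https flag)
                                               :path   = /img/logo.png
                                               host: www.example.com   (kept, no :authority)

  Absolute form received on H1:            Emitted on the H2 side:
      GET http://www.example.com/ HTTP/1.1     :method = GET
      Host: www.example.com                    :scheme = http
                                               :authority = www.example.com
                                               :path   = /   (Host header dropped)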