The call to flush the compression buffers only needs to be done when
entering the final states or when leaving with missing data. After
that, if trailers are present, they have to be forwarded.
Now that we have valid buffer offsets, we can use them to safely parse the
input and only forward when needed. Thus we can get rid of the
consumed_data accumulator, and the code now works both for chunked and
content-length, even with a server feeding one byte at a time (which
systematically broke the previous one).
It's worth noting that 0<CRLF> must always be sent after end of data
(ie: chunk_len==0), and that the trailing CRLF is sent only in content-length
mode, because in chunked mode we'll have to pass trailers.
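As an illustration, the ending sequence could look like the sketch below;
buffer_put_string() is a stand-in for whatever helper actually emits the
output bytes, not a real HAProxy function :

  static void compression_finish(struct buffer *out, int input_is_chunked)
  {
      /* the last chunk marker always follows the end of data */
      buffer_put_string(out, "0\r\n");            /* chunk_len == 0 */

      if (!input_is_chunked) {
          /* content-length mode: no trailers, terminate here */
          buffer_put_string(out, "\r\n");
      }
      /* in chunked mode, the trailers and their final CRLF are
       * forwarded from the input instead of being emitted here */
  }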
This is basically a revert of commit 667c2a3 ("BUG/MAJOR: http: compression
still has defects on chunked responses").
The latest changes applied to message pointers should have got rid of all
the issues that were making the compression of partial chunks unreliable.
Currently, we forward headers only if the incoming message is still before
HTTP_MSG_CHUNK_SIZE, otherwise they'll be considered as data. In practice
this is always true for the response since there's no data inspection, and
for the request there is no compression so there's no problem with forwarding
them as data.
But the principle is incorrect and will make it difficult to later add data
processing features. So better fix it now.
The new principle is simple :
- if headers were not yet forwarded, forward them now.
- while doing so, check if we need to update the state
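A minimal sketch of that principle, assuming HAProxy's channel_forward()
and the msg->sov semantics described in the surrounding messages (the
function name is illustrative) :

  static void http_forward_pending_headers(struct channel *chn,
                                           struct http_msg *msg)
  {
      if (msg->sov > 0) {
          /* headers were not yet forwarded: forward them now */
          channel_forward(chn, msg->sov);
          msg->next -= msg->sov;   /* keep the parse position consistent */
          msg->sov = 0;
          /* this is also where the state must be updated if the
           * forward crossed a state boundary */
      }
  }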
If for some reason the compression returns an error, the compression
context is not deinitialized, which also means that any pending data are
not flushed and could be lost, especially in the chunked-encoded case.
No backport is needed.
When a parsing error was encountered in a chunked response, we failed
to properly deinitialize the compression context. There was no impact
till now since compression of chunked responses was disabled. No backport
is needed.
Thanks to the last updates on the message pointers, it is now safe again to
enable forwarding of the request headers while waiting for the connection to
complete because we know how to safely rewind this part.
So this patch slightly modifies what was done in commit 80a92c0 ("BUG/MEDIUM:
http: don't start to forward request data before the connect") to let up to
msg->sov bytes be forwarded when waiting for the connection. The resulting
effect is that a POST request may now be sent with the connect's ACK, which
still saves a packet and may even be useful later when TFO is supported.
In order to avoid abusively relying on buf->o to guess how many bytes to
rewind during a redispatch, we now clear msg->sov. Thus the meaning of this
field is exactly "how many bytes of headers are left to be forwarded". It
is still possible to rewind because msg->eoh + msg->eol equals that value
before scheduling the forwarding, so we can always subtract them.
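A sketch of the resulting arithmetic (the helper name is hypothetical;
this replaces the buf->o-based approximation used by the helpers
introduced in the messages below) :

  static inline int hdr_bytes_forwarded(const struct http_msg *msg)
  {
      /* msg->eoh + msg->eol is immutable and msg->sov counts the
       * header bytes left to forward, so the amount to rewind can
       * always be recomputed without looking at buf->o */
      return msg->eoh + msg->eol - msg->sov;
  }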
http_body_rewind() returns the number of bytes to rewind before buf->p to
find the message's body. It relies on http_hdr_rewind() to find the headers'
beginning, then accounts for the msg->eoh + msg->eol header bytes, which are
always safe.
http_data_rewind() does the same to get the beginning of the data, which
differs from the above when a chunk is present. It builds on the function
above and additionally accounts for msg->sol.
The purpose is to centralize further ->sov changes aiming at avoiding
to rely on buf->o.
http_uri_rewind() returns the number of bytes to rewind before buf->p to
find the URI. It relies on http_hdr_rewind() to find the beginning and
is just here to simplify operations.
The purpose is to centralize further ->sov changes aiming at avoiding
to rely on buf->o.
http_hdr_rewind() returns the number of bytes to rewind before buf->p to
find the beginning of headers. At the moment it's not exact as it still
relies on buf->o, assuming that no other data from a past message were
pending there, but this is what was done until now.
The purpose is to centralize further ->sov changes aiming at avoiding
to rely on buf->o.
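Taken together, the helpers described here and above might compose as in
the following sketch (illustrative only, not the literal source;
msg->sl.rq.u is assumed to hold the URI offset, and http_hdr_rewind()
uses the buf->o approximation just mentioned). A rewind counts bytes to
step back before buf->p, so moving forward inside the message subtracts
from the rewind amount :

  static inline int http_hdr_rewind(const struct http_msg *msg)
  {
      /* approximation: assume all pending output bytes belong to
       * this message's already-forwarded headers */
      return msg->chn->buf->o;
  }

  static inline int http_uri_rewind(const struct http_msg *msg)
  {
      /* the URI starts msg->sl.rq.u bytes past the headers */
      return http_hdr_rewind(msg) - msg->sl.rq.u;
  }

  static inline int http_body_rewind(const struct http_msg *msg)
  {
      /* the body starts msg->eoh + msg->eol bytes past the headers */
      return http_hdr_rewind(msg) - msg->eoh - msg->eol;
  }

  static inline int http_data_rewind(const struct http_msg *msg)
  {
      /* in chunked mode, data start msg->sol bytes past the body */
      return http_body_rewind(msg) - msg->sol;
  }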
http_body_bytes() returns the number of bytes of the current message body
present in the buffer. It is compatible with being called before and after
the headers are forwarded.
This is done to centralize further ->sov changes.
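Consistent with the sketches above, the computation might look like this
(simplified; the real function may also cap the result at the advertised
body length) :

  static inline int http_body_bytes(const struct http_msg *msg)
  {
      /* the body begins http_body_rewind() bytes before buf->p and
       * buffered input ends at buf->p + buf->i, so this is valid
       * both before and after the headers are forwarded */
      return msg->chn->buf->i + http_body_rewind(msg);
  }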
We used to have msg->sov updated for every chunk that was parsed. The issue
is that we want to be able to rewind after chunks were parsed in case we need
to redispatch a request and perform a new hash on the request or insert a
different server header name.
Currently, msg->sov and msg->next make parallel progress. We reached a point
where they're always equal because msg->next is initialized from msg->sov,
and has msg->sov's value subtracted each time msg->sov bytes are forwarded.
So we can now ensure that msg->sov can always be replaced by msg->next for
every state after HTTP_MSG_BODY where it is used as a position counter.
This allows us to keep msg->sov untouched whatever the number of chunks that
are parsed, as is needed to extract data from a POST request (eg: url_param).
However, we still need to know the starting position of the data relative to
the body, which differs by the chunk size length. We use msg->sol for this
since it's now always zero and unused in the body.
So with this patch, we have the following situation :
- msg->sov = msg->eoh + msg->eol = size of the headers including last CRLF
- msg->sol = length of the chunk size if any. So msg->sov + msg->sol = DATA.
- msg->next corresponds to the byte being inspected based on the current
state and is always >= msg->sov before starting to forward anything.
Since sov and next are updated in case of header rewriting, a rewind will
fix them both when needed. Of course, ->sol has no reason for changing in
such conditions, so it's fine to keep it relative to msg->sov.
In theory, even if a redispatch has to be performed, a transformation
occurring on the request would still work because the data moved would
still appear at the same place relative to buf->p.
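These invariants can be summarized as a hypothetical sanity check
(illustration only, not actual HAProxy code; valid before forwarding
is scheduled) :

  #include <assert.h>

  static void check_msg_offsets(const struct http_msg *msg)
  {
      /* headers including the last CRLF */
      assert(msg->sov == msg->eoh + msg->eol);
      /* msg->sol is the chunk size length, zero in content-length
       * mode, so DATA always starts at msg->sov + msg->sol */
      /* the inspected byte never precedes the body before forwarding */
      assert(msg->next >= msg->sov);
  }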
This function is only a parser, it must start to parse at the next character
and only update the outgoing relative pointers, but not expect the buffer to
be aligned with the next byte to be parsed.
It's important to fix this otherwise we cannot use this function to parse
chunks without starting to forward data.
There are still some pending issues in the gzip compressor, and fixing
them requires a better handling of intermediate parsing states.
Another issue to deal with is the rewinding of a buffer during a redispatch
when a load balancing algorithm involves L7 data because the exact amount of
data to rewind is not clear. At the moment, this is handled by unwinding all
pending data, which cannot work in responses due to pipelining.
Last, having a first analysis which parses the body and another one which
restarts from where the parsing was left is wrong. Right now it only works
because we never both parse and transform in the same direction. But that
is wrong anyway.
In order to address the first issue, we'll have to use msg->eoh + msg->eol
to find the end of headers, and we still need to store the information about
the forwarded header length somewhere (msg->sol might be reused for this).
msg->sov may only be used for the start of data and not for subsequent chunks
if possible. This first implies that we stop sharing it with header length,
and stop using msg->sol there. In fact we don't need it already as it is
always zero when reaching the HTTP_MSG_BODY state. It was only updated to
reflect a copy of msg->sov.
So now as a first step into that direction, this patch ensures that msg->sol
is never re-assigned after being set to zero and is not used anymore when
we're dealing with HTTP processing and forwarding. We'll later reuse it
differently but for now it's secured.
The patch does nothing magic, it only removes msg->sol everywhere it was
already zero and avoids setting it. In order to keep the sov-sol difference,
it now resets sov after forwarding data. In theory there's no problem here,
but the patch is still tagged major because that code is complex.
One of the issues we face when we need to either forward headers only
before compressing, or rewind the stream during a redispatch is to know
the proper length of the request headers. msg->eoh always has the total
length up to the last CRLF, and we never know whether the request ended
with a single LF or a standard CRLF. This makes it hard to rewind the
headers without explicitly checking the bytes in the buffer.
Instead of doing so, we now use msg->eol to carry the length of the last
CRLF (either 1 or 2). Since it is not modified at all after HTTP_MSG_BODY,
and was only left in an undefined state, it is safe to use at any moment.
Thus, the complete header length to forward or to rewind now is always
msg->eoh + msg->eol.
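A hypothetical helper for the complete header length thus reduces to this
(the name is illustrative) :

  static inline int http_hdr_total_len(const struct http_msg *msg)
  {
      /* msg->eol is 1 for a lone LF, 2 for a full CRLF */
      return msg->eoh + msg->eol;
  }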
Content-length encoded message bodies are trivial to deal with, but
chunked-encoded will require improvements, so let's separate the code
flows between the two to ease next steps. The behaviour is not changed
at all, the code is only rearranged.
This is the continuation of previous patch. Now that full buffers are
not rejected anymore, let's wait for at least the advertised chunk or
body length to be present or the buffer to be full. When either
condition is met, the message processing can go forward.
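A minimal sketch of that condition, using HAProxy's buffer fields
(i = input bytes, o = output bytes) and ignoring the buffer reserve for
simplicity :

  static int may_process_body(const struct http_msg *msg,
                              const struct buffer *buf)
  {
      /* content-length case shown; the chunked case would compare
       * against the advertised chunk length instead */
      if (buf->i >= msg->body_len)
          return 1;     /* advertised length fully buffered */
      if (buf->i + buf->o >= buf->size)
          return 1;     /* buffer is full: go forward anyway */
      return 0;         /* keep waiting for more data */
  }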
Thus we don't need to use url_param_post_limit anymore, which was passed
in the configuration as an optional <max_wait> parameter after the
"check_post" value. This setting was necessary when the feature was
implemented because there was no support for parsing message bodies.
The argument is now silently ignored if set in the configuration.
http_process_request_body() currently expects a request buffer containing
the complete expected message body. This was done in order to support load
balancing on a unique POST parameter but the way it's done still suffers
from some limitations. One of them is that there is no guarantee that the
accepted message will contain the appropriate string if it starts with
another parameter. But at the same time it will reject a message when the
buffer is full.
So as a first step, we don't reject anymore message bodies that fill the
buffer.
Use HAProxy's exit status as the systemd wrapper's exit status instead
of always returning EXIT_SUCCESS, permitting the use of systemd's
`Restart = on-failure' logic.
Use standard error for logging messages, as it seems that this gets
messages to the systemd journal more reliably. Also use systemd's
support for specifying log levels via stderr to apply different levels
to messages.
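For example, using the sd-daemon style level prefixes that journald
understands on stderr (a sketch, not the wrapper's exact code) :

  #include <stdio.h>

  #define SD_ERR    "<3>"    /* see sd-daemon(3) */
  #define SD_NOTICE "<5>"

  static void wrapper_log(int is_error, const char *msg)
  {
      fprintf(stderr, "%s%s\n", is_error ? SD_ERR : SD_NOTICE, msg);
  }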
Re-execute the systemd wrapper on SIGUSR2 and before reloading HAProxy,
making it possible to load a completely new version of HAProxy
(including a new version of the systemd wrapper) gracefully.
Since the wrapper accepts no command-line arguments of its own,
re-execution is signaled using the HAPROXY_SYSTEMD_REEXEC environment
variable.
This is primarily intended to help seamless upgrades of distribution
packages.
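The re-execution path might look like this sketch (wrapper_argv is an
assumed copy of the original argv saved at startup) :

  #include <stdlib.h>
  #include <unistd.h>

  extern char **wrapper_argv;   /* assumed: saved in main() */

  static void reexec_wrapper(void)
  {
      /* tell the next instance that this is a re-execution */
      setenv("HAPROXY_SYSTEMD_REEXEC", "1", 1);
      execv("/proc/self/exe", wrapper_argv);
      /* only reached if execv() fails */
  }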
Since commit 4d4149c ("MEDIUM: counters: support passing the counter
number as a fetch argument"), the sample fetch sc_tracked(num) became
equivalent to sc[0-9]_tracked, by using the same smp_fetch_sc_tracked()
function.
This was theoretically made possible after the series of changes starting
with commit a65536ca ("MINOR: counters: provide a generic function to
retrieve a stkctr for sc* and src."). Unfortunately, while all other
functions were changed to use the generic primitive smp_fetch_sc_stkctr(),
smp_fetch_sc_tracked() was forgotten and is not able to differentiate
between sc_tracked, src_tracked and sc[0-9]_tracked. The resulting mess is
that if sc_tracked is used, the counter number is assumed to be 47 because
that's what remains after subtracting "0" from char "_".
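The bogus 47 is easy to reproduce in a standalone demo of the old parsing
assumption (the keyword and offset here are only illustrative) :

  #include <stdio.h>

  int main(void)
  {
      const char *kw = "sc_tracked";
      /* the old code read a digit at offset 2, which only works
       * for sc0_tracked..sc9_tracked */
      printf("%d\n", kw[2] - '0');   /* prints 47: '_' - '0' */
      return 0;
  }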
Fix this by simply relying on the generic function as should have been
done. The bug was introduced in 1.5-dev20. No backport is needed.
If the unique-id value is missing, the build_logline() function dumps
garbage. This is because the lf_text() function is bypassed. That function
is responsible for dumping '-' when the value is not present, and for
setting the '"' quotes around the displayed value.
This fixes the bug reported by Julien Vehent.
language(<value[;value[;value[;...]]]>[,<default>])
Returns the value with the highest q-factor from a list as
extracted from the "accept-language" header using "req.fhdr".
Values with no q-factor have a q-factor of 1. Values with a
q-factor of 0 are dropped. Only values which belong to the
list of semi-colon delimited <values> will be considered. If
no value matches the given list and a default value is
provided, it is returned. Note that language names may have
a variant after a dash ('-'). If this variant is present in
the list, it will be matched, but if it is not, only the base
language is checked. The match is case-sensitive, and the
output string is always one of those provided in arguments.
The ordering of arguments is meaningless, only the ordering
of the values in the request counts, as the first value among
multiple sharing the same q-factor is used.
Example :
# this configuration switches to the backend matching a
# given language based on the request :
acl de req.fhdr(accept-language),language(de;es;fr;en) de
acl es req.fhdr(accept-language),language(de;es;fr;en) es
acl fr req.fhdr(accept-language),language(de;es;fr;en) fr
acl en req.fhdr(accept-language),language(de;es;fr;en) en
use_backend german if de
use_backend spanish if es
use_backend french if fr
use_backend english if en
default_backend choose_your_language
The function addr_to_stktable_key() doesn't consider the expected
type of key. If the stick table key is based on IPv6 addresses
and the input is IPv4, the returned key is an IPv4 address and its
length is 4 bytes, while a 16-byte key is expected.
This patch considers the expected key type and tries to convert IPv4 to
IPv6 and IPv6 to IPv4 according to the expected key.
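For instance, the IPv4-to-IPv6 direction can use the RFC 4291 v4-mapped
form, as in this sketch (the reverse direction can only succeed for such
mapped addresses) :

  #include <string.h>
  #include <netinet/in.h>

  static void map_v4tov6(struct in6_addr *v6, const struct in_addr *v4)
  {
      static const unsigned char pfx[12] =
          { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff };

      memcpy(v6->s6_addr, pfx, sizeof(pfx));      /* ::ffff: prefix */
      memcpy(v6->s6_addr + 12, &v4->s_addr, 4);   /* IPv4 payload */
  }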
This fixes the bug reported by Apollon Oikonomopoulos.
This bug was introduced somewhere in the 1.5-dev process.
Lukas reported another OpenBSD complaint about this use of sprintf() that
I missed :
src/ssl_sock.o(.text+0x2a79): In function `bind_parse_crt':
src/ssl_sock.c:3015: warning: sprintf() is often misused, please use snprintf()
This one was even easier to handle. Note that some of these calls could
be simplified by checking the snprintf output size instead of doing the
preliminary size computation.
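The suggested simplification relies on snprintf() reporting the would-be
output length, as in this sketch (the function and names are illustrative) :

  #include <stdio.h>

  static int append_path(char *dst, size_t size,
                         const char *dir, const char *file)
  {
      int ret = snprintf(dst, size, "%s/%s", dir, file);

      /* ret >= size means the output would not fit */
      return (ret < 0 || (size_t)ret >= size) ? -1 : ret;
  }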
This patch adds standardized (RFC 2409 / RFC 3526)
DH parameters with prime lengths of 1024, 2048, 3072, 4096, 6144 and
8192 bits, based on the private key size.
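A sketch of what such a selection rule might look like (the thresholds
are illustrative, not necessarily the patch's exact mapping) :

  static int dh_prime_bits_for_key(int key_bits)
  {
      /* pick the standardized group matching the key size */
      if (key_bits >= 8192) return 8192;
      if (key_bits >= 6144) return 6144;
      if (key_bits >= 4096) return 4096;
      if (key_bits >= 3072) return 3072;
      if (key_bits >= 2048) return 2048;
      return 1024;
  }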
When compiled with USE_GETADDRINFO, make sure we use getaddrinfo(3) to
perform name lookups. On default dual-stack setups this will change the
behavior of using IPv6 first. Global configuration option
'nogetaddrinfo' can be used to revert to the deprecated gethostbyname(3).
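A minimal getaddrinfo(3)-based lookup, sketching why dual-stack hosts may
now resolve to their IPv6 address first :

  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  static int resolve_first(const char *name, struct sockaddr_storage *ss)
  {
      struct addrinfo hints, *res;

      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;      /* let the resolver order the
                                         * families, often IPv6 first */
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(name, NULL, &hints, &res) != 0)
          return -1;
      memcpy(ss, res->ai_addr, res->ai_addrlen);
      freeaddrinfo(res);
      return 0;
  }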
Certain implementations (for example ksh/OpenBSD) prefix the
'wc -l' output with whitespaces. This breaks the build since
689e4d733 ("BUILD: simplify the date and version retrieval in
the makefile").
Fix this by piping the wc output into tr -dc '0-9'.
A workaround is to build with IGNOREGIT=1.
HAProxy-1.4 is affected as well.
OpenBSD complains this way due to strncat() :
src/haproxy-systemd-wrapper.o(.text+0xd5): In function `spawn_haproxy':
src/haproxy-systemd-wrapper.c:33: warning: strcat() is almost always misused, please use strlcat()
In fact, the code before strncat() here is wrong, because it may
dereference a NULL if /proc/self/exe is not readable. So fix it
and get rid of strncat() at the same time.
No backport is needed.
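The safer pattern is to check readlink()'s result before using it, as in
this sketch (not the wrapper's literal code) :

  #include <unistd.h>

  static int get_self_path(char *dst, size_t size)
  {
      ssize_t len;

      if (size == 0)
          return -1;
      len = readlink("/proc/self/exe", dst, size - 1);
      if (len < 0)
          return -1;    /* /proc/self/exe not readable */
      dst[len] = '\0';  /* readlink() does not terminate the string */
      return 0;
  }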
OpenBSD complains about this use of sprintf() :
src/proto_http.o(.text+0xb0e6): In function `http_process_request':
src/proto_http.c:4127: warning: sprintf() is often misused, please use snprintf()
Here there's no risk as the strings are way shorter than the buffer size
but let's fix it anyway.
OpenBSD complains about our use of sprintf() here :
src/checks.o(.text+0x44db): In function `process_chk':
src/checks.c:766: warning: sprintf() is often misused, please use snprintf()
This case was not really clean since the introduction of global.node BTW.
Better change the API to support a size argument in the function and enforce
the limit.
A few occurrences of sprintf() were causing harmless warnings on OpenBSD :
src/cfgparse.o(.text+0x259e): In function `cfg_parse_global':
src/cfgparse.c:1044: warning: sprintf() is often misused, please use snprintf()
These ones were easy to get rid of, so better do it.
OpenBSD complains about the use of sprintf in human_time() :
src/standard.o(.text+0x1c40): In function `human_time':
src/standard.c:2067: warning: sprintf() is often misused, please use snprintf()
We can easily get around this by having a pointer to the end of the string and
using snprintf() instead.
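The pattern boils down to tracking the end of the destination and bounding
every snprintf() by the remaining room, as in this simplified sketch (not
the actual human_time() code) :

  #include <stdio.h>

  static void fmt_human_time(char *dst, size_t size, unsigned int sec)
  {
      char *p = dst;
      char *end = dst + size;

      p += snprintf(p, end - p, "%ud", sec / 86400);
      if (p < end)
          p += snprintf(p, end - p, "%uh", (sec % 86400) / 3600);
      if (p < end)
          p += snprintf(p, end - p, "%um", (sec % 3600) / 60);
      if (p < end)
          snprintf(p, end - p, "%us", sec % 60);
  }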
OpenBSD complains about our use of strcpy() in standard.c. The checks
were OK and we didn't fall into the category of "almost always misused",
but it's very simple to fix it so better do it before a problem happens.
src/standard.o(.text+0x26ab): In function `str2sa_range':
src/standard.c:718: warning: strcpy() is almost always misused, please use strlcpy()
Commits 98634f0c7b, ed66c297c2 and d5f624dde7 updated the main
makefile with new objects src/hash.o, src/pattern.o and src/map.o,
but forgot to update Makefile.bsd and Makefile.osx.
This will fix some of the build failures reported by Jorge Severino
under OpenBSD.
SNI is a TLS extension and requires at least TLSv1.0 or later, however
the version in the record layer may be SSLv3, not necessarily TLSv1.0.
GnuTLS for example does this.
Relax the record layer version check in smp_fetch_ssl_hello_sni() to
allow fetching SNI values from clients indicating SSLv3 in the record
layer (maintaining the TLSv1.0+ check in the actual handshake version).
This was reported and analyzed by Pravin Tatti.
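The relaxed check can be sketched like this, with data pointing to the
start of the ClientHello record (offsets follow the TLS record and
handshake headers; simplified from the real fetch) :

  static int hello_version_ok(const unsigned char *data, int len)
  {
      if (len < 11 || data[0] != 0x16)
          return 0;   /* not a handshake record */
      if (data[1] != 0x03)
          return 0;   /* record layer: SSLv3 (3.0) or any TLS */
      if (data[9] != 0x03 || data[10] < 0x01)
          return 0;   /* the hello itself must be >= TLSv1.0 */
      return 1;
  }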
The TLS unique id, or unique channel binding, is a byte string that can be
pulled from a TLS connection and it is unique to that connection. It is
defined in RFC 5929 section 3. The value is used by various upper layer
protocols as part of an extra layer of security. For example XMPP
(RFC 6120) and EST (RFC 7030).
Add the ssl_fc_unique_id keyword and corresponding sample fetch method.
Value is retrieved from OpenSSL and base64 encoded as described in RFC
5929 section 3.
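With OpenSSL, the tls-unique value is the first Finished message of the
handshake, so which side's Finished to read depends on session resumption.
A sketch as seen from the server side, with the base64 encoding omitted :

  #include <openssl/ssl.h>

  static size_t get_tls_unique(SSL *ssl, unsigned char *buf, size_t len)
  {
      /* RFC 5929 section 3: the first Finished message sent in the
       * handshake; on resumption the server sends it first */
      if (SSL_session_reused(ssl))
          return SSL_get_finished(ssl, buf, len);
      return SSL_get_peer_finished(ssl, buf, len);
  }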
Constructions such as sc0_get_gpc0(foo) allow to look up the same key as
the current key but in an alternate table. A check was missing to ensure
we already have a key, resulting in a crash if this lookup is performed
before the associated track-sc rule.
This bug was reported on the mailing list by Neil@iamafreeman and
narrowed down further by Lukas Tribus and Thierry Fournier.
This bug was introduced in 1.5-dev20 by commit "0f791d4 MEDIUM: counters:
support looking up a key in an alternate table".
RFC 1945 (§4.1) defines an HTTP/0.9 request ("Simple-Request") as:
Simple-Request = "GET" SP Request-URI CRLF
HAProxy tries to automatically upgrade HTTP/0.9 requests to
HTTP/1.0, by appending "HTTP/1.0" to the request and setting the
Request-URI to "/" if it was not present. The latter however is
RFC-incompatible, as HTTP/0.9 requests must already have a Request-URI
according to the definition above. Additionally,
http_upgrade_v09_to_v10() does not check whether the request method is
indeed GET (the mandatory method for HTTP/0.9).
As a result, any single- or double-word request line is regarded as a
valid HTTP request. We fix this by failing in http_upgrade_v09_to_v10()
if the request method is not GET or the request URI is not present.
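The resulting validation can be sketched as follows, with line holding the
raw request line (illustrative, not the exact parser code) :

  #include <string.h>

  static int valid_v09_request(const char *line, size_t len)
  {
      /* Simple-Request = "GET" SP Request-URI CRLF */
      if (len < 5 || memcmp(line, "GET ", 4) != 0)
          return 0;   /* GET is the only HTTP/0.9 method */
      /* a Request-URI must follow the method */
      return line[4] != '\r' && line[4] != '\n';
  }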
Commit 6f7203d ("MEDIUM: pattern: add prune function") introduced an
array of functions pat_prune_fcts[] but unfortunately declared it in
pattern.h without marking it "extern", resulting in each file including
it having its own copy.
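The usual fix pattern is to declare the array in the header and define it
once in a single .c file (the signature and the PAT_MATCH_NUM bound below
are illustrative) :

  /* pattern.h: declaration only, no storage allocated per includer */
  extern void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);

  /* pattern.c: the single definition */
  void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);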