Till now we used to return a pointer to a rule, but that makes it
complicated to later add support for registering new actions which
may fail. For example, the redirect may fail if the response is too
large to fit into the buffer.
So instead let's return a verdict. But we needed the pointer to the
last rule to get the address of a redirect and to get the realm used
by the auth page. So these pieces of code have moved into the function
and they produce a verdict.
Both use exactly the same mechanism, except for the choice of the
default realm to be emitted when none is selected. This can be achieved
by simply comparing the ruleset with the stats ruleset for now. It
achieves a significant code reduction and, further, removes the caller's
dependence on the pointer to the final rule.
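As an illustration, here is a minimal sketch of what such a verdict could
look like; the enum and its value names are hypothetical, not the exact
ones used in the source:

    enum rule_verdict {
            RULE_VERDICT_CONTINUE,  /* no intercepting rule matched, keep going  */
            RULE_VERDICT_DENY,      /* a deny rule matched, a 403 was emitted    */
            RULE_VERDICT_AUTH,      /* an auth rule matched, a 401/407 was built */
            RULE_VERDICT_REDIRECT,  /* a redirect was built; this may fail if it
                                     * does not fit into the response buffer     */
            RULE_VERDICT_ABORT,     /* an action failed, abort the request       */
    };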
The "block" rules are redundant with http-request rules because they
are performed immediately before and do exactly the same thing as
"http-request deny". Moreover, this duplication has led to a few
minor stats accounting issues fixed lately.
Instead of keeping the two rule sets, we now build a list of "block"
rules that we compile as "http-request block" and that we later insert
at the beginning of the "http-request" rules.
The only user-visible change is that in case of a parsing error, the
config parser will now report "http-request block rule" instead of
"blocking condition".
Some of the remaining interleaving of request processing after the
http-request rules can now safely be removed, because all remaining
actions are mutually exclusive.
So we can group together all the actions related to an intercepting
rule, then proceed with stats, then with req*.
We still keep an issue with stats vs reqrep which forces us to
keep the stats split in two (detection and action). Indeed, from the
beginning, stats are detected before rewriting and not after. But a
reqdeny rule would stop stats, so in practice we have to first detect,
then perform the action. Maybe we'll be able to kill this in version
1.6.
Till now the Connection header was processed in the middle of the http-request
rules and some reqadd rules. It used to force some http-request actions to be
cut in two parts.
Now with keep-alive, not only does that no longer make any sense, but it's
becoming a total mess, especially since we need to know the header contents
before proceeding with most actions.
The real reason it was not moved earlier is that the "block" or "http-request"
rules can see a different version if some fields are changed there. But that
is already not reliable anymore since the values observed by the frontend
differ from those in the backend.
This patch is the equivalent of commit f118d9f ("REORG: http: move HTTP
Connection response header parsing earlier") but for the request side. It
has been tagged MEDIUM as it could theoretically slightly affect some setups
relying on corner cases or invalid setups, though this does not make real
sense and is highly unlikely.
The session's backend request counters were incremented after the block
rules while these rules could increment the session's error counters,
meaning that we could have more errors than requests reported in a stick
table! Commit 5d5b5d8 ("MEDIUM: proto_tcp: add support for tracking L7
information") is the most responsible for this.
This bug is 1.5-specific and does not need any backport.
"block" rules used to build the whole response and forgot to increment
the denied_req counters. By jumping to the general "deny" label created
in previous patch, it's easier to fix this.
The issue was already present in 1.3 and remained unnoticed, in part
because few people use "block" nowadays.
Continue the cleanup of http-request post-processing to remove some
of the interleaved tests. Here we set up a few labels to deal with
the deny and tarpit actions and avoid interleaved ifs.
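Purely as an illustration of the pattern (names and status codes are
simplified, not taken from the source):

    #include <stdio.h>

    enum verdict { VERDICT_CONTINUE, VERDICT_DENY, VERDICT_TARPIT };

    /* All rules are evaluated first; a single jump to a shared label then
     * performs the response and the accounting exactly once instead of
     * repeating interleaved ifs along the processing path.
     */
    static int handle_verdict(enum verdict v)
    {
            if (v == VERDICT_DENY)
                    goto deny;
            if (v == VERDICT_TARPIT)
                    goto tarpit;
            return 0;                /* continue normal processing */

     deny:
            printf("emit 403, update denied_req counters, abort the request\n");
            return -1;

     tarpit:
            printf("hold the connection for the tarpit timeout, then respond\n");
            return -1;
    }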
We still have a plate of spaghetti in the request processing rules.
All http-request rules are executed at once, then some responses are
built interleaved with other rules that used to be there in the past.
Here, reqadd is executed after an http-req redirect rule is *decided*,
but before it is *executed*.
So let's match the doc and the config checks, and actually perform the
redirect completely before the reqadd.
There's no point in open-coding the sending of 100-continue in
the stats initialization code, better simply rely on the function
designed to process the message body which already does it.
Commit 844a7e7 ("[MEDIUM] http: add support for proxy authentication")
merged in v1.4-rc1 added the ability to emit a status code 407 in auth
responses, but forgot to set the same status in the logs, which still
contain 401.
The bug is harmless, no backport is needed.
A switch from a TCP frontend to an HTTP backend initializes the HTTP
transaction. txn->hdr_idx.size is used by hdr_idx_init() but not
necessarily initialized yet here, because the first call to hdr_idx_init()
is in fact placed in http_init_txn(). Moving it before the call is
enough to fix it. We also remove the useless extra confusing call
to hdr_idx_init().
The bug was introduced in 1.5-dev8 with commit ac1932d ("MEDIUM:
tune.http.maxhdr makes it possible to configure the maximum number
of HTTP headers"). No backport to stable is needed.
Last fix did address the issue for inlined patterns, but it was not
enough because the flags are lost as well when updating patterns
dynamically over the CLI.
Also if the same file was used once with -i and another time without
-i, their references would have been merged and both would have used
the same matching method.
It appears that the patterns have two types of flags. The first
type is relative to the pattern matching, and the second is
relative to the pattern storage. The pattern matching flags are
the same for all the patterns of one expression, so now they are
stored in the expression. The storage flags are information
returned by the pattern matching function. This information is
relative to each entry and is stored in the "struct pattern".
Now, the expression's matching flags are forwarded to the parse
and index functions. These flags are stored during the
configuration parsing, and they are used during the parse and
index operations.
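To illustrate the split, here is a rough sketch of where each kind of flag
lives; the structure and field names below are assumptions for illustration,
not the exact ones from the source:

    /* Matching flags: set at configuration parsing time, identical for
     * every pattern of a given expression, so they live in the expression
     * and are forwarded to the parse and index functions. */
    struct pattern_expr_sketch {
            unsigned int mflags;    /* e.g. case-insensitive (-i), no DNS (-n) */
            /* ... list of indexed patterns ... */
    };

    /* Storage flags: per-entry information returned by the pattern
     * matching function, so they live in each stored pattern. */
    struct pattern_sketch {
            unsigned int sflags;
            /* ... the stored value itself ... */
    };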
This issue was introduced in dev23 with the major pattern rework,
and is a continuation of commit a631fc8 ("BUG/MAJOR: patterns: -i
and -n are ignored for inlined patterns"). No backport is needed.
These flags are only passed to pattern_read_from_file() which
loads the patterns from a file. The functions used to parse the
patterns from the current line do not provide the means to pass
the pattern flags so they're lost.
This issue was introduced in dev23 with the major pattern rework,
and was reported by Graham Morley. No backport is needed.
Dmitry Sivachenko reported that nice warning :
src/pattern.c:2243:43: warning: if statement has empty body [-Wempty-body]
if (&ref2->list == &pattern_reference);
^
src/pattern.c:2243:43: note: put the semicolon on a separate line to silence
this warning
It was merged as is with the code from commit af5a29d ("MINOR: pattern:
Each pattern is identified by unique id").
So it looks like we can reassign an ID which is still in use because of
this.
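For context, an illustrative reconstruction of the intended logic
(simplified, not the actual code): the candidate ID may only be used when
the scan really reached the end of the list, which is exactly the check
the stray semicolon defeated.

    /* Scan the existing references for the candidate ID; it is free only
     * if the loop ran to completion without finding it. With the trailing
     * semicolon, the assignment below ran unconditionally, so an ID still
     * in use could be handed out again.
     */
    list_for_each_entry(ref2, &pattern_reference, list)
            if (ref2->unique_id == candidate_id)
                    break;

    if (&ref2->list == &pattern_reference)
            ref->unique_id = candidate_id;  /* end reached: ID really unused */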
Released version 1.5-dev24 with the following main changes :
- MINOR: pattern: find element in a reference
- MEDIUM: http: ACL and MAP updates through http-(request|response) rules
- MEDIUM: ssl: explicitly log failed handshakes after a heartbeat
- DOC: Full section dedicated to the converters
- MEDIUM: http: register http-request and http-response keywords
- BUG/MINOR: compression: correctly report incoming byte count
- BUG/MINOR: http: don't report server aborts as client aborts
- BUG/MEDIUM: channel: bi_putblk() must not wrap before the end of buffer
- CLEANUP: buffers: remove unused function buffer_contig_space_with_res()
- MEDIUM: stats: reimplement HTTP keep-alive on the stats page
- BUG/MAJOR: http: fix timeouts during data forwarding
- BUG/MEDIUM: http: 100-continue responses must process the next part immediately
- MEDIUM: http: move skipping of 100-continue earlier
- BUILD: stats: let gcc know that last_fwd cannot be used uninitialized...
- CLEANUP: general: get rid of all old occurrences of "session *t"
- CLEANUP: http: remove the useless "if (1)" inherited from version 1.4
- BUG/MEDIUM: stats: mismatch between behaviour and doc about front/back
- MEDIUM: http: enable analysers to have keep-alive on stats
- REORG: http: move HTTP Connection response header parsing earlier
- MINOR: stats: always emit HTTP/1.1 in responses
- MINOR: http: add capture.req.ver and capture.res.ver
- MINOR: checks: add a new global max-spread-checks directive
- BUG/MAJOR: http: fix the 'next' pointer when performing a redirect
- MINOR: http: implement the max-keep-alive-queue setting
- DOC: fix alphabetic order of tcp-check
- MINOR: connection: add a new error code for SSL with heartbeat
- MEDIUM: ssl: implement a workaround for the OpenSSL heartbleed attack
- BUG/MEDIUM: Revert "MEDIUM: ssl: Add standardized DH parameters >= 1024 bits"
- BUILD: http: remove a warning on strndup
- BUILD: ssl: avoid a warning about conn not used with OpenSSL < 1.0.1
- BUG/MINOR: ssl: really block OpenSSL's response to heartbleed attack
- MINOR: ssl: finally catch the heartbeats missing the padding
The previous patch only focused on parsing the packet correctly and
blocking it, so it relaxed one test on the packet length. The difference
is not usable for attacking, but the logs will not report an attack for
such cases, which is probably bad. Better to report all known invalid
packet cases.
Recent commit f51c698 ("MEDIUM: ssl: implement a workaround for the
OpenSSL heartbleed attack") did not always work well, because OpenSSL
is fun enough not to test for errors before sending data... So the
output sometimes contained some data.
The OpenSSL code relies on the max_send_fragment value to limit the
packet length. The code ensures that a value of zero will result in
not a single byte leaking. So we're forcing this instead and that
definitely fixes the issue. Note that we need to set it the hard
way since the regular API checks for valid values.
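Roughly, the workaround amounts to the following; the field belongs to
OpenSSL's internal SSL structure in the 1.0.1 era, so treat this as a
sketch rather than the exact haproxy code:

    /* SSL_set_max_send_fragment() refuses 0 as an invalid value, so the
     * field is forced directly: with a zero fragment limit, not a single
     * byte of the pending response can leak.
     */
    ssl->max_send_fragment = 0;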
Building with a version of openssl without heartbeat gives this since
latest 29f037d ("MEDIUM: ssl: explicitly log failed handshakes after a
heartbeat") :
src/ssl_sock.c: In function 'ssl_sock_msgcbk':
src/ssl_sock.c:188: warning: unused variable 'conn'
Simply declare conn inside the ifdef. No backport is needed.
The latest commit about set-map/add-acl/... causes this warning for
me :
src/proto_http.c: In function 'parse_http_req_cond':
src/proto_http.c:8863: warning: implicit declaration of function 'strndup'
src/proto_http.c:8863: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8890: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8917: warning: incompatible implicit declaration of built-in function 'strndup'
src/proto_http.c:8944: warning: incompatible implicit declaration of built-in function 'strndup'
Use my_strndup() instead of strndup() which is not portable. No backport
needed.
This reverts commit 9ece05f590.
Sander Klein reported an important performance regression with this
patch applied. It is not yet certain what exactly the cause is, but
let's not break other setups now and sort this out after dev24.
The commit was merged into dev23, no need to backport.
Using the previous callback, it's trivial to block the heartbeat attack:
first we check the message length, then we emit an SSL error if it is
out of bounds. A special log is emitted, indicating that a heartbleed
attack was stopped, so that such events are not confused with other failures.
That way, haproxy can protect itself even when running on an unpatched
SSL stack. Tests performed with openssl-1.0.1c indicate a total success.
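A rough, self-contained sketch of the idea (simplified, not the exact
haproxy code): a message callback registered with SSL_CTX_set_msg_callback()
sees every incoming heartbeat record and kills the handshake when the
claimed payload plus the mandatory padding cannot fit in the record.

    #include <openssl/ssl.h>
    #include <openssl/tls1.h>

    static void heartbeat_msg_cb(int write_p, int version, int content_type,
                                 const void *buf, size_t len, SSL *ssl, void *arg)
    {
    #ifdef TLS1_RT_HEARTBEAT
            const unsigned char *p = buf;
            unsigned int payload;

            if (write_p || content_type != TLS1_RT_HEARTBEAT || len < 1)
                    return;

            /* here haproxy also flags the session so that a later handshake
             * failure can be logged as "after a heartbeat was seen" */

            if (*p != TLS1_HB_REQUEST)
                    return;
            if (len < 3)
                    goto kill;              /* no room for the length field */

            payload = (p[1] << 8) | p[2];
            if (payload + 3 + 16 <= len)    /* type + length + payload + padding */
                    return;                 /* well-formed heartbeat */
     kill:
            /* out of bounds: report a heartbleed attempt and raise an SSL
             * error on the connection so the handshake fails (not shown) */
            ;
    #endif
    }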
Add a callback to receive the heartbeat notification. There, we add
SSL_SOCK_RECV_HEARTBEAT flag on the ssl session if a heartbeat is seen.
If a handshake fails, we log a different message to mention the fact that
a heartbeat was seen. The test is only performed on the frontend side.
Users have seen a huge increase in the rate of SSL handshake failures
starting from 2014/04/08 with the release of the Heartbleed OpenSSL
vulnerability (CVE-2014-0160). Haproxy can detect that a heartbeat
was received in the incoming handshake, and such heartbeats are not
supposed to be common, so let's log a different message when a
handshake error happens after a heartbeat is detected.
This patch only adds the new message and the new code.
The http_(res|req)_keywords_register() functions make it possible to
register new keywords.
You need to declare a keyword list:
    struct http_req_action_kw_list test_kws = {
            .scope = "testscope",
            .kw = {
                    { "test", parse_test },
                    { NULL, NULL },
            }
    };
and a parsing function:
    int parse_test(const char **args, int *cur_arg, struct proxy *px,
                   struct http_req_rule *rule, char **err)
    {
            rule->action = HTTP_REQ_ACT_CUSTOM_STOP;
            rule->action_ptr = action_function;
            return 0;
    }

    http_req_keywords_register(&test_kws);
The HTTP_REQ_ACT_CUSTOM_STOP action stops evaluation of rules after
your rule, HTTP_REQ_ACT_CUSTOM_CONT permits the evaluation of rules
after your rule.
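For completeness, the custom action referenced above could look like the
sketch below; the exact prototype expected by ->action_ptr (and the meaning
of its return value) is an assumption here, so check the http_req_rule
definition in the source before relying on it:

    /* Hypothetical custom action: called when the rule's condition matched.
     * With HTTP_REQ_ACT_CUSTOM_STOP the rule evaluation stops after this
     * rule; with HTTP_REQ_ACT_CUSTOM_CONT it continues.
     */
    int action_function(struct http_req_rule *rule, struct proxy *px,
                        struct session *s, struct http_txn *txn)
    {
            /* custom processing goes here */
            return 1;  /* return convention is an assumption as well */
    }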
This patch allows manipulation of ACL and MAP content thanks to any
information available in a session: source IP address, HTTP request or
response header, etc...
It's an update "on the fly" of the content of the map/acls. This means
it does not resist to reload or restart of HAProxy.
Finn Arne Gangstad suggested that we should have the ability to break
keep-alive when the target server has reached its maxconn and that a
number of connections are present in the queue. After some discussion
around his proposed patch, the following solution was suggested : have
a per-proxy setting to fix a limit to the number of queued connections
on a server after which we break keep-alive. This ensures that even in
high latency networks where keep-alive is beneficial, we try to find a
different server.
This patch is partially based on his original proposal and implements
this configurable threshold.
Commit bed410e ("MAJOR: http: centralize data forwarding in the request path")
has woken up an issue in redirects, where msg->next is not reset when flushing
the input buffer. The result is an attempt to forward a negative amount of
data, making haproxy crash.
This bug does not seem to affect versions prior to dev23, so no backport is
needed.
These ones report a string as "HTTP/1.0" or "HTTP/1.1" depending on the
version of the request message or the response message, respectively.
The purpose is to be able to emit custom log lines reporting this version
in a persistent way.
We used to emit either 1.0 or 1.1 depending on whether we were sending
chunks or not. This condition is useless, better always send 1.1. Also
that way at least clients and intermediary proxies know we speak 1.1.
The "Connection: close" header is still set anyway.
Currently, the parsing of the HTTP Connection header for the response
is performed at the same place as the rule sets, which means that after
parsing the beginning of the response, we still have no information on
whether the response is keep-alive compatible or not. Let's do that
earlier.
Note that this is the same code that was moved in the previous function,
both of them are always called in a row so no change of behaviour is
expected.
A future change might consist in having a late analyser to perform the
late header changes such as mangling the connection header. It's quite
painful that currently this is mixed with the rest of the processing
such as filters.
This allows the stats page to work in keep-alive mode and to be
compressed. At compression ratios up to 80%, it's quite interesting
for large pages.
We ensure to skip filters because we don't want to unexpectedly block
a response nor to mangle response headers.
In version 1.3.4, we got the ability to split configuration parts between
frontends and backends. The stats was attached to the backend and a control
was made to ensure that it was used only in a listen or backend section, but
not in a frontend.
The documentation clearly says that the statement may only be used in the
backend.
But since that same version above, the defaults stats configuration is
only filled in the frontend part of the proxy and not in the backend's.
So a backend will not get stats which are enabled in a defaults section,
despite what the doc says. However, a frontend configured after a defaults
section will get stats and will not emit the warning!
There were many technical limitations in 1.3.4 making it impossible to
have the stats working both in the frontend and backend, but now this has
become a total mess.
It's common however to see people create a frontend with a perfectly
working stats configuration which only emits a warning stating that it
might not work, adding to the confusion. Most people work around the tricky
behaviour by declaring a "listen" section with no server, which was the
recommended solution in 1.3 where it was even suggested to add a dispatch
address to avoid a warning.
So the right solution seems to do the following :
- ensure that the defaults section's settings apply to the backends,
as documented ;
- let the frontends work in order not to break existing setups relying
on the defaults section ;
- officially allow stats to be declared in frontends and remove the
warning
This patch should probably not be backported since it's not certain that
1.4 is fully compatible with having stats in frontends and backends (which
was really made possible thanks to applets).
All the code inherited from version 1.1 still holds a lot of sessions
called "t" because in 1.1 they were tasks. This naming is very annoying
and sometimes even confusing, for example in code involving tables.
Let's get rid of this once and for all, and before 1.5-final.
Nothing changed beyond just carefully renaming these variables.
OK, for once gcc cannot easily know this one, and certain versions are
emitting this harmless warning :
src/dumpstats.c: In function 'http_stats_io_handler':
src/dumpstats.c:4507:19: warning: 'last_fwd' may be used uninitialized in this function [-Wmaybe-uninitialized]
It's useless to process 100-continue in the middle of response filters
because there's no info in the 100 response itself, and it could even
make things worse. So better use it as it is, an interim response
waiting for the next response, thus we just have to put it into
http_wait_for_response(). That way we ensure to have a valid response
in this function.
Since commit d7ad9f5 ("MAJOR: channel: add a new flag CF_WAKE_WRITE to
notify the task of writes"), we got another bug with 100-continue responses.
If the final response comes in the same packet as the 100, then the rest of
the buffer is not processed since there is no wake-up event.
In fact the change above uncovered the real culprit, which is more
likely session.c: it should detect that an earlier analyser was set
and loop back to it.
A cleaner fix would be better, but setting the flag works fine.
This issue was introduced in 1.5-dev22, no backport is needed.
Patches c623c17 ("MEDIUM: http: start to centralize the forwarding code")
and bed410e ("MAJOR: http: centralize data forwarding in the request path")
merged into 1.5-dev23 cause transfers to be silently aborted after the
server timeout due to the fact that the analysers are woken up when the
timeout strikes and they believe they have nothing more to do, so they're
terminating the transfer.
No backport is needed.
This basically reimplements commit f3221f9 ("MEDIUM: stats: add support
for HTTP keep-alive on the stats page") which was reverted by commit
51437d2 after Igor Chan reported a broken stats page, which was caused
by the bug fixed by the previous commit.
The errors reported by Igor Chan on the stats interface in chunked mode
were caused by data wrapping at the wrong place in the buffer. It could
be reliably reproduced by picking random buffer sizes until the issue
appeared (for a given conf, 5300 with 1024 maxrewrite was OK).
The issue is that the stats interface uses bi_putchk() to emit data,
which relies on bi_putblk(). This code checks the largest part that can
be emitted while preserving the rewrite reserve, but uses that result to
compute the wrapping offset, which is wrong. If some data remain present
in the buffer, the wrapping may be allowed and will happen before the
end of the buffer, leaving some old data in the buffer.
The reason it did not happen before keep-alive is simply that the buffer
was much less likely to contain older data. It also used to happen only
for certain configs after a certain amount of time because the size of
the counters used to inflate the output till the point wrapping started
to happen.
The fix is trivial, buffer_contig_space_with_res() simply needs to be
replaced by buffer_contig_space().
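A sketch of the corrected copy logic in bi_putblk(); the names follow the
existing buffer API but this is a simplified illustration, not the exact
code:

    /* The contiguous room before wrapping must be the physical space up to
     * the end of the buffer, not that space minus the rewrite reserve,
     * otherwise the copy wraps too early and old data is left between the
     * wrap point and the real end of the buffer.
     */
    max = buffer_contig_space(chn->buf);          /* fixed: was ..._with_res() */
    memcpy(bi_end(chn->buf), blk, MIN(len, max));
    if (len > max)
            memcpy(chn->buf->data, blk + max, len - max);  /* wrapped part */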
Note that peers were using the same function so it is possible that they
were affected as well.
This issue was introduced in 1.5-dev8. No backport to stable is needed.
Commit f003d37 ("BUG/MINOR: http: don't report client aborts as server errors")
attempted to fix a longstanding issue by which some client aborts could be
logged as server errors. Unfortunately, one of the tests involved there also
catches truncated server responses, which are reported as client aborts.
Instead, only check that the client has really closed using the abortonclose
option, just as is done in the request path (which means that the close was
propagated to the server).
The faulty fix above was introduced in 1.5-dev15, and was backported into
1.4.23.
Thanks to Patrick Hemmer for reporting this issue with traces showing the
root cause of the problem.