mirror of http://git.haproxy.org/git/haproxy.git/ synced 2024-12-17 17:04:35 +00:00
Commit Graph

5687 Commits

Author SHA1 Message Date
Cyril Bonté
a8322f96f1 MINOR: stats: show stat resolvers missing in the help message
The help message returned by the "help" command on the unix socket didn't
list "show stat resolvers" among the supported commands.

This patch should be backported to 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
db98eb3b5c MINOR: stats: fix typo in help messages
The word "available" was mistakenly written "avalaible" in help messages and
comments.

This patch should be backported to 1.5 and 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
6ca9e01ab2 BUG/MEDIUM: stats: show backend may show an empty or incomplete result
This is the same issue as "show servers state", where the result is incorrect
if the data can't fit in one buffer. A similar fix is applied, restarting the
data processing where it stopped as buffers are sent to the client.

This fix should be backported to haproxy 1.6
2016-05-06 12:28:43 +02:00
Cyril Bonté
76a99784f4 BUG/MEDIUM: stats: show servers state may show an empty or incomplete result
It was reported that the unix socket command "show servers state" returned an
empty response while "show servers state <backend>" worked.
In fact, both cases can reproduce the issue. It happens when the response can't
fit in one buffer.

The fix consists in processing the response in several steps, as is done in
some other commands, by restarting where it was stopped after the buffer is
sent to the client.
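
To illustrate the pattern, here is a toy C sketch of such a resumable dump
(names and the item count are hypothetical, not the actual dumpstats code):

  #include <stdio.h>
  #include <string.h>

  /* Dump many lines into a limited output buffer, remembering where to
   * resume when the buffer fills up. */
  struct dump_ctx { int next; };                /* persists across calls */

  static int dump_servers(struct dump_ctx *ctx, char *out, size_t room)
  {
      size_t used = 0;

      for (; ctx->next < 1000; ctx->next++) {
          char line[64];
          int len = snprintf(line, sizeof(line), "srv%d UP\n", ctx->next);

          if (used + len > room)
              return 0;   /* buffer full: caller sends it, then calls again */
          memcpy(out + used, line, len);
          used += len;
      }
      return 1;           /* all servers dumped */
  }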

This fix should be backported to haproxy 1.6
2016-05-06 12:28:43 +02:00
Willy Tarreau
8bf242b764 BUG/MEDIUM: channel: fix inconsistent handling of 4GB-1 transfers
In 1.4-dev3, commit 31971e5 ("[MEDIUM] add support for infinite forwarding")
made it possible to configure the lower layer to forward data indefinitely
by setting the forward size to CHN_INFINITE_FORWARD (4GB-1). By then larger
chunk sizes were not supported so there was no confusion in the usage of the
function.

Since 1.5 we support 64-bit content-lengths and chunk sizes and the function
has grown to support 64-bit arguments, though it still limits a single pass
to 32-bit quantities (what fits in the channel's to_forward field). The issue
now becomes that a 4GB-1 content-length can be confused with infinite
forwarding (in fact it's 4GB-1+what was already in the buffer). It causes a
visible effect when transferring this exact size because the transfer rate
is lower than with other sizes due in part to the disabling of the Nagle
algorithm on the sendto() call.

In theory with keep-alive it should prevent a second request from being
processed after such a transfer, but since the analysers are still present,
the forwarding analyser properly counts down the remaining size to transfer
and ultimately the transaction gets correctly reset so there is no visible
effect.

Since the root cause of the issue is an API problem (lack of distinction
between a real valid length and a magic value), this patch modifies the API
to have a new dedicated function called channel_forward_forever() to program
a permanent forwarding. The existing function __channel_forward() was modified
to properly take care of the requested sizes and ensure it 1) never overflows
and 2) never reaches CHN_INFINITE_FORWARD by accident.
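
As a rough illustration of the new contract, a toy sketch (illustrative
names and a simplified structure, not the actual haproxy code):

  #include <stdint.h>

  #define CHN_INFINITE_FORWARD 0xFFFFFFFFU   /* the 4GB-1 magic value */

  struct chn { uint32_t to_forward; };

  /* Permanent forwarding is now requested explicitly. */
  static void channel_forward_forever(struct chn *c)
  {
      c->to_forward = CHN_INFINITE_FORWARD;
  }

  /* Bounded increment: never overflows and never lands on the magic
   * value by accident. */
  static void __channel_forward(struct chn *c, uint64_t bytes)
  {
      uint64_t fwd = (uint64_t)c->to_forward + bytes;

      if (fwd >= CHN_INFINITE_FORWARD)
          fwd = CHN_INFINITE_FORWARD - 1;
      c->to_forward = (uint32_t)fwd;
  }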

It is worth noting that the function used to have a bug causing a 2GB
forward to be scheduled if it was called with less data than what is present
in buf->i. Fortunately this bug couldn't be triggered with existing code.

This fix should be backported to 1.6 and 1.5. While it also theoretically
affects 1.4, it's better not to backport it there, as the risk of breaking
large object transfers due to significant API differences is high, compared
to the fact that the largest supported objects (4GB-1) are just slower to
transfer.
2016-05-04 15:26:37 +02:00
Willy Tarreau
5fb04711f0 BUG/MEDIUM: stream: ensure the SI_FL_DONT_WAKE flag is properly cleared
The previous buffer space bug has revealed an issue causing some stalled
connections to remain orphaned forever, preventing an old process from
dying. The issue is that once in a while a task may be woken up because
a disabled expiration timer has been reached despite no timeout being
reached. In this case we exit very early but the SI_FL_DONT_WAKE flag
wasn't cleared, resulting in new events not waking the task up. It may
be one of the reasons why a few people have already observed some peers
connections stuck in CLOSE_WAIT state.

This bug was introduced in 1.5-dev13 by commit 798f432 ("OPTIM: session:
don't process the whole session when only timers need a refresh"), so
the fix must be backported to 1.6 and 1.5.
2016-05-04 10:18:37 +02:00
Willy Tarreau
ef907fee12 BUG/MAJOR: channel: fix miscalculation of available buffer space (4th try)
Unfortunately, commit 169c470 ("BUG/MEDIUM: channel: fix miscalculation of
available buffer space (3rd try)") was still not enough to completely
address the issue. It fell into an integer comparison trap. Contrary to
expectations, chn->to_forward may also have the sign bit set when
forwarding regular data having a large content-length, resulting in
an incomplete check of the result and of the reserve: with to_forward
very large, to_forward+o could become very small, the reserve could
become positive again, and channel_recv_limit() could return a negative
value.

One way to reproduce this situation is to transfer a large file (> 2GB)
with http-keep-alive or http-server-close, without splicing, and ensure
that the server uses content-length instead of chunks. The transfer
should stall very early after the first buffer has been transferred
to the client.

This fix now properly checks 1) for an overflow caused by summing o and
to_forward, and 2) for o+to_forward being smaller or larger than maxrw
before performing the subtract, so that all sensitive operations are
properly performed on 33-bit arithmetic.
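
A toy sketch of that ordering (illustrative names; the real
channel_recv_limit() also accounts for the buffer geometry):

  #include <stdint.h>

  static uint32_t recv_limit(uint32_t o, uint32_t to_forward, uint32_t maxrw)
  {
      /* the sum is done on 64 bits so o + to_forward cannot wrap */
      uint64_t transit = (uint64_t)o + to_forward;

      if (transit >= maxrw)
          return 0;                         /* reserve fully needed */
      return maxrw - (uint32_t)transit;     /* compare first, subtract after */
  }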

The code was subjected again to a series of tests using inject+httpterm
scanning a wide range of object sizes (+10MB after each new request) :

  $ printf "new page 1\nget 127.0.0.1:8002 / s=%%s0m\n" | \
      inject64 -o 1 -u 1 -f /dev/stdin

With previous fix, the transfer would suddenly stop when reaching 2GB :

   hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout htime  sdht ptime
    203     1      2     1 216816173354 2710202 3144892     0     0 685.0 0.0 685.0
    205     2      2     2 219257283186 2706880 2441109     0     0 679.5 6.5 679.5
    205     0      2     0 219257283186 2673836     0     0     0 0.0 0.0 0.0
    205     0      2     0 219257283186 2641622     0     0     0 0.0 0.0 0.0
    205     0      2     0 219257283186 2610174     0     0     0 0.0 0.0 0.0

Now it's fine even past 4 GB.

Many thanks to Vedran Furac for reporting this issue early with a common
access pattern helping to troubleshoot this.

This fix must be backported to 1.6 and 1.5 where the commit above was
already backported.
2016-05-03 17:58:03 +02:00
Willy Tarreau
3de5bd603c BUG/MEDIUM: http: fix risk of CPU spikes with pipelined requests from dead client
Since client-side HTTP keep-alive was introduced in 1.4-dev, a good
number of corner cases had to be dealt with. One of them remained and
caused some occasional CPU spikes that Cyril Bonté had the opportunity
to observe from time to time and even recently to capture for deeper
analysis.

What happens is the following scenario :
  1) a client sends a first request which causes the server to respond
     using chunked encoding

  2) the server starts to respond with a large response that doesn't
     fit into a single buffer

  3) the response buffer fills up with the response

  4) the client reads the response while it is being sent by the server.

  5) haproxy eventually receives the end of the response from the server,
     which occupies the whole response buffer (must at least override the
     reserve), parses it and prepares to receive a second request while
     sending the last data blocks to the client. It then reinitializes the
     whole http_txn and calls http_wait_for_request(), which arms the
     http-request timeout.

  6) at this exact moment the client emits a second request, probably
     asking for an object referenced in the first page while parsing it
     on the fly, and silently disappears from the internet (internet
     access cut or software having trouble with pipelined requests).

  7) the second request arrives into the request buffer and the response
     data stall in the response buffer, preventing the reserve from being
     used for anything else.

  8) haproxy calls http_wait_for_request() to parse the next request which
     has just arrived, but since it sees the response buffer is full, it
     refrains from doing so and waits for some data to leave or a timeout
     to strike.

  9) the http-request timeout strikes, causing http_wait_for_request() to
     be called again, and the function immediately returns since it cannot
     even produce an error message, and the loop is maintained here.

 10) the client timeout strikes, aborting the loop.

At first glance a check for timeout would be needed *before* considering
the buffer in http_wait_for_request(), but the issue is not there in fact,
because when doing so we see in the logs a client timeout error while
waiting for a request, which is wrong. The real issue is that we must not
consider the first transaction terminated until at least the reserve is
released so that http_wait_for_request() has no problem starting to process
a new request. This allows the first response to be reported in timeout and
not the second request. A more restrictive control could consist in not
considering the request complete until the response buffer has no more
outgoing data but this brings no added value and needlessly increases the
number of wake-ups when dealing with pipelining.

Note that the same issue exists with the request, we must ensure that any
POST data are finished being forwarded to the server before accepting a new
request. This case is much harder to trigger however as servers rarely
disappear and if they do so, they impact all their sessions at once.

No specific reproducer is usable because the bug is very hard to reproduce
and depends on the system as well with settings varying across reboots. On
a test machine, socket buffers were reduced to 4096 (tcp_rmem and tcp_wmem),
haproxy's buffers were set to 16384 and tune.maxrewrite to 12288. The proxy
must work in http-server-close mode, with a request timeout smaller than the
client timeout. The test is run like this :

  $ (printf "GET /15000 HTTP/1.1\r\n\r\n"; usleep 100000; \
     printf "GET / HTTP/1.0\r\n\r\n"; sleep 15) | nc6 --send-only 0 8002

The server returns chunks of the requested size (15000 bytes here, but
78000 in a previous test was the only working value). Strace must show the
last recvfrom() succeed and the last sendto() being shorter than desired or
better, not being called.

This fix must be backported to 1.6, 1.5 and 1.4. It was made in a way that
should make it possible to backport it. It depends on channel_congested()
which also needs to be backported. Further cleanup of http_wait_for_request()
could be made given that some of the tests are now useless.
2016-05-02 16:39:22 +02:00
Willy Tarreau
55e58f2334 MINOR: channel: add new function channel_congested()
This function returns non-zero if the channel is congested with data in
transit waiting to leave, indicating to the caller that it should wait
for the reserve to be released before starting to process new data, in
case it needs the ability to append data. This is meant to be used while
waiting for a clean response buffer before processing a request.
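
A hedged sketch of these semantics (field names are illustrative, not the
exact haproxy structures):

  struct toy_buf { unsigned int size, i, o; };   /* total, input, output */

  static int channel_congested(const struct toy_buf *b, unsigned int reserve)
  {
      if (!b->o)
          return 0;                      /* nothing waiting to leave */
      /* congested while free space is still smaller than the reserve */
      return b->size - b->i - b->o < reserve;
  }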
2016-05-02 16:39:22 +02:00
Willy Tarreau
f51d03cf14 BUG/MEDIUM: http: fix incorrect reporting of server errors
Commit dbe34eb ("MEDIUM: filters/http: Move body parsing of HTTP
messages in dedicated functions") introduced a bug in function
http_response_forward_body() by getting rid of the while(1) loop.
The code immediately following the loop was only reachable on missing
data but now it's also reachable under normal conditions, which used
to be dealt with by the skip_resync_state label returning zero. The
side effect is that in http_server_close situations, the channel's
SHUTR flag is seen and considered as a server error which is reported
if any other error happens (eg: client timeout).

This bug is specific to 1.7, no backport is needed.
2016-05-02 16:39:22 +02:00
Nenad Merdanovic
54e439f0b4 BUG/MINOR: log: fix a typo that would cause %HP to log <BADREQ>
A typo was introduced in 57bc891 ("BUG/MEDIUM: log: fix risk of
segfault when logging HTTP fields in TCP mode") which inverted the
condition in the test and caused <BADREQ> to be logged when using
%HP.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2016-04-29 07:28:44 +02:00
Christopher Faulet
0b64f62a69 BUG/MINOR: dumpstats: Fix the "Total bytes saved" counter in backends stats
Instead of subtracting ST_F_COMP_OUT (Compression out) from ST_F_COMP_IN
(Compression in) in backends stats, ST_F_COMP_BYP (Compression bypass) was used.
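
For clarity, the intended computation (a hedged illustration; the variable
names are not the actual dumpstats identifiers):

  static long long comp_bytes_saved(long long comp_in, long long comp_out)
  {
      /* bytes saved = compressor input minus compressor output;
       * the bug subtracted the bypass counter instead of comp_out */
      return comp_in - comp_out;
  }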
2016-04-29 07:18:57 +02:00
David Carlier
0c437f4d35 MINOR: lua: migrate the argument mask to 64 bits type.
Recently the maximum number of arguments was raised to 12 due to the
migration to a 64-bit mask. So we allow Lua to benefit from the extended
mask for both converters and fetches.
2016-04-29 07:17:42 +02:00
David Carlier
abdb00fbc0 BUG/MEDIUM: lua: protects the upper boundary of the argument list for converters/fetches.
When a converter or sample fetch is called from within Lua code, there is a
risk of invalid argument string data usage when the upper boundary is reached.
It would be nice to backport this to 1.6 if possible.
2016-04-29 07:17:42 +02:00
Thierry Fournier
3610c39c8c MINOR: filters: add opaque data
Add opaque data between the filter keyword registration and the parsing
function. This opaque data allows the same parser to be used with different
registered keywords; it carries the data that mainly makes the difference
between the two keywords.

It will be used with Lua keyword registration.
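
A minimal sketch of the idea with hypothetical types (the real flt_kw API
differs in its details):

  struct kw_entry {
      const char *kw;
      int (*parse)(const char *args, void *private);
      void *private;              /* opaque data stored at registration */
  };

  /* One parser serves several keywords; <private> tells them apart. */
  static int parse_lua_flt(const char *args, void *private)
  {
      long mode = (long)private;  /* differs per registered keyword */

      (void)args;
      /* ... parse <args> according to <mode> ... */
      return mode ? 0 : -1;
  }

  static struct kw_entry kws[] = {
      { "lua-filter-a", parse_lua_flt, (void *)1L },
      { "lua-filter-b", parse_lua_flt, (void *)2L },
  };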
2016-04-27 10:48:15 +02:00
Nenad Merdanovic
5c3ed34529 CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates 2016-04-25 17:23:50 +02:00
Nenad Merdanovic
174dd37d88 MINOR: Add ability for agent-check to set server maxconn
This is very useful in complex architecture systems where HAProxy
is balancing DB connections, for example. We want to keep the maxconn
high in order to avoid issues with queueing on the LB level when
there is slowness in another part of the system. An example is an
architecture where each thread opens multiple DB connections which,
if they get stuck in a queue, cause a snowball effect (old connections
aren't closed, new ones cannot be established). These connections are
mostly idle and the DB server has no problem handling thousands of them.

Allowing us to dynamically set maxconn depending on the backend usage
(LA, CPU, memory, etc.) enables us to have high maxconn for situations
like above, but lowering it in case there are real issues where the
backend servers become overloaded (cache issues, DB gets hit hard).
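
For example, with this change an agent reply could look like the following
(a hedged illustration; see the agent-check documentation for the exact
token syntax):

  up ready 75% maxconn:30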
2016-04-25 17:23:50 +02:00
Willy Tarreau
57bc8917c3 BUG/MEDIUM: log: fix risk of segfault when logging HTTP fields in TCP mode
David Torgerson faced an issue when using HTTP fields in log-format in TCP
sections. The txn is dereferenced while it's null, resulting in a crash of
the process. Such configurations are invalid and a warning is emitted, but
nevertheless the process must not crash. As found by Lukas Tribus, this is
a side effect of the split between the stream and the HTTP transaction that
happened in 1.6, making it possible to have txn==NULL there.

The fix consists in checking that txn is valid before using it. Fortunately
it's easy since almost all places already used to check for the existence
of a field (eg: txn->uri).
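
A toy sketch of the guard, following the pattern of the existing txn->uri
checks (simplified names, not the actual log encoding code):

  struct http_txn { char *uri; };

  static const char *log_http_path(const struct http_txn *txn)
  {
      if (!txn || !txn->uri)
          return "<BADREQ>";      /* TCP mode: the txn may not exist */
      return txn->uri;
  }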

This patch should be backported to 1.6.
2016-04-25 17:15:58 +02:00
Willy Tarreau
169c47028a BUG/MEDIUM: channel: fix miscalculation of available buffer space (3rd try)
Latest fix 8a32106 ("BUG/MEDIUM: channel: fix miscalculation of available
buffer space (2nd try)") did happen to fix some observable issues but not
all of them; some corner cases still remained, and at least one user
reported a busy loop that appeared possible, though not easily reproducible
under experimental conditions.

The remaining issue is that we still consider min(i, to_fwd) as the number
of bytes in transit, but in fact <i> is not relevant here. Indeed, what
matters is that we can read everything we want at once provided that at
the end, <i> cannot be larger than <size-maxrw> (if it was not already).

This is visible in two cases :
  - let's have i=o=max/2 and to_fwd=0. Then i+o >= max indicates that the
    buffer is already full, while it is not since once <o> is forwarded,
    some space remains.

  - when to_fwd is much larger than i, it's obvious that we can fill the
    buffer.

The only relevant part in fact is o + to_fwd. to_fwd will ensure that at
least this many bytes will be moved from <i> to <o> hence will leave the
buffer, whatever the number of rounds it takes.

Interestingly, the fix applied here ensures that channel_recv_max() will
now equal (size - maxrw - i + to_fwd), which is indeed what remains
available below maxrw after to_fwd bytes are forwarded from i to o and
leave the buffer.

Additionally, the latest fix made it possible to meet an integer overflow
that was not caught by the range test when forwarding in TCP or tunnel
mode due to to_forward being added to an existing value, causing the
buffer size to be limited when it should not have been, resulting in 2
to 3 recv() calls when a single one was enough. The first one was limited
to the unreserved buffer size, the second one to the size of the reserve
minus 1, and the last one to the last byte. Eg with a 2kB buffer :

recvfrom(22, "HTTP/1.1 200\r\nConnection: close\r"..., 1024, 0, NULL, NULL) = 1024
recvfrom(22, "23456789.123456789.123456789.123"..., 1023, 0, NULL, NULL) = 1023
recvfrom(22, "5", 1, 0, NULL, NULL)     = 1

This bug is still present in 1.6 and 1.5 so the fix should be backported
there.
2016-04-21 18:06:08 +02:00
Willy Tarreau
93dc478a04 BUG/MEDIUM: channel: incorrect polling condition may delay event delivery
The condition to poll for receive as implemented in channel_may_recv()
is still incorrect. If buf->o is null and buf->i is slightly larger than
chn->to_forward and at least as large as buf->size - maxrewrite, then
reading will be disabled. It may slightly delay some data delivery by
having first to forward pending bytes, but may also cause some random
issues with analysers that wait for some data before starting to forward
what they correctly parsed. For instance, a body analyser may be prevented
from seeing the data that only fits in the reserve.

This bug may also prevent an applet's chk_rcv() function from being called
when part of a buffer is released. It is possible (though not verified)
that this contributed to the frozen peers session issues some people
have been facing.

This fix should be backported to 1.6 and 1.5 to ensure better coherency
with channel_recv_limit().
2016-04-21 17:03:46 +02:00
Willy Tarreau
4b46a3e8cc BUG/MEDIUM: channel: don't allow to overwrite the reserve until connected
Commit 9c06ee4 ("BUG/MEDIUM: channel: don't schedule data in transit for
leaving until connected") took care of an issue involving POST in conjunction
with http-send-name-header, where we absolutely never want to touch the
reserve until we're sure not to touch the buffer contents anymore, which
is indicated by the output stream-interface being connected.

But channel_may_recv() was not equipped with such a test, so in some
situations it might decide that it is possible to poll for reads, and
later channel_recv_limit() will decide it's not possible to read,
causing a loop. So we must add a similar test there.

Since the fix above was backported to 1.6 and 1.5, this fix must as well.
2016-04-21 15:31:22 +02:00
Christopher Faulet
c3fe5330be DOC: filters: Add filters documentation
The configuration documentation has been updated. Doc about the filter line
has been added and a new chapter (§9) has been created to list and document
supported filters (for now, flt_trace and flt_http_comp).

The developer documentation about filters has also been added. This is a "pre"
version. Incoming changes in the filter API will require an update.
This documentation requires a deeper review and some TODOs need to be completed.
2016-04-21 07:01:41 +02:00
Christopher Faulet
00e818aa58 MINOR: filters: Filters must define the callbacks struct during config parsing 2016-04-21 06:59:18 +02:00
Christopher Faulet
cc7317d11e MINOR: filters: Typo in an error message 2016-04-21 06:59:05 +02:00
Christopher Faulet
b3f4e14932 MINOR: filters: Print the list of existing filters during HA startup
This is done in verbose/debug mode and when build options are reported.
2016-04-21 06:58:08 +02:00
Willy Tarreau
d50b4ac0d4 MEDIUM: unblock signals on startup.
A problem was reported recently by some users of programs compiled
with Go 1.5 which by default blocks all signals before executing
processes, resulting in haproxy not receiving SIGUSR1 or even SIGTERM,
causing lots of zombie processes.

This problem was apparently observed by users of consul and kubernetes
(at least).

This patch is a workaround for this issue. It consists in unblocking
all signals on startup. Since they're normally not blocked in a regular
shell, it ensures haproxy always starts under the same conditions.
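
A minimal sketch of such a workaround using standard POSIX calls (the
actual patch may differ in naming and placement):

  #include <signal.h>

  static void unblock_all_signals(void)
  {
      sigset_t set;

      sigemptyset(&set);                     /* empty mask: block nothing */
      sigprocmask(SIG_SETMASK, &set, NULL);  /* replace the inherited mask */
  }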

Quite useful information reported by both Matti Savolainen and REN
Xiaolei actually helped find the root cause of this problem and this
workaround. Thanks to them for this.

This patch must be backported to 1.6 and 1.5 where the problem is
observed.
2016-04-20 10:53:12 +02:00
Willy Tarreau
07ecdea165 TESTS: add blocksig.c to run tests with all signals blocked
A problem was reported recently by some users of programs compiled
with Go 1.5 which by default blocks all signals before executing
processes, resulting in haproxy not receiving SIGUSR1 or even SIGTERM.

This program mimics this behaviour to make it easier to run tests.
It also displays the current signal mask. A simple test consists in
running it through itself.
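
A minimal sketch of such a wrapper (the shipped tests/blocksig.c may
differ):

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      sigset_t set;
      int sig;

      /* report the currently blocked signals */
      sigprocmask(SIG_SETMASK, NULL, &set);
      for (sig = 1; sig < 32; sig++)
          if (sigismember(&set, sig))
              printf("signal %d is blocked\n", sig);

      /* block everything, then run the rest of the command line */
      sigfillset(&set);
      sigprocmask(SIG_SETMASK, &set, NULL);
      if (argc > 1)
          execvp(argv[1], argv + 1);
      return 1;
  }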
2016-04-20 10:53:12 +02:00
Cyril Bonté
4920d70fa0 BUG/MINOR: fix maxaccept computation according to the frontend process range
Commit 7c0ffd23 only considers the explicit use of the "process" keyword
on the listeners. But at this step, if it's not defined in the configuration,
the listener bind_proc mask is set to 0. As a result, the code will compute
the maxaccept value based on only 1 process, which is not always true.

For example :
  global
    nbproc 4

  frontend test
    bind-process 1-2
    bind :80

Here, the maxaccept value for the "test" frontend was set to the global
tune.maxaccept value (defaulting to 64), whereas it should consider that 2
processes will accept connections. According to the documentation, the value
should be divided by twice the number of processes the listener is bound to.
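
For the example above, the expected computation would be (a hedged
illustration of the documented rule, not the actual haproxy code):

  static int listener_maxaccept(int tune_maxaccept, int bound_procs)
  {
      /* e.g. 64 / (2 * 2) = 16 for the two-process frontend above */
      return tune_maxaccept / (2 * bound_procs);
  }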

To fix this, we can consider that if no mask is set to the listener, we take
the frontend mask.

This is not critical but it can introduce an unfair distribution of the
incoming connections across the processes.

It should be backported to the same branches as commit 7c0ffd23 (1.6 and 1.5
were in the scope).
2016-04-15 08:22:52 +02:00
Willy Tarreau
d6c06d0f65 BUG/MINOR: listener: stop unbound listeners on startup
When a listener is not bound to a process its frontend belongs to, it
is only paused and not stopped. This creates confusion from the outside
as "netstat -ltnp" for example will report only the parent process as
the listener instead of the effective one. "ss -lnp" will report that
all processes are listening to all sockets.

This is confusing enough to suggest a fix. Now we simply stop the unused
listeners. Example with this simple config :

  global
      nbproc 4

  frontend haproxy_test
      bind-process 1-40
      bind :12345 process 1
      bind :12345 process 2
      bind :12345 process 3
      bind :12345 process 4

Before the patch :
  $ netstat -ltnp
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30457/./haproxy

After the patch :
  $ netstat -ltnp
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30504/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30503/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30502/./haproxy
  tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      30501/./haproxy

This patch may be backported to 1.6 and 1.5, but it relies on commit
7a798e5 ("CLEANUP: fix inconsistency between fd->iocb, proto->accept
and accept()") since it will expose an API inconsistency by including
listener.h in the .c.
2016-04-14 12:05:02 +02:00
Willy Tarreau
7c0ffd23d2 BUG/MEDIUM: fix maxaccept computation on per-process listeners
Christian Ruppert reported a performance degradation when binding a
single frontend to many processes while only one bind line was being
used, bound to a single process.

The reason comes from the fact that whenever a listener is bound to
multiple processes, it is assigned a maxaccept value which equals
half the global maxaccept value divided by the number of processes the
frontend is bound to. The purpose is to ensure that no single process
will drain all the incoming requests at once and ensure a fair share
between all listeners. Usually this works pretty well, when a listener
is bound to all the processes of its frontend. But here we're in a
situation where the maxaccept of a listener which is bound to a single
process is still divided by a large value.

The fix consists in taking into account the number of processes the
listener is bound to and not only those of the frontend. This way it
is perfectly possible to benefit from nbproc and SO_REUSEPORT without
performance degradation.

1.6 and 1.5 normally suffer from the same issue.
2016-04-14 11:53:50 +02:00
Willy Tarreau
7a798e5d6b CLEANUP: fix inconsistency between fd->iocb, proto->accept and accept()
There's quite some inconsistency in the internal API. listener_accept()
which is the main accept() function returns void but is declared as int
in the include file. It's assigned to proto->accept() for all stream
protocols where an int is expected but the result is never checked (nor
is it documented by the way). This proto->accept() is in turn assigned
to fd->iocb() which is supposed to return an int composed of FD_WAIT_*
flags, but which is never checked either.

So let's fix all this mess :
  - nobody checks accept()'s return
  - nobody checks iocb()'s return
  - nobody sets a return value

=> let's mark all these functions void and keep the current ones intact.

Additionally we now include listener.h from listener.c to ensure we won't
silently hide this incoherency in the future.

Note that this patch could/should be backported to 1.6 and even 1.5 to
simplify debugging sessions.
2016-04-14 11:18:22 +02:00
Daniel Schneller
eba5634f2c DOC: Clarify IPv4 address / mask notation rules
Adds some examples regarding shorthand IPv4 address notation which might
be confused with RFC 4632 CIDR notation, leading to results different from
what is expected.
2016-04-13 07:20:46 +02:00
Vincent Bernat
39cec29244 CLEANUP: .gitignore cleanup
.gitignore is an odd beast. All the stuff at the beginning is useless
since the bottom part starts with /.* and /*; therefore, the top part is
useless. Moreover, the bottom part un-ignores *.o and friends, so they are
added back at the bottom.
2016-04-12 12:06:29 +02:00
Daniel Schneller
9ff96c7a62 MINOR: acl: Add predefined METH_DELETE, METH_PUT
Adds the missing HTTP verbs DELETE and PUT as predefined ACLs, similar
to GET, POST etc.
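
A hedged usage example (configuration syntax as for the other predefined
ACLs):

  frontend www
      bind :80
      http-request deny if METH_DELETE || METH_PUT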
2016-04-12 11:44:09 +02:00
Coen Rosdorff
596659b023 DOC: fix discrepancy in the example for http-request redirect
Commit c8f0e78 ("DOC: typo: req.uri is now replaced by capture.req.uri")
fixed a discrepancy in the doc but the scheme is still missing, resulting
in a redirect loop. Let's fix this as well. This should be backported to
1.5.
2016-04-12 11:26:22 +02:00
David Carlier
64a16ab19c BUG/MEDIUM: sample: initialize the pointer before parse_binary call.
parse_binary at line 2025 checks the nullity of the binstr parameter.
Other callers of parse_binary properly zero this parameter first.
[wt: this could result in random failures of the const parser]
2016-04-12 11:08:24 +02:00
David Carlier
97880bb46d BUG/MINOR: cfgparse: couple of small memory leaks.
During config parsing, some code paths forget to free pointers, as often
happens in error handling.
2016-04-12 11:01:41 +02:00
David Carlier
d10025c671 BUG/MINOR: server: risk of over reading the pref_net array.
The dns_option struct's pref_net field is an array of 5 entries. The issue
here shows that pref_net_nb can go up to 5 as well, which might lead to
reads outside of this array.
2016-04-12 11:00:39 +02:00
Willy Tarreau
8a32106fff BUG/MEDIUM: channel: fix miscalculation of available buffer space (2nd try)
Commit 999f643 ("BUG/MEDIUM: channel: fix miscalculation of available buffer
space.") introduced a bug which caused output data to be ignored when computing
the remaining room in a buffer. The problem is that channel_may_recv()
properly considers them and may declare that the FD may be polled for read
events, but once the event strikes, channel_recv_limit() called before recv()
says the opposite. In 1.6 and later this case is automatically caught by
polling loop detection at the connection level and is harmless. But the
backport in 1.5 ends up with a busy polling loop as soon as it becomes
possible to have a buffer with this conflict. In order to reproduce it, it
is necessary to have less than [maxrewrite] bytes available in a buffer, no
forwarding enabled (end of transfer) and [buf->o >= maxrewrite - free space].

Since this heavily depends on socket buffers, it will randomly strike users.
On 1.5 with 8kB buffers it was possible to reproduce it with httpterm using
the following command line :

   $ (printf "GET /?s=675000 HTTP/1.0\r\n\r\n"; sleep 60) | \
       nc6 --rcvbuf-size 1 --send-only 127.0.0.1 8002

This bug is only medium in 1.6 and later but is major in the 1.5 backport,
so it must be backported there.

Thanks to Nenad Merdanovic and Janusz Dziemidowicz for reporting this issue
with enough elements to help understand it.
2016-04-11 17:13:35 +02:00
William Lallemand
0567fa3af5 BUG/MEDIUM: trace.c: rdtsc() is defined in two files
The rdtsc() function provided in standard.h prevents trace.c from compiling
because it's already defined there.
2016-04-09 22:27:01 +02:00
Thierry Fournier
0e00dca58b DOC: http: rename the unique-id sample and add the documentation
This patch renames the sample fetch from "uniqueid" to "unique-id".
It also adds the documentation associated with this sample fetch.
2016-04-07 19:14:58 +02:00
Frederik Deweerdt
6cd8d13c05 OPTIM/MINOR: session: abort if possible before connecting to the backend
Depending on the path that led to sess_update_stream_int(), it's
possible that we had a read error on the frontend, but that we haven't
checked if we may abort the connection.

This was seen in particular with the following setup: tcp mode, with
abortonclose set, frontend using ssl. If the ssl connection had a first
successful read, but the second read failed, we would still try to open a
connection to the backend, although we had enough information to close
the connection early.

sess_update_stream_int() had some logic to handle that case in the
SI_ST_QUE and SI_ST_TAR states, but that was missing in the SI_ST_ASS case.

This patch addresses the issue by verifying the state of the req
channel (and the abortonclose option) right before opening the
connection to the backend, so we have the opportunity to close the
connection there, and it factors out the shared SI_ST_{QUE,TAR,ASS} code.
2016-04-07 19:12:02 +02:00
Willy Tarreau
bb137a8af7 BUG/MEDIUM: ssl: rewind the BIO when reading certificates
Emeric found that some certificate files that were valid with the old method
(the one with the explicit name involving SSL_CTX_use_PrivateKey_file()) do
not work anymore with the new one (the one trying to load multiple cert types
using PEM_read_bio_PrivateKey()). With the latter, the private key couldn't
be loaded.

The difference was related to the ordering of the sections in the PEM file.
The old method would always work. The new method only works if the private key
is at the top, or if it appears as an "EC" private key. The cause in fact is that
we never rewind the BIO between the various calls. So this patch moves the
loading of the private key as the first step, then it rewinds the BIO, and
then it loads the cert and the chain. With this everything works.
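
A hedged sketch of that order using standard OpenSSL calls (error handling
trimmed; the actual ssl_sock code differs):

  #include <openssl/bio.h>
  #include <openssl/pem.h>

  static int load_pem(BIO *bio, EVP_PKEY **key, X509 **cert)
  {
      *key = PEM_read_bio_PrivateKey(bio, NULL, NULL, NULL);
      if (!*key)
          return -1;
      BIO_reset(bio);               /* rewind before reading the cert */
      *cert = PEM_read_bio_X509_AUX(bio, NULL, NULL, NULL);
      return *cert ? 0 : -1;
  }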

No backport is needed, this issue came with the recent addition of the
multi-cert support.
2016-04-06 19:02:38 +02:00
Bertrand Paquet
83a2c3d4d7 BUG/MINOR : allow to log cookie for tarpit and denied request
The following patch allows logging the cookie for tarpit and denied requests.
This minor bug affects at least the 1.5, 1.6 and 1.7 branches.

The solution is not perfect: perhaps the cookie processing
(manage_client_side_cookies) could be moved into http_process_req_common.
2016-04-06 14:58:41 +02:00
Baptiste Assmann
6f79aca339 BUG/MINOR: DNS: resolution structure change
060e57301d introduced a bug, related to a
dns option structure change and an improper rebase.

Thanks to Lukas Tribus for reporting it.

backport: 1.7 and above
2016-04-05 21:35:42 +02:00
David Carlier
7365f7d41b CLEANUP: proto_http: few corrections for gcc warnings.
First, we modify the signatures of http_msg_forward_body and
http_msg_forward_chunked_body as they are declared as inline
below. Second, we verify the return value of the chunk initialization
which holds the Authorization method (although it is unlikely to fail...).
Both issues come from gcc warnings.
2016-04-05 18:05:24 +02:00
Baptiste Assmann
060e57301d BUG/MINOR: dns: trigger a DNS query type change on resolution timeout
After Cedric Jeanneret reported an issue with HAProxy and DNS resolution
when multiple servers are in use, I saw that the DNS query type update on
resolution timeout was not implemented, even though it is documented.

backport: 1.6 and above
2016-04-05 05:56:11 +02:00
Baptiste Assmann
382824c475 BUG/MINOR: dns: inappropriate way out after a resolution timeout
A bug led HAProxy to stop DNS resolution when multiple servers are
configured and one of them times out: the request was not resent.
The current code fixes this issue.

backport status: 1.6 and above
2016-04-05 05:56:11 +02:00
Conrad Hoffmann
692c9386db BUG/MINOR: dumpstats: fix write to global chunk
This just happens to work as it is the correct chunk, but it should use
whatever gets passed in as an argument.

Signed-off-by: Conrad Hoffmann <conrad@soundcloud.com>
2016-04-05 05:56:10 +02:00
Thiago Farina
9f72a39f68 DOC: fix "needed" typo
While at it, use "You" instead of "They", as in this context it makes
more sense to refer to "you": it is you who will be running the
command; there is no "they".

Signed-off-by: Thiago Farina <tfarina@chromium.org>
2016-04-03 14:21:22 +02:00