Commit Graph

3376 Commits

Author SHA1 Message Date
Thierry FOURNIER
348971ea28 MEDIUM: acl: use the fetch syntax 'fetch(args),conv(),conv()' into the ACL keyword
If the acl keyword is a "fetch", the dedicated parsing function
"sample_parse_expr()" is used. Otherwise, the acl parsing function
"parse_acl_expr()" is extended to understand the syntax of a series
of converters placed after the "fetch" keyword.

Before this patch, each acl used a "struct sample_fetch" and executed
it with the "<fetch>->process()" function. Now the dedicated function
"sample_process()" is called instead.

These syntaxes are now available:

   acl bad req.hdr(host),lower -m str www
   http-request redirect prefix /go-away if bad

   acl bad hdr_beg(host),lower www
   http-request redirect prefix /go-away if bad
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
8af6ff12b5 MINOR: sample: export sample_casts
Just export the sample cast matrix "sample_casts" to prepare for the
generic sample conversion parser.
2013-12-02 23:31:32 +01:00
Thierry FOURNIER
20f4996738 MINOR: sample: export the generic sample conversion parser
Just export the function "find_sample_conv()" to prepare for the
generic sample conversion parser.
2013-12-02 23:31:32 +01:00
Willy Tarreau
bf0addb6ce BUG/MINOR: log: fix log-format parsing errors
Some errors were still reported as log-format instead of their respective
contexts (acl, request header, stick, ...). This is harmless and does not
require any backport.
2013-12-02 23:31:32 +01:00
Willy Tarreau
34c2fb6f89 BUG/MINOR: config: report the correct track-sc number in tcp-rules
When parsing track-sc* actions in tcp-request rules, we now automatically
compute the track-sc identifier number using %d when displaying an error
message. But the ID has been wrong since we introduced sc0: we continued
to report id+1 in error messages, causing some confusion.

No backport is needed.
2013-12-02 23:31:32 +01:00
Willy Tarreau
1903acdf3a BUG/MINOR: backend: fix target address retrieval in transparent mode
A very old bug resulting from some code refactoring causes
assign_server_address() to refrain from retrieving the destination
address from the client-side connection when transparent mode is
enabled and we're connecting to a server which has address 0.0.0.0.

The impact is low since such configurations are unlikely to ever
be encountered. The fix should be backported to older branches.
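
A minimal sketch of the intended behaviour, simplified to IPv4 and using a
hypothetical function name (the real assign_server_address() handles more
cases and address families):

    #include <string.h>
    #include <netinet/in.h>

    /* When transparent mode is enabled and the server is configured as
     * 0.0.0.0, connect to the address the client originally targeted. */
    static void assign_server_address_sketch(struct sockaddr_in *srv_addr,
                                             const struct sockaddr_in *cli_dst,
                                             int transparent)
    {
        if (transparent && srv_addr->sin_addr.s_addr == htonl(INADDR_ANY))
            memcpy(srv_addr, cli_dst, sizeof(*srv_addr));
    }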
2013-12-01 21:46:24 +01:00
Willy Tarreau
830bf61815 BUG/MINOR: connection: fix typo in error message report
"unknownn" -> "unknown"
2013-12-01 20:29:58 +01:00
Thierry FOURNIER
1c0054fe83 BUG/MINOR: arg: fix error reporting for add-header/set-header sample fetch arguments
The 'add-header %[samples]' parsing errors associated with http-request
and http-response are displayed with the wrong keyword.

Configuration entry:

   http-request set-header mon-header %[res.hdr(user-agent)]

Original error message:

   [WARNING] 323/150920 (16559) : parsing [haproxy.conf:36] : 'log-format' : sample fetch <res.hdr ...

Error message after this commit:

   [WARNING] 323/150929 (16580) : parsing [haproxy.conf:36] : 'http-request' : sample fetch <res.hdr ...
2013-11-28 18:25:18 +01:00
Thierry FOURNIER
4a04dc368d BUG/MEDIUM: sample: The function v4tov6 cannot support input and output overlap
This patch makes it possible to use v4tov6 with the same input and output
buffer. The overlap issue might have impacted the format of IPv4 addresses
stored into IPv6 tables.
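
A minimal sketch of an overlap-safe conversion, assuming the usual
IPv4-mapped IPv6 layout; the real v4tov6() may differ, but the key point is
saving the input bytes before the output is written:

    #include <string.h>
    #include <netinet/in.h>

    /* Convert an IPv4 address to an IPv4-mapped IPv6 address.  Safe even if
     * <sin6> and <sin> overlap, because the 4 input bytes are copied to a
     * temporary before the 16-byte result is built. */
    static void v4tov6_sketch(struct in6_addr *sin6, const struct in_addr *sin)
    {
        struct in_addr tmp = *sin;              /* save the input first    */

        memset(sin6, 0, sizeof(*sin6));         /* 80 zero bits ...        */
        sin6->s6_addr[10] = 0xff;               /* ... then 0xffff ...     */
        sin6->s6_addr[11] = 0xff;
        memcpy(&sin6->s6_addr[12], &tmp, 4);    /* ... then the IPv4 bytes */
    }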
2013-11-28 17:09:45 +01:00
Willy Tarreau
f465994198 BUG/MINOR: stats: do not report "via" on tracking servers in maintenance
When a server tracks another one, its state on the stats page always reports
"via xx/yy". That's convenient to know what server to act on to change the
state. But it is also possible to force the tracking server itself into
maintenance mode and in this case we should not report "via xx/yy" because
the tracked server can't do anything to change the server's state, which
is confusing. In practice there is nothing wrong in leaving it as-is,
except that it's highly misleading when looking at the stats page.

Note that we only change the HTML output, not the CSV one. The states are
already different : "MAINT" vs "MAINT(via)" and we expect anyone coding a
monitoring system based on the CSV output to know the differences between
all possible states.
2013-11-28 11:52:11 +01:00
Willy Tarreau
81cf08c5cd BUG/MAJOR: check: fix haproxy crash during soft-stop/soft-start
This is the continuation of the previous fix bc16cd8 "BUG/MAJOR: fix haproxy
crash when using server tracking instead of checks"; the soft-stop/start
states were not addressed by that fix.
2013-11-28 11:52:11 +01:00
Willy Tarreau
bc16cd81c4 BUG/MAJOR: fix haproxy crash when using server tracking instead of checks
Igor at owind reported a very recent bug (only present in the latest snapshot).
Commit "4a741432 MEDIUM: Paramatise functions over the check of a server"
causes up/down to die with tracked servers due to a typo.

The following call in set_server_down causes the server to put itself
down recursively because "check" is the current server's check, so once
fed to the function again, it will pass through the exact same path (note
we have the exact symmetry in set_server_up) :

	for (srv = s->tracknext; srv; srv = srv->tracknext)
		if (!(srv->state & SRV_MAINTAIN))
			/* Only notify tracking servers that are not already in maintenance. */
			set_server_down(check);

Instead we should stop the tracking server being visited in the loop :

	for (srv = s->tracknext; srv; srv = srv->tracknext)
		if (!(srv->state & SRV_MAINTAIN))
			/* Only notify tracking servers that are not already in maintenance. */
			set_server_down(&srv->check);

But that's not exactly enough because srv->check->server is only set when
checks are enabled, so ->server is NULL for tracking servers, still causing a
crash upon first iteration. The fix is easy and consists in always initializing
check->server when creating a new server, which is what was already done a few
patches later by 69d29f9 (MEDIUM: cfgparse: Factor out check initialisation).

With the fix above alone on top of current version or snapshot 20131122, the
problem disappears.

Thanks to Igor for testing and reporting the issue.
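
A minimal sketch of that initialisation, with the structures reduced to the
fields involved; the actual code lives in the server/configuration parsing
path:

    #include <stdlib.h>

    struct server;
    struct check  { struct server *server; /* back-pointer, needed even without checks */ };
    struct server { struct check check;    /* ... many other fields ... */ };

    /* Always wire the back-pointer when the server is allocated, even when
     * health checks stay disabled, so code reached through tracking never
     * dereferences a NULL check->server. */
    static struct server *new_server_sketch(void)
    {
        struct server *srv = calloc(1, sizeof(*srv));

        if (srv)
            srv->check.server = srv;
        return srv;
    }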
2013-11-27 17:10:07 +01:00
Willy Tarreau
86a446e685 MINOR: peers: accept to learn strings of different lengths
While analysing an old bug (9d9179b) with Emeric, we first believed
that the fix was wrong and that there was a potential for learning
one extra character in the peers learning code for strings due to
the use of table->key_size instead of table->key_size-1. In fact it
cannot happen with a normally behaving sender because the key sizes
are compared when synchronizing the table.

But this unveiled a suboptimal handling of strings. It can be quite
common to see admins reload haproxy to increase some key sizes when
seeing that user agents or cookies get truncated, or conversely to
reduce them after seeing they take too much memory and are never full.
The problem is that this will get rid of the table's contents because
of the size mismatch. While this is understandable for properly
formatted data (eg: IP addresses, integers, SSLIDs...) it's too bad
for strings.

So instead, make an exception to accept strings of incompatible lengths
and let the synchronization code truncate them to the appropriate size
just as if the keys were learned normally.

Thanks to this change, it is now possible to change the "len" parameter
of a string stick-table and restart without losing its contents.
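
A minimal sketch of the truncation, with hypothetical names; the real
synchronization code works on stick-table keys, but the idea is simply to
clamp the received length to the local key size instead of rejecting the
entry:

    #include <string.h>

    /* Store a string key received from a peer into a local table whose keys
     * are <table_key_size> bytes long, truncating if the peer's key is longer. */
    static void learn_string_key_sketch(char *table_key, size_t table_key_size,
                                        const char *recv_key, size_t recv_len)
    {
        size_t len = (recv_len < table_key_size - 1) ? recv_len : table_key_size - 1;

        memcpy(table_key, recv_key, len);
        table_key[len] = '\0';
    }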
2013-11-25 23:15:06 +01:00
Willy Tarreau
d6e999b127 OPTIM: connection: fold the error handling with handshake handling
Both of them are rare and are detected from the same flags source, so
let's detect errors in the handshake loop and remove two tests in the
fast path. This seems to improve overall performance by less than 0.5%
on connection-bound workloads.
2013-11-25 08:57:11 +01:00
Simon Horman
8c3d0be987 MEDIUM: Add DRAIN state and report it on the stats page
Add a DRAIN sub-state for a server which
will be shown on the stats page instead of UP if
its effective weight is zero.

Also, log if a server enters or leaves the DRAIN state
as the result of an agent check.
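
A minimal sketch of the stats-page decision described above (hypothetical
helper; the real code also distinguishes NOLB, MAINT and the check states):

    /* A server that is operationally up but has an effective weight of zero
     * is reported as DRAIN instead of UP. */
    static const char *srv_state_label_sketch(int up, unsigned int eweight)
    {
        if (!up)
            return "DOWN";
        return eweight ? "UP" : "DRAIN";
    }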

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
671b6f02b5 MEDIUM: Add enable and disable agent unix socket commands
The syntax of these new commands is:

enable agent <backend>/<server>
disable agent <backend>/<server>

These commands allow temporarily stopping and subsequently
re-starting an auxiliary agent check. The effect of this is as follows:

New checks are only initiated when the agent is in the enabled state. Thus,
disable agent will prevent any new agent checks from being initiated until
the agent is re-enabled using enable agent.

When an agent is disabled, the processing of an auxiliary agent check that
was initiated while the agent was still enabled is as follows: all
results that would alter the weight, specifically "drain" or a weight
returned by the agent, are ignored. The processing of the agent check is
otherwise unchanged.

The motivation for this feature is to allow the weight changing effects
of the agent checks to be paused to allow the weight of a server to be
configured using set weight without being overridden by the agent.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
58c32978b2 MEDIUM: Set rise and fall of agent checks to 1
This is achieved by moving rise and fall from struct server to struct check.

After this move the behaviour of the primary check, server->check, is
unchanged. However, the secondary agent check, server->agent, now has
independent rise and fall values, each of which is set to 1.

The result is that receiving "fail", "stopped" or "down" just once from the
agent will mark the server as down. And receiving a weight just once will
allow the server to be marked up if its primary check is in good health.

This opens up the scope to make the rise and fall values of the agent
check configurable; however, this has not been implemented at this
stage.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
2f1f955c8c MEDIUM: Do not mark a server as down if the agent is unavailable
In the case where agent-port is used and the agent
check is a secondary check, do not mark a server as down
if the agent becomes unavailable.

In this configuration the agent should only cause a server to be marked
as down if the agent returns "fail", "stopped" or "down".

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
d60d69138b MEDIUM: checks: Add supplementary agent checks
Allow an auxiliary agent check to be run independently of the
regular health check. This is enabled by the agent-check
server setting.

The agent-port, which specifies the TCP port to use for the agent's
connections, is required.

The agent-inter, which specifies the interval between agent checks and
the timeout of agent checks, is optional. If not set, the value for regular
checks is used.

e.g.
server	web1_1 127.0.0.1:80 check agent-port 10000

If either the health or agent check determines that a server is down
then it is marked as being down, otherwise it is marked as being up.

An agent health check is performed by opening a TCP socket and reading an
ASCII string. The string should have one of the following forms:

* An ASCII representation of a positive integer percentage.
  e.g. "75%"

  Values in this format will set the weight proportional to the initial
  weight of a server as configured when haproxy starts.

* The string "drain".

  This will cause the weight of a server to be set to 0, and thus it
  will not accept any new connections other than those that are
  accepted via persistence.

* The string "down", optionally followed by a description string.

  Mark the server as down and log the description string as the reason.

* The string "stopped", optionally followed by a description string.

  This currently has the same behaviour as "down".

* The string "fail", optionally followed by a description string.

  This currently has the same behaviour as "down".
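
As an illustration of the protocol above, here is a minimal sketch of an
agent a backend server could run; it listens on port 10000 (matching the
agent-port in the example) and answers every connection with a fixed weight.
A real agent would compute the reply from local load, or answer "drain",
"down", "stopped" or "fail".

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int one = 1;
        int lfd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(10000);              /* must match agent-port */

        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(lfd, 10) < 0)
            return 1;

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);

            if (cfd < 0)
                continue;
            write(cfd, "75%\n", 4);                /* weight relative to the configured weight */
            close(cfd);
        }
    }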

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
afc47ee7fc MEDIUM: Remove option lb-agent-chk
Remove option lb-agent-chk and thus the facility to configure
a stand-alone agent health check. This feature was added by
"MEDIUM: checks: Add agent health check". It will be replaced
by subsequent patches with features that allow an agent check
to be run as either a secondary check, along with any of the existing
checks, or as part of an http check with the status returned
in an HTTP header.

This patch does not entirely revert "MEDIUM: checks: Add agent health
check". The infrastructure it provides to parse the results of an
agent health check remains and will be re-used by the planned features
that are mentioned above.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Simon Horman
80fefaeb57 MEDIUM: Log agent fail, stopped or down as info
In the case where an agent check returns fail, stopped or down,
log this as info when logging the server status along with any
trailing message returned by the agent after fail, stopped or down.

Previously only the trailing message was logged as info, and if it was
omitted no info was logged.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:15 +01:00
Simon Horman
d858306ddb MEDIUM: Add helper function for failed checks
This consolidates some logic in preparation for enhancing it.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:15 +01:00
Simon Horman
5c9424258e MEDIUM: Add helper for task creation for checks
This helper is in preparation for adding a second struct check element
to struct server.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:15 +01:00
Kristoffer Grönlund
f65194a6fa LOW: systemd-wrapper: Write debug information to stdout
Write the command line used to call haproxy to stdout, as
well as the return code returned by the haproxy process.
2013-11-23 12:06:51 +01:00
Kristoffer Grönlund
66fd1d830e MEDIUM: systemd-wrapper: Kill child processes when interrupted
Send SIGINT to child processes when killed. This ensures that
the haproxy process managed by the systemd-wrapper is stopped
when "systemctl stop haproxy.service" is called.
2013-11-23 12:06:51 +01:00
Kristoffer Grönlund
1b6e75fa84 MEDIUM: haproxy-systemd-wrapper: Use haproxy in same directory
Locate the wrapper and use a haproxy executable found in the
same directory.

This patch lets the wrapper work in openSUSE.
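
A minimal sketch of the lookup, assuming a Linux /proc filesystem and a
hypothetical helper name: the wrapper's own path is resolved and "haproxy"
is taken from the same directory, falling back to a PATH lookup.

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void locate_haproxy_sketch(char *buf, size_t buflen)
    {
        char self[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", self, sizeof(self) - 1);

        if (n > 0) {
            char *slash;

            self[n] = '\0';
            slash = strrchr(self, '/');
            if (slash) {
                *slash = '\0';
                snprintf(buf, buflen, "%s/haproxy", self);   /* same directory */
                return;
            }
        }
        snprintf(buf, buflen, "haproxy");                    /* fall back to PATH */
    }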
2013-11-23 12:06:50 +01:00
Willy Tarreau
d32c399747 MINOR: stats: report correct throttling percentage for servers in slowstart
The column used to report the throttle percentage when a server is in
slowstart is based on the time only. This is wrong, because server weights
in slowstart are updated at most once a second, so the reported value is
wrong for at least one second during each step, which means all the time
when using short delays (< 20s).

The second point is that it's disturbing to see a weight < 100% without
any throttle at the end of the period (during the last second), because
the effective weight has not yet been updated.

Instead, we now compute the exact ratio between eweight and uweight and
report it. It's always accurate and describes the value being used instead
of using only the date.
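
A minimal sketch of the reported value, with a hypothetical helper name; the
exact rounding used by haproxy may differ:

    /* Throttle percentage actually applied to the configured weight:
     * the ratio between the effective and the user-configured weight. */
    static unsigned int throttle_pct_sketch(unsigned int eweight, unsigned int uweight)
    {
        if (!uweight)
            return 0;
        return (100U * eweight + uweight / 2) / uweight;   /* rounded to nearest */
    }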

It can be backported to 1.4 though it's not particularly important.
2013-11-21 15:30:45 +01:00
Willy Tarreau
004e045f31 BUG/MAJOR: server: weight calculation fails for map-based algorithms
A crash was reported by Igor at owind when changing a server's weight
on the CLI. Lukas Tribus could reproduce a related bug where setting
a server's weight would result in the new weight being multiplied by
the initial one. The two bugs are the same.

The incorrect weight calculation results in the total farm weight being
larger than what was initially allocated, causing the map index to be out
of bounds on some hashes. It's easy to reproduce using "balance url_param"
with a variable param, or with "balance static-rr".

It appears that the calculation is made in many places and is not always
right, and not always wrong in the same way. Thus, this patch introduces a
new function "server_recalc_eweight()" which is dedicated to the task
of computing ->eweight from many other elements, including uweight and
the current time (for slowstart), and all users now switch to this
function.
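
A minimal sketch of what such a single recalculation point can look like;
the names and the exact ramp-up formula are illustrative, not haproxy's
actual fields:

    #include <time.h>

    /* Derive the effective weight from the configured weight and the
     * slowstart progress, in one single place. */
    static unsigned int recalc_eweight_sketch(unsigned int uweight,
                                              time_t started, time_t now_s,
                                              time_t slowstart)
    {
        double frac;

        if (!slowstart || now_s >= started + slowstart)
            return uweight;                               /* warm-up finished */
        frac = (double)(now_s - started) / (double)slowstart;
        return 1 + (unsigned int)(uweight * frac);        /* never 0 during warm-up */
    }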

The patch is a bit large but the code was not trivially fixable in a way
that could guarantee this situation would not occur anymore. The fix is
much more readable and has been verified to work with all algorithms,
with both consistent and map-based hashes, and even with static-rr.

Slowstart was tested as well, just like enable/disable server.

The same bug is very likely present in 1.4 as well, so the patch will
probably need to be backported even though it will not apply as-is.

Thanks to Lukas and Igor for the information they provided to reproduce it.
2013-11-21 15:09:02 +01:00
Willy Tarreau
e7b73485d0 BUG/MEDIUM: checks: fix slow start regression after fix attempt
Commit 2e99390 (BUG/MEDIUM: checks: fix slowstart behaviour when server
tracking is in use) moved the slowstart task initialization within the
health check code and leaves it unset when checks are disabled. The
problem is that it's possible to trigger slowstart from the CLI by
issuing "disable server XXX / enable server XXX" even when checks are
disabled. The result is a crash when trying to wake up the slowstart
task of that server.

Move the task initialization earlier so that it is done even if the
checks are disabled.

This patch should be backported to 1.4 since the commit above was
backported there.
2013-11-21 15:07:55 +01:00
Godbach
c3916a7fca MINOR: buffer: align the last output line if there are less than 8 characters left
Commit c08057c does the alignment job for buffer_dump(), but it did not fix
the case where fewer than 8 characters are left on the last line, as below:

Dumping contents from byte 0 to byte 119
         0  1  2  3  4  5  6  7    8  9  a  b  c  d  e  f
  0000: 47 45 54 20 2f 69 6e 64 - 65 78 2e 68 74 6d 20 48   GET /index.htm H
  0010: 54 54 50 2f 31 2e 30 0d - 0a 55 73 65 72 2d 41 67   TTP/1.0..User-Ag
  ...
  0060: 6e 65 63 74 69 6f 6e 3a - 20 4b 65 65 70 2d 41 6c   nection: Keep-Al
  0070: 69 76 65 0d 0a 0d 0a                              ive....

The last line of the hex column is still overlapped by the text column. Since
an additional "- " is printed for output lines that have no fewer than 8
characters, two additional spaces should be printed when there are fewer than
8 characters, in order to keep the alignment. The result after the fix is as below:

Dumping contents from byte 0 to byte 119
         0  1  2  3  4  5  6  7    8  9  a  b  c  d  e  f
  0000: 47 45 54 20 2f 69 6e 64 - 65 78 2e 68 74 6d 20 48   GET /index.htm H
  0010: 54 54 50 2f 31 2e 30 0d - 0a 55 73 65 72 2d 41 67   TTP/1.0..User-Ag
  ...
  0060: 6e 65 63 74 69 6f 6e 3a - 20 4b 65 65 70 2d 41 6c   nection: Keep-Al
  0070: 69 76 65 0d 0a 0d 0a                                ive....
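
A minimal sketch of the alignment rule for the hex part of one dump line
(hypothetical helper; the real buffer_dump() also prints the offset and
ASCII columns): the "- " separator is only emitted once the line has at
least 8 bytes, otherwise two spaces keep the columns aligned.

    #include <stdio.h>

    static void dump_hex_line_sketch(FILE *out, const unsigned char *buf, int len)
    {
        int i;

        for (i = 0; i < 16; i++) {
            if (i < len)
                fprintf(out, "%02x ", buf[i]);
            else
                fprintf(out, "   ");                 /* pad missing bytes */
            if (i == 7)
                fputs(len >= 8 ? "- " : "  ", out);  /* separator or two spaces */
        }
    }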

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2013-11-21 08:07:04 +01:00
Bhaskar Maddala
10e26de4ea DOC: Documentation for hashing function, with test results.
Summary:
Added a document for hashing under internal docs explaining
hashing in haproxy along with the results of tests under the test
folder.

These documents together explain the motivation for adding
options for hashing algorithms, with the option of enabling or
disabling avalanche.
2013-11-20 22:14:47 +01:00
Simon Horman
125d099662 MEDIUM: Move health element to struct check
This is in preparation for associating an agent check
with a server, which runs in addition to the server's existing check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:36:07 +01:00
Simon Horman
cd5d7b678e MEDIUM: Add state to struct check
Add state to struct check. This is currently used to store one bit,
CHK_RUNNING, which is set if a check is running and clear otherwise.
This bit was previously SRV_CHK_RUNNING of the state element of struct
server.

This is in preparation for associating an agent check
with a server, which runs in addition to the server's existing check.

Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
2013-11-19 09:36:04 +01:00
Simon Horman
69d29f996b MEDIUM: cfgparse: Factor out check initialisation
This is in preparation for struct server having two elements
of type struct check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:36:01 +01:00
Simon Horman
4a741432be MEDIUM: Paramatise functions over the check of a server
Paramatise the following functions over the check of a server

* set_server_down
* set_server_up
* srv_getinter
* server_status_printf
* set_server_check_status
* set_server_disabled
* set_server_enabled

Generally the server parameter of these functions has been removed.
Where it is still needed it is obtained using check->server.

This is in preparation for associating an agent check
with a server, which runs in addition to the server's existing check.
By paramatising these functions they may act on each of the checks
without further significant modification.

Explanation of the SSP_O_HCHK portion of this change:

* Prior to this patch SSP_O_HCHK serves a single purpose which
  is to tell server_status_printf() whether it should print
  the details of the check of a server or not.

  With the paramatisation that this patch adds there are two cases.
  1) Printing the details of the check in which case a
     valid check parameter is needed.
  2) Not printing the details of the check in which case
     the contents of the check parameter are unused.

  In case 1) we could pass SSP_O_HCHK and a valid check, and
  in case 2) we could pass !SSP_O_HCHK and any value for check,
  including NULL.

  If NULL is used for case 2) then SSP_O_HCHK becomes superfluous,
  and as NULL is indeed used for case 2), SSP_O_HCHK has been removed.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:54 +01:00
Simon Horman
28b5ffc76f MEDIUM: Move result element to struct check
Move the result element from struct server to struct check.
This allows check results to be independent of the check's server.

This is in preparation for associating an agent check
with a server, which runs in addition to the server's existing check.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:52 +01:00
Simon Horman
6618300e13 MEDIUM: Split up struct server's check element
This is in preparation for associating an agent check
with a server, which runs in addition to the server's existing check.

The split has been made by:
* Moving elements of struct server's check element that will
  be shared by both checks into a new check_common element
  of struct server.
* Moving the remaining elements to a new struct check and
  making struct server's check element a struct check.
* Adding a server element to struct check, a back-pointer
  to the server element it is a member of.
  - At this time the server could be obtained using
    container_of, however, this will not be so easy
    once a second struct check element is added to struct server
    to accommodate an agent health check.
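
A rough sketch of the resulting layout described above, with the field lists
trimmed to illustrative members (the real structures carry many more fields):

    #include <sys/socket.h>

    struct server;

    struct check_common {
        /* settings shared by the health check and the future agent check */
        struct sockaddr_storage addr;   /* illustrative member */
    };

    struct check {
        struct server *server;          /* back-pointer to the owning server */
        int state;                      /* per-check state, e.g. CHK_RUNNING */
        int health, rise, fall;         /* per-check counters */
    };

    struct server {
        struct check_common check_common;
        struct check check;             /* the regular health check */
        /* a second "struct check agent" is added by later patches */
    };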

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 09:35:48 +01:00
Simon Horman
c69d547638 CLEANUP: Remove unused 'last_slowstart_change' field from struct peer
This was inadvertently added by "MEDIUM: checks: Add agent health check".
It appears to have never been used.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 08:04:59 +01:00
Simon Horman
a360844735 CLEANUP: Make parameters of srv_downtime and srv_getinter const
The parameters of srv_downtime and srv_getinter are not modified
and thus may be const.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-19 08:04:58 +01:00
Willy Tarreau
e155ec245a BUG/MINOR: http: fix build warning introduced with url32/url32_src
Commit 39c63c5 "url32+src - like base32+src but whole url including parameters"
was missing the last argument "const char *kw", resulting in the build warning
below :

src/proto_http.c:10351:2: warning: initialization from incompatible pointer type [enabled by default]
src/proto_http.c:10351:2: warning: (near initialization for 'sample_fetch_keywords.kw[50].process') [enabled by default]
src/proto_http.c:10352:2: warning: initialization from incompatible pointer type [enabled by default]
src/proto_http.c:10352:2: warning: (near initialization for 'sample_fetch_keywords.kw[51].process') [enabled by default]

It's harmless since it's not needed there anyway.
2013-11-18 18:33:32 +01:00
Willy Tarreau
6d4890cfea BUG/MEDIUM: http: fix possible parser crash when parsing erroneous "http-request redirect" rules
Baptiste Assmann reported a bug affecting the "http-request redirect"
parser. It may randomly crash when reporting an error message if the
syntax is not OK. It happens that this is caused by the output error
message pointer which was not initialized to NULL.

This bug is 1.5-specific (introduced in dev17), no backport is needed.
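
A minimal, self-contained illustration of the failure mode: error messages
are built by a memprintf()-style helper which frees the previous message
before replacing it, so the caller's pointer must start out NULL (the
stand-in below is simplified, not haproxy's memprintf()).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in: frees the previous message before replacing it,
     * which crashes if the pointer was never initialized. */
    static void set_errmsg(char **msg, const char *text)
    {
        free(*msg);
        *msg = strdup(text);
    }

    int main(void)
    {
        char *errmsg = NULL;        /* the initialization the parser was missing */

        set_errmsg(&errmsg, "invalid argument after 'redirect'");
        puts(errmsg);
        free(errmsg);
        return 0;
    }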
2013-11-18 18:07:35 +01:00
Neil - HAProxy List
39c63c56d2 url32+src - like base32+src but whole url including parameters
I have a need to limit traffic to each url from each source address, much
like base32+src but for the whole url including parameters (this came from
looking at the recent 'Haproxy rate limit per matching request' thread).

Attached is a patch that seems to do the job; it's a copy-and-paste job of
the base32 functions.

The url32 function seems to work too: using 2 machines to request the
same url locks me out of both if I abuse from either with the url32 key
function, and only of the one I abuse from if I use url32_src.

Neil
2013-11-18 06:50:38 +01:00
Willy Tarreau
3b44e729e5 CLEANUP: http: merge error handling for req* and http-request *
The reqdeny/reqtarpit and http-request deny/tarpit were using
a copy-paste of the error handling code because originally the
req* actions used to maintain their own stats. This is not the
case anymore so we can use the same error blocks for both.

The http-request rulesets still have precedence over req* so no
functionality was changed.
2013-11-16 10:30:14 +01:00
Willy Tarreau
687ba13e92 CLEANUP: http: homogenize processing of denied req counter
The reqdeny/reqideny and reqtarpit/reqitarpit rules used to maintain
the stats counters themselves while http-request deny/tarpit and
rspdeny/rspideny used to centralize them at the point where the
error is processed.

Thus, let's do the same for reqdeny/reqtarpit so that the functions
which iterate over the rules do not have to deal with these counters
anymore.
2013-11-16 10:13:35 +01:00
Willy Tarreau
8ac7249611 BUG/MINOR: stats: don't count tarpitted connections twice
When a connection is tarpitted, a denied req is counted once when the
action is applied, and then a failed req is counted when the tarpit
timeout expires. This is completely wrong as the tarpit is exactly
equivalent to a deny since it's a disguised deny.

So let's not increment the failed req anymore.

This fix may be backported to 1.4 which has the same issue.
2013-11-16 10:06:44 +01:00
Willy Tarreau
38d5892634 OPTIM/MINOR: mark the source address as already known on accept()
Commit 986a9d2d12 moved the source address from the stream interface
to the session, but it did not set the flag on the connection to
report that the source address is known. Thus when logs are enabled,
we had a call to getpeername() which is redundant with the result
from accept(). This patch simply sets the flag.
2013-11-16 00:17:59 +01:00
Willy Tarreau
2f877304ef OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes
When trying to speculatively send data to a server being connected to,
we see the following pattern :

    connect() = EINPROGRESS
    send() = EAGAIN
    epoll_ctl(add, W)
    epoll_wait() = EPOLLOUT
    send() = success
  > epoll_ctl(del, W)
  > recv() = EAGAIN
  > epoll_ctl(add, R)
    recv() = success
    epoll_ctl(del, R)

The reason for the failed recv() call is that the reading was marked
as speculative while we already have a polled I/O there. So we already
know when removing send write poll that the read is pending. Thus,
let's improve this by merging speculative I/O into polled I/O when
polled state changes. The result is now the following as expected :

    connect() = EINPROGRESS
    send() = EAGAIN
    epoll_ctl(add, W)
    epoll_wait() = EPOLLOUT
    send() = success
    epoll_ctl(mod, R)
    recv() = success
    epoll_ctl(del, R)

This is specific to epoll(), it doesn't make much sense at the moment
to do so for other pollers, because the cost of updating them is very
small.

The average performance gain on small requests is of 1.6% in TCP mode,
which is easily explained with the syscall stats below for 10000 forwarded
connections :

Before :
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 91.02    0.024608           0     60000         1 epoll_wait
  2.19    0.000593           0     20000           shutdown
  1.52    0.000412           0     10000     10000 connect
  1.36    0.000367           0     29998      9998 sendto
  1.09    0.000294           0     49993           epoll_ctl
  0.93    0.000252           0     50004     20002 recvfrom
  0.79    0.000214           0     20005           close
  0.62    0.000167           0     20001     10001 accept4
  0.25    0.000067           0     20002           setsockopt
  0.13    0.000035           0     10001           socket
  0.10    0.000028           0     10001           fcntl

After:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 87.59    0.024269           0     50012         1 epoll_wait
  3.19    0.000884           0     20000           shutdown
  2.33    0.000646           0     29996      9996 sendto
  2.02    0.000560           0     10005     10003 connect
  1.40    0.000387           0     40013     10013 recvfrom
  1.35    0.000374           0     40000           epoll_ctl
  0.64    0.000178           0     20001     10001 accept4
  0.55    0.000152           0     20005           close
  0.45    0.000124           0     20002           setsockopt
  0.31    0.000086           0     10001           fcntl
  0.17    0.000047           0     10001           socket

Overall :
   -16.6% epoll_wait
   -20%   recvfrom
   -20%   epoll_ctl

On HTTP, the gain is even better :

Before :
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 80.43    0.015386           0     60006         1 epoll_wait
  4.61    0.000882           0     30000     10000 sendto
  3.74    0.000715           0     20001     10001 accept4
  3.35    0.000640           0     10000     10000 connect
  2.66    0.000508           0     20005           close
  1.34    0.000257           0     30002     10002 recvfrom
  1.27    0.000242           0     30005           epoll_ctl
  1.20    0.000230           0     10000           shutdown
  0.62    0.000119           0     20003           setsockopt
  0.40    0.000077           0     10001           socket
  0.39    0.000074           0     10001           fcntl

After:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 83.47    0.020301           0     50008         1 epoll_wait
  4.26    0.001036           0     20005           close
  3.30    0.000803           0     30000     10000 sendto
  2.55    0.000621           0     20001     10001 accept4
  1.76    0.000428           0     10000     10000 connect
  1.20    0.000292           0     10000           shutdown
  1.14    0.000278           0     20001         1 recvfrom
  0.86    0.000210           0     20003           epoll_ctl
  0.71    0.000173           0     20003           setsockopt
  0.49    0.000120           0     10001           socket
  0.25    0.000060           0     10001           fcntl

Overall :
  -16.6% epoll_wait
  -33%   recvfrom
  -33%   epoll_ctl
2013-11-15 23:15:10 +01:00
Willy Tarreau
a0f4271497 MEDIUM: backend: add support for the wt6 hash
This function was designed for haproxy while testing other functions
in the past. Initially it was not planned to be used given the not
very interesting numbers it showed on real URL data : it is not as
smooth as the other ones. But later tests showed that the other ones
are extremely sensitive to the server count and the type of input data,
especially DJB2 which must not be used on numeric input. So in fact
this function is still a generally average performer and it can make
sense to merge it in the end, as it can provide an alternative to
sdbm+avalanche or djb2+avalanche for consistent hashing or when hashing
on numeric data such as a source IP address or a visitor identifier in
a URL parameter.
2013-11-14 16:37:50 +01:00
Bhaskar Maddala
b6c0ac94a4 MEDIUM: backend: Implement avalanche as a modifier of the hashing functions.
Summary:
Avalanche is supported not as a native hashing choice, but as a modifier
on the hashing function. Note that this means that possible configs
written after 1.5-dev4 using "hash-type avalanche" will get an informative
error instead. But as discussed on the mailing list it seems nobody ever
used it anyway, so let's fix it before the final 1.5 release.

The default values were selected for backward compatibility with previous
releases, as discussed on the mailing list, which means that the consistent
hashing will still apply the avalanche hash by default when no explicit
algorithm is specified.

Examples
  (default) hash-type map-based
	Map based hashing using sdbm without avalanche

  (default) hash-type consistent
	Consistent hashing using sdbm with avalanche

Additional Examples:

  (a) hash-type map-based sdbm
	Same as default for map-based above
  (b) hash-type map-based sdbm avalanche
	Map based hashing using sdbm with avalanche
  (c) hash-type map-based djb2
	Map based hashing using djb2 without avalanche
  (d) hash-type map-based djb2 avalanche
	Map based hashing using djb2 with avalanche
  (e) hash-type consistent sdbm avalanche
	Same as default for consistent above
  (f) hash-type consistent sdbm
	Consistent hashing using sdbm without avalanche
  (g) hash-type consistent djb2
	Consistent hashing using djb2 without avalanche
  (h) hash-type consistent djb2 avalanche
	Consistent hashing using djb2 with avalanche
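
For reference, a minimal sketch of the two hash functions and the avalanche
step applied as a modifier; the avalanche mixer below is a generic 32-bit
finalizer used as a stand-in, haproxy's own mixing constants may differ:

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t hash_djb2(const char *key, size_t len)
    {
        uint32_t h = 5381;

        while (len--)
            h = h * 33 + (unsigned char)*key++;
        return h;
    }

    static uint32_t hash_sdbm(const char *key, size_t len)
    {
        uint32_t h = 0;

        while (len--)
            h = (unsigned char)*key++ + (h << 6) + (h << 16) - h;
        return h;
    }

    /* Stand-in avalanche step (a common 32-bit finalizer). */
    static uint32_t avalanche32(uint32_t h)
    {
        h ^= h >> 16; h *= 0x85ebca6b;
        h ^= h >> 13; h *= 0xc2b2ae35;
        h ^= h >> 16;
        return h;
    }

    /* "hash-type <method> <function> [avalanche]" then boils down to: */
    static uint32_t hash_key(const char *key, size_t len, int use_djb2, int use_avalanche)
    {
        uint32_t h = use_djb2 ? hash_djb2(key, len) : hash_sdbm(key, len);

        return use_avalanche ? avalanche32(h) : h;
    }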
2013-11-14 16:37:50 +01:00
Bhaskar
98634f0c7b MEDIUM: backend: Enhance hash-type directive with an algorithm options
Summary:
In testing at tumblr, we found that using djb2 hashing instead of the
default sdbm hashing resulted in better workload distribution to our backends.

This commit implements a change, that allows the user to specify the hash
function they want to use. It does not limit itself to consistent hashing
scenarios.

The supported hash functions are sdbm (default), and djb2.

For a discussion of the feature and analysis, see mailing list thread
"Consistent hashing alternative to sdbm" :

      http://marc.info/?l=haproxy&m=138213693909219

Note: This change does NOT alter existing behaviour; for instance,
avalanche hashing is still always performed before applying
consistent hashing.
2013-11-14 16:37:50 +01:00