Commit Graph

65 Commits

Author SHA1 Message Date
Stéphane Cottin
23e9e93128 MINOR: log: Add logurilen tunable.
The default length of the request URI in log messages is 1024. In some use
cases, you need to keep the long trail of GET parameters. The only way to
increase this length is to recompile with DEFINE=-DREQURI_LEN=2048.

This commit introduces a tune.http.logurilen configuration directive,
allowing this to be tuned at runtime.
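
For example, to raise the limit to 4096 bytes (the value is purely
illustrative):

  global
    tune.http.logurilen 4096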
2017-06-02 11:06:36 +02:00
Lukas Tribus
23953686da DOC: update RFC references
A few doc and code comment updates bumping RFC references to the new
ones.
2017-04-28 18:58:11 +02:00
Emmanuel Hocdet
98263291cc MAJOR: ssl: bind configuration per certificate
crt-list is extended to support ssl configuration. You can now have
such a line in crt-list <file>:
mycert.pem [npn h2,http/1.1]

Supported settings include the "npn", "alpn", "verify", "ca_file",
"crl_file", "ecdhe" and "ciphers" configuration, as well as ssl options.

"crt-base" is also supported when fetching certificates.
2017-01-13 11:40:34 +01:00
Emmanuel Hocdet
5e0e6e409b MINOR: ssl: crt-list parsing factor
LINESIZE and MAX_LINE_ARGS are too low for parsing crt-list.
2016-06-20 17:29:56 +02:00
Pieter Baauw
46af170e41 MINOR: mailers: increase default timeout to 10 seconds
This allows the TCP connection to send multiple SYN packets, so one lost
packet does not cause the mail to be lost. It changes the socket timeout
from 2 to 10 seconds, which allows three SYN packets to be sent while
still waiting a little for their replies.
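
If your version also provides the mailers "timeout mail" directive, the
timeout can be set explicitly instead (the value below is an illustration,
not the new default):

  mailers mymailers
    mailer smtp1 192.168.0.1:587
    timeout mail 20s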

This patch should be backported to 1.6.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-17 10:19:08 +01:00
Willy Tarreau
270978492c MEDIUM: config: set tune.maxrewrite to 1024 by default
The tune.maxrewrite parameter used to be pre-initialized to half of
the buffer size since the very early days when buffers were very small.
It has grown to absurdly large values over the years to reach 8kB for a
16kB buffer. This prevents large requests from being accepted, which is
the opposite of the initial goal.

Many users fix it to 1024 which is already quite large for header
addition.

So let's change the default setting policy :
  - pre-initialize it to 1024
  - let the user tweak it
  - in any case, limit it to tune.bufsize / 2

This results in 15kB usable to buffer HTTP messages instead of 8kB, and
doesn't affect existing configurations which already force it.
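
Configurations that already force it keep working; for example:

  global
    tune.bufsize    16384
    tune.maxrewrite 1024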
2015-09-28 13:59:41 +02:00
Christopher Faulet
31af49d62b MEDIUM: ssl: Add options to forge SSL certificates
With this patch, it is possible to configure HAProxy to forge the SSL
certificate sent to a client using the SNI servername. We do it in the SNI
callback.

To enable this feature, you must pass the following bind options:

 * ca-sign-file <FILE> : This is the PEM file containing the CA certificate and
   the CA private key to create and sign server's certificates.

 * (optionally) ca-sign-pass <PASS>: This is the CA private key passphrase, if
   any.

 * generate-certificates: Enable the dynamic generation of certificates for a
   listener.

Because generating certificates is expensive, there is an LRU cache to store
them. Its size can be customized by setting the global parameter
'tune.ssl.ssl-ctx-cache-size'.
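
A hypothetical configuration using these options (paths and the cache size
are examples only):

  global
    tune.ssl.ssl-ctx-cache-size 1000

  frontend https-in
    bind :443 ssl crt default.pem ca-sign-file ca.pem generate-certificates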
2015-06-12 18:06:59 +02:00
Willy Tarreau
f3045d2a06 MAJOR: pattern: add LRU-based cache on pattern matching
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.

Regarding the risk of collisions, 10k entries at 10k req/s mean 1% risk
of a collision after 60 years, that's already much less than the memory's
reliability in most machines and more durable than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry from the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.

A test on a file containing 10k small regex showed that the regex
matching was limited to 6k/s instead of 70k with regular strings.
When enabling the LRU cache, the performance was back to 70k/s.
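
For example, to disable the cache (setting it to 0 disables it; any other
value sets the number of cached entries):

  global
    tune.pattern.cache-size 0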
2015-04-29 19:15:24 +02:00
Willy Tarreau
87b09668be REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.

In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.

The files stream.{c,h} were added and session.{c,h} removed.

The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.

Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.

Once all changes are completed, we should see approximately this :

   L7 - http_txn
   L6 - stream
   L5 - session
   L4 - connection | applet

There will be at most one http_txn per stream, and the same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.

Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-06 11:23:56 +02:00
Nenad Merdanovic
05552d4b98 MEDIUM: Add support for configurable TLS ticket keys
Until now, the TLS ticket keys could not be configured and
shared between multiple instances or multiple servers running HAProxy.
The result was that if a request got a TLS ticket from one instance/server
and hit another one afterwards, it would have to go through the full
SSL handshake and negotiation.

This patch enables adding a ticket file to the bind line, which will be
used for all SSL contexts created from that bind line. We can use the
same file on all instances or servers to mitigate this issue and have
consistent TLS tickets assigned. Clients will no longer have to negotiate
every time they change the handling process.
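
A sketch of such a bind line (the path is an example; the keyword is the
"tls-ticket-keys" bind option, with one key per line in the file):

  bind :443 ssl crt site.pem tls-ticket-keys /etc/haproxy/tls-ticket-keys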

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2015-02-28 23:10:22 +01:00
Willy Tarreau
d025648f7c MAJOR: init: automatically set maxconn and/or maxsslconn when possible
If a memory size limit is enforced using "-m" on the command line and
one or both of maxconn / maxsslconn are not set, instead of using the
build-time values, haproxy now computes the number of sessions that can
be allocated depending on a number of parameters among which :

  - global.maxconn (if set)
  - global.maxsslconn (if set)
  - maxzlibmem
  - tune.ssl.cachesize
  - presence of SSL in at least one frontend (bind lines)
  - presence of SSL in at least one backend (server lines)
  - tune.bufsize
  - tune.cookie_len

The purpose is to ensure that haproxy will not run out of memory
when maxing out all parameters. If neither maxconn nor maxsslconn are
used, it will consider that 100% of the sessions involve SSL on sides
where it's supported. That means that it will typically optimize maxconn
for SSL offloading or SSL bridging on all connections. This generally
means that the simple act of enabling SSL in a frontend or in a backend
will significantly reduce the global maxconn but in exchange, it
will guarantee that it will not fail.

All metrics may be enforced using #defines to accommodate variations in
SSL libraries or various allocation sizes.
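
For example, starting haproxy with a 256 MB memory limit (the "-m" flag
takes megabytes) and letting it compute maxconn by itself:

  haproxy -f /etc/haproxy/haproxy.cfg -m 256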
2015-01-15 21:45:22 +01:00
Willy Tarreau
d92aa5c44a MINOR: global: report information about the cost of SSL connections
An SSL connection takes some memory when it exists and during handshakes.
We measured up to 16kB for an established endpoint, and up to 76 extra kB
during a handshake. The SSL layer stores these values into the global
struct during initialization. If other SSL libs are used, it's easy to
change these values. Anyway they'll only be used as gross estimates in
order to guess the max number of SSL conns that can be established when
memory is constrained and the limit is not set.
2015-01-15 21:34:39 +01:00
Willy Tarreau
75abcb3106 MINOR: config: extend the default max hostname length to 64 and beyond
Some users reported that the default max hostname length of 32 is too
short in some environments. This patch does two things :

  - it relies on the system's max hostname length as found in MAXHOSTNAMELEN
    if it is set. This is the most logical thing to do as the system libs
    generally present the appropriate value supported by the system. This
    value is 64 on Linux and 256 on Solaris, to give a few examples.

  - otherwise it defaults to 64

It is still possible to override this value by defining MAX_HOSTNAME_LEN at
build time. After some observation time, this patch may be backported to
1.5 if it does not cause any build issue, as it is harmless and may help
some users.
2015-01-14 11:52:34 +01:00
Willy Tarreau
a24adf0795 MAJOR: session: only wake up as many sessions as available buffers permit
We've already experimented with three wake up algorithms when releasing
buffers : the first naive one used to wake up far too many sessions,
causing many of them not to get any buffer. The second approach which
was still in use prior to this patch consisted in waking up either 1
or 2 sessions depending on the number of FDs we had released. And this
was still inaccurate. The third one tried to cover the accuracy issues
of the second and took into consideration the number of FDs the sessions
would be willing to use, but most of the time we ended up waking up too
many of them for nothing, or deadlocking by lack of buffers.

This patch completely removes the need to allocate two buffers at once.
Instead it splits allocations into critical and non-critical ones and
implements a reserve in the pool for this. The deadlock situation happens
when all buffers are allocated for requests pending in a maxconn-limited
server queue, because then there's no more way to allocate buffers for
responses, and these responses are critical to releasing the server's
connection in order to release the pending requests. In fact maxconn on
a server creates a dependence between sessions and particularly between
the oldest session's responses and the latest session's requests. Thus, it is
mandatory to get a free buffer for a response in order to release a
server connection, which will in turn permit a request buffer to be released.

Since we definitely have non-symmetrical buffers, we need to implement
this logic in the buffer allocation mechanism. What this commit does is
implement a reserve of buffers which can only be allocated for responses
and that will never be allocated for requests. This is made possible by
the requester indicating how much margin it wants to leave after the
allocation succeeds. Thus it is a cooperative allocation mechanism : the
requester (process_session() in general) prefers not to get a buffer in
order to respect others' need for response buffers. The session management
code always knows if a buffer will be used for requests or responses, so
that is not difficult :

  - either there's an applet on the initiator side and we really need
    the request buffer (since currently the applet is called in the
    context of the session)

  - or we have a connection and we really need the response buffer (in
    order to support building and sending an error message back)

This reserve ensures that we don't take all allocatable buffers for
requests waiting in a queue. The downside is that all the extra buffers
are really allocated to ensure they can be allocated. But with small
values it is not an issue.

With this change, we don't observe any more deadlocks even when running
with maxconn 1 on a server under severely constrained memory conditions.

The code becomes a bit tricky, it relies on the scheduler's run queue to
estimate how many sessions are already expected to run so that it doesn't
wake up everyone with too few resources. A better solution would probably
consist in having two queues, one for urgent requests and one for normal
requests. A failed allocation for a session dealing with an error, a
connection event, or the need for a response (or request when there's an
applet on the left) would go to the urgent request queue, while other
requests would go to the other queue. Urgent requests would be served
from 1 entry in the pool, while the regular ones would be served only
according to the reserve. Despite not yet having this, it works
remarkably well.

This mechanism is quite efficient, we don't perform too many wake up calls
anymore. For 1 million sessions elapsed during massive memory contention,
we observe about 4.5M calls to process_session() compared to 4.0M without
memory constraints. Previously we used to observe up to 16M calls, which
roughly means 12M failures.

During a test run under high memory constraints (limit enforced to 27 MB
instead of the 58 MB normally needed), performance used to drop by 53% prior
to this patch. Now with this patch instead it *increases* by about 1.5%.

The best effect of this change is that by limiting the memory usage to about
2/3 to 3/4 of what is needed by default, it's possible to increase performance
by up to about 18% mainly due to the fact that pools are reused more often
and remain hot in the CPU cache (observed on regular HTTP traffic with 20k
objects, buffers.limit = maxconn/10, buffers.reserve = limit/2).
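
Expressed as a configuration sketch (assuming the tunables are exposed as
tune.buffers.limit and tune.buffers.reserve; the values follow the ratios
above and are only illustrative):

  global
    maxconn 30000
    tune.buffers.limit   3000    # maxconn/10
    tune.buffers.reserve 1500    # limit/2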

Below is an example of scenario which used to cause a deadlock previously :
  - connection is received
  - two buffers are allocated in process_session() then released
  - one is allocated when receiving an HTTP request
  - the second buffer is allocated then released in process_session()
    for request parsing then connection establishment.
  - poll() says we can send, so the request buffer is sent and released
  - process session gets notified that the connection is now established
    and allocates two buffers then releases them
  - all other sessions do the same till one cannot get the request buffer
    without hitting the margin
  - and now the server responds. stream_interface allocates the response
    buffer and manages to get it since it has higher priority, being for a
    response.
  - but process_session() cannot allocate the request buffer anymore

  => We could end up with all buffers used by responses so that none may
     be allocated for a request in process_session().

When the applet processing leaves the session context, the test will have
to be changed so that we always allocate a response buffer regardless of
the left side (eg: H2->H1 gateway). A final improvement would consist in
being able to only retry the failed I/O operation without waking up a
task, but to date all experiments to achieve this have proven not to be
reliable enough.
2014-12-24 23:47:33 +01:00
Willy Tarreau
edee1d60b7 MEDIUM: stick-table: make it easier to register extra data types
Some users want to add their own data types to stick tables. We don't
want to use a linked list here for performance reasons, so we need to
continue to use an indexed array. This patch allows one to reserve a
compile-time-defined number of extra data types by setting the new
macro STKTABLE_EXTRA_DATA_TYPES to anything greater than zero, keeping
in mind that anything larger will slightly inflate the memory consumed
by stick tables (not per entry though).

Then calling stktable_register_data_store() with the new keyword will
either register a new keyword or fail if the desired entry was already
taken or the keyword already registered.

Note that this patch does not dictate how the data will be used, it only
offers the possibility to create new keywords and have an index to
reference them in the config and in the tables. The caller will not be
able to use stktable_data_cast() and will have to explicitly cast the
table pointers to the expected types. It can be used for experimentation
as well.
2014-07-15 19:14:52 +02:00
Willy Tarreau
4e957907aa MINOR: log: make MAX_SYSLOG_LEN overridable at build time
This value was set in log.h without any #ifndef around, so when one
wanted to change it, a patch was needed. Let's move it to defaults.h
with the usual #ifndef so that it's easier to change it.
2014-06-27 18:13:53 +02:00
Simon Horman
98637e5bff MEDIUM: Add external check
Add an external check which makes use of an external process to
check the status of a server.
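
A hypothetical backend using it (the script path is an example; recent
versions may additionally require an "external-check" keyword in the
global section):

  backend app
    option external-check
    external-check command /usr/local/bin/check_server.sh
    server s1 10.0.0.1:80 check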
2014-06-20 07:10:07 +02:00
Emeric Brun
c8b27b6c68 MEDIUM: ssl: add 300s supported time skew on OCSP response update.
OCSP_MAX_RESPONSE_TIME_SKEW can be set to a different value at
compilation (default is 300 seconds).
2014-06-19 14:37:30 +02:00
Emeric Brun
4147b2ef10 MEDIUM: ssl: basic OCSP stapling support.
The support is all based on static responses. This doesn't add any
request / response logic to HAProxy, but allows a way to update
information through the socket interface.

Currently certificates specified using "crt" or "crt-list" on "bind" lines
are loaded as PEM files.
For each PEM file, haproxy checks for the presence of a file at the same path
suffixed by ".ocsp". If such a file is found, support for the TLS Certificate
Status Request extension (also known as "OCSP stapling") is automatically
enabled. The content of this file is optional. If not empty, it must contain
a valid OCSP Response in DER format. In order to be valid an OCSP Response
must comply with the following rules: it has to indicate a good status,
it has to be a single response for the certificate of the PEM file, and it
has to be valid at the moment of addition. If these rules are not respected
the OCSP Response is ignored and a warning is emitted. In order to identify
which certificate an OCSP Response applies to, the issuer's certificate is
necessary. If the issuer's certificate is not found in the PEM file, it will
be loaded from a file at the same path as the PEM file suffixed by ".issuer"
if it exists; otherwise it will fail with an error.

It is possible to update an OCSP Response from the unix socket using:

  set ssl ocsp-response <response>

This command is used to update an OCSP Response for a certificate (see "crt"
on "bind" lines). Same controls are performed as during the initial loading of
the response. The <response> must be passed as a base64 encoded string of the
DER encoded response from the OCSP server.

Example:
  openssl ocsp -issuer issuer.pem -cert server.pem \
               -host ocsp.issuer.com:80 -respout resp.der
  echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
               socat stdio /var/run/haproxy.stat

This feature is automatically enabled on openssl 0.9.8h and above.

This work was performed jointly by Dirkjan Bussink of GitHub and
Emeric Brun of HAProxy Technologies.
2014-06-18 18:28:56 +02:00
Willy Tarreau
4bfc580dd3 MEDIUM: session: maintain per-backend and per-server time statistics
Using the last rate counters, we now compute the queue, connect, response
and total times per server and per backend with a 95% accuracy over the last
1024 samples. The operation is cheap so we don't need to condition it.
2014-06-17 17:15:56 +02:00
Remi Gacogne
f46cd6e4ec MEDIUM: ssl: Add the option to use standardized DH parameters >= 1024 bits
When no static DH parameters are specified, this patch makes haproxy
use standardized (RFC 2409 / RFC 3526) DH parameters with prime lengths
of 1024, 2048, 4096 or 8192 bits for DHE key exchange. The size of the
temporary/ephemeral DH key is computed as the minimum of the RSA/DSA server
key size and the value of a new option named tune.ssl.default-dh-param.
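
For example, to cap ephemeral DH parameters at 2048 bits:

  global
    tune.ssl.default-dh-param 2048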
2014-06-12 16:12:23 +02:00
Willy Tarreau
cc08d2c9ff MEDIUM: counters: stop relying on session flags at all
Till now, we had one flag per stick counter to indicate if it was
tracked in a backend or in a frontend. We just had to add another
flag per stick-counter to indicate if it relies on contents or just
connection. These flags are quite painful to maintain and tend to
easily conflict with other flags if their number is changed.

The correct solution consists in moving the flags to the stkctr struct
itself, but currently this struct is made of 2 pointers, so adding a
new entry there to store only two bits will cause at least 16 more bytes
to be eaten per counter due to alignment issues, and we definitely don't
want to waste tens to hundreds of bytes per session just for things that
most users don't use.

Since we only need to store two bits per counter, an intermediate
solution consists in replacing the entry pointer with a composite
value made of the original entry pointer and the two flags in the
2 unused lower bits. If later a need for other flags arises, we'll
have to store them in the struct.

A few inline functions have been added to abstract the retrieval
and assignment of the pointers and flags, resulting in very few
changes. That way there is no more dependence on the number of
stick-counters and their position in the session flags.
2014-01-28 23:34:45 +01:00
Willy Tarreau
f3338349ec BUG/MEDIUM: counters: flush content counters after each request
One year ago, commit 5d5b5d8 ("MEDIUM: proto_tcp: add support for tracking
L7 information") brought support for tracking L7 information in tcp-request
content rules. Two years earlier, commit 0a4838c ("[MEDIUM] session-counters:
correctly unbind the counters tracked by the backend") used to flush the
backend counters after processing a request.

While that earliest patch was correct at the time, it became wrong after
the second patch was merged. The code does what it says, but the concept
is flawed. "TCP request content" rules are evaluated for each HTTP request
over a single connection. So if such a rule in the frontend decides to
track any L7 information or to track L4 information when an L7 condition
matches, then it is applied to all requests over the same connection even
if they don't match. This means that a rule such as :

     tcp-request content track-sc0 src if { path /index.html }

will count one request for index.html, and another one for each of the
objects present on this page that are fetched over the same connection
which sent the initial matching request.

Worse, it is possible to make the code do stupid things by using multiple
counters:

     tcp-request content track-sc0 src if { path /foo }
     tcp-request content track-sc1 src if { path /bar }

Just sending two requests first, one with /foo, one with /bar, shows
twice the number of requests for all subsequent requests, just because
both of them persist after the end of the request.

So the decision to flush backend-tracked counters was not the correct
one. In practice, what is important is to flush content-based rules
since they are the ones evaluated for each request.

Doing so requires new flags in the session however, to keep track of
which stick-counter was tracked by what ruleset. A later change might
make this easier to maintain over time.

This bug is 1.5-specific, no backport to stable is needed.
2014-01-28 21:40:28 +01:00
Simon Horman
58c32978b2 MEDIUM: Set rise and fall of agent checks to 1
This is achieved by moving rise and fall from struct server to struct check.

After this move the behaviour of the primary check, server->check is
unchanged. However, the secondary agent check, server->agent, now has
independent rise and fall values, each of which is set to 1.

The result is that receiving "fail", "stopped" or "down" just once from the
agent will mark the server as down. And receiving a weight just once will
allow the server to be marked up if its primary check is in good health.

This opens up the scope to allow the rise and fall values of the agent
check to be configurable, however this has not been implemented at this
stage.
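
A hypothetical server line combining the regular check with an agent check
(addresses and ports are examples only):

  backend app
    server s1 10.0.0.1:80 check agent-check agent-port 9999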

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-11-25 07:31:16 +01:00
Willy Tarreau
47060b6ae0 MINOR: cli: make it possible to enter multiple values at once with "set table"
The "set table" statement allows to create new entries with their respective
values. Till now it was limited to a single data type per line, requiring as
many "set table" statements as the desired data types to be set. Since this
is only a parser limitation, this patch gets rid of it. It also allows the
creation of a key with no data types (all reset to their default values).
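
For example, creating an entry and setting two data types at once (assuming
the table stores these types):

  echo "set table mytable key 10.0.0.1 data.gpc0 3 data.conn_cnt 5" | \
       socat stdio /var/run/haproxy.stat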
2013-08-01 21:17:19 +02:00
Willy Tarreau
b4c8493a9f MINOR: session: make the number of stick counter entries more configurable
In preparation of more flexibility in the stick counters, make their
number configurable. It still defaults to 3, which is the minimum
accepted value. Changing the value alone is not sufficient to get
more counters: some bitfields still need to be updated and the TCP
actions need to be updated as well, but this change tries to make
those updates easier, which is nice for experimentation purposes.
2013-08-01 21:17:14 +02:00
Willy Tarreau
bf43f6eb32 MINOR: defaults: allow REQURI_LEN and CAPTURE_LEN to be redefined
Some users want to change these default settings but it was not
convenient.
2013-06-10 10:30:09 +02:00
Emeric Brun
6924ef8b12 BUG/MEDIUM: ssl: ECDHE ciphers not usable without named curve configured.
The fix consists in using prime256v1 as the default named curve to initialize ECDHE ciphers if none is configured.
2013-03-06 19:08:26 +01:00
Emeric Brun
4663577e24 MINOR: build: allow packagers to specify the ssl cache size
This is done by passing the default value to SSLCACHESIZE in sessions.
Users can use tune.sslcachesize to change this value.
By default, it is set to 20000 sessions, matching OpenSSL's internal cache size.
Currently, a session entry size is between 592 and 616 bytes depending on the arch.
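
For example (in later versions this tunable is spelled tune.ssl.cachesize;
the value is purely illustrative):

  global
    tune.sslcachesize 50000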
2012-11-15 10:52:19 +01:00
Willy Tarreau
ed7f836f07 BUG/MINOR: stream_interface: don't loop over ->snd_buf()
It is stupid to loop over ->snd_buf() because snd_buf() itself already
loops and stops when system buffers are full. By looping over it again,
we lose the information that the buffers are full and perform one useless syscall.

Furthermore, this causes issues when dealing with large uploads while waiting
for a connection to establish, as it can report a server reject of some data
as a connection abort, which is wrong.

1.4 does not have this issue as it loops at most twice (once for each buffer
half) and exits as soon as system buffers are full. So no backport is needed.
2012-10-29 23:30:33 +01:00
Emeric Brun
76d8895c49 MINOR: ssl: add defines LISTEN_DEFAULT_CIPHERS and CONNECT_DEFAULT_CIPHERS.
These ones are used to set the default ciphers suite on "bind" lines and
"server" lines respectively, instead of using OpenSSL's defaults. These
are probably mainly useful for distro packagers.
2012-10-05 22:11:15 +02:00
Willy Tarreau
56a77e5933 MEDIUM: connection: complete the polling cleanups
I/O handlers now all use __conn_{sock,data}_{stop,poll,want}_* instead
of returning dummy flags. The code has become slightly simpler because
some tricks such as the MIN_RET_FOR_READ_LOOP are not needed anymore,
and the data handlers which switch to a handshake handler do not need
to disable themselves anymore.
2012-09-03 20:47:35 +02:00
Willy Tarreau
ac1932da3e MEDIUM: tune.http.maxhdr makes it possible to configure the maximum number of HTTP headers
For a long time, the max number of headers was taken as a part of the buffer
size. Since the header size can be configured at runtime, it does not make
much sense anymore.

Nothing was making it necessary to have a static value, so let's turn this into
a tunable with a default value of 101 which equals what was previously used.
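
For example, to accept messages carrying more headers than the default of 101:

  global
    tune.http.maxhdr 128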
2011-10-24 19:14:41 +02:00
Hervé COMMOWICK
ec032d63a6 [MINOR] check: add redis check support
This patch provides a new "option redis-check" statement to enable server health checks based on the Redis PING request (http://www.redis.io/commands/ping).
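
A minimal backend sketch using it (the address is an example):

  backend redis
    option redis-check
    server r1 127.0.0.1:6379 check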
2011-08-06 15:52:47 +02:00
Willy Tarreau
14acc7072e [OPTIM] stream_sock: don't use splice on too small payloads
It's more expensive to call splice() on short payloads than to use
recv()+send(). One of the reasons is that doing a splice() involves
allocating a pipe. One other reason is that the kernel will have to
copy itself if we try to splice less than a page. So let's set a
threshold of 4kB below which we don't splice.

A quick test shows that on chunked encoded data, with splice we had
6826 syscalls (1715 splice, 3461 recv, 1650 send) while with this
patch, the same transfer resulted in 5793 syscalls (3896 recv, 1897
send).
2011-05-30 18:42:41 +02:00
Willy Tarreau
bca9969daf [MEDIUM] cookie: support client cookies with some contents appended to their value
In all cookie persistence modes but prefix, we now support cookies whose
value is suffixed with some contents after a vertical bar ('|'). This will
be used to pass an optional expiration date. So as of now, we only consider
the part of the cookie value which appears before the vertical bar.
(cherry picked from commit a4486bf4e5b03b5a980d03fef799f6407b2c992d)
2010-10-30 19:04:32 +02:00
Gabor Lekeny
b4c81e4c81 [MINOR] checks: add support for LDAPv3 health checks
This patch provides a new "option ldap-check" statement to enable
server health checks based on LDAPv3 bind requests.
(cherry picked from commit b76b44c6fed8a7ba6f0f565dd72a9cb77aaeca7c)
2010-10-30 19:04:31 +02:00
Willy Tarreau
bce7088275 [MEDIUM] add ability to connect to a server from an IP found in a header
Using get_ip_from_hdr2() we can look for occurrence #X or #-X and
extract the IP it contains. This is typically designed for use with
the X-Forwarded-For header.

Using "usesrc hdr_ip(name,occ)", it becomes possible to use the IP address
found in <name>, and possibly specify occurrence number <occ>, as the
source to connect to a server. This is possible both in a server and in
a backend's source statement. This is typically used to reuse the source
IP previously set by an upstream proxy.
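
A hypothetical backend using the last occurrence of X-Forwarded-For as the
source address (requires transparent proxy support):

  backend app
    source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
    server s1 192.168.1.10:80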
2010-03-30 10:39:43 +02:00
Willy Tarreau
e9d8788fdd [MINOR] checks: make the HTTP check code add the CRLF itself
Currently we cannot easily add headers nor anything to HTTP checks
because the requests are pre-formatted with the last CRLF. Make the
check code add the CRLF itself so that we can later add useful info.
2010-01-27 20:16:12 +01:00
Willy Tarreau
477ecd8627 [MEDIUM] config: remove the limitation of 10 config files
Now that we use a linked list, there is no limit anymore.
2010-01-03 21:22:14 +01:00
Willy Tarreau
deb9ed8f60 [MEDIUM] config: remove the limitation of 10 reqadd/rspadd statements
Now that we use a linked list, there is no limit anymore.
2010-01-03 21:22:14 +01:00
Krzysztof Piotr Oledzki
97f07b832f [MEDIUM] Decrease server health based on http responses / events, version 3
Implement decreasing health based on observing communication between
HAProxy and servers.

Changes in this version 2:
 - documentation
 - close race between a started check and health analysis event
 - don't force fastinter if it is not set
 - better names for options
 - layer4 support

Changes in this version 3:
 - add stats
 - port to the current 1.4 tree
2009-12-16 00:29:27 +01:00
Krzysztof Piotr Oledzki
f7089f5852 [MINOR] Capture & display more data from health checks, v2
Capture & display more data from health checks, like
strerror(errno) for failed L4 checks or the first line of a response
for successful/failed L7 checks.

Non-ASCII or control characters are masked with
chunk_htmlencode() (html stats) or chunk_asciiencode() (logs).
2009-10-10 21:51:16 +02:00
Willy Tarreau
31971e536a [MEDIUM] add support for infinite forwarding
In TCP, we don't want to forward chunks of data, we want to forward
indefinitely. This patch introduces a special value for the amount
of data to be forwarded. When buffer_forward() is called with
BUF_INFINITE_FORWARD, it configures the buffer to never stop
forwarding until the end.
2009-09-20 12:07:52 +02:00
Willy Tarreau
5ca791da8d [CLEANUP] move remaining stats sockets code to dumpstats
What remains of the stats socket code has nothing to do in proto_uxst
anymore and must move to dumpstats. The code is much cleaner and more
structured. It was also an opportunity to rename AN_REQ_UNIX_STATS
as AN_REQ_STATS_SOCK as the stats socket is no longer unix-specific
either.

The last item referring to stats in proto_uxst is the setting of the
task's nice value which should in fact come from the listener.
2009-08-16 19:35:36 +02:00
Willy Tarreau
3ad6a7640b [MINOR] export the hostname variable so that all the code can access it
The hostname variable will be used later, export it.
2009-08-16 10:08:02 +02:00
Willy Tarreau
5d01a63b78 [MEDIUM] config: support loading multiple configuration files
We now support up to 10 distinct configuration files. They are
all loaded in the order defined by -f <file1> -f <file2> ...

This can be useful in order to store global, private, public,
etc... configurations in distinct files.
2009-06-23 08:17:17 +02:00
Willy Tarreau
c9fe4562c2 [MINOR] make DEFAULT_MAXCONN user-configurable at build time
The only way to set this previously was to set SYSTEM_MAXCONN
which serves a different purpose.
2009-06-15 16:34:03 +02:00
Willy Tarreau
8f38bd0497 [MINOR] add basic signal handling functions
These functions will be used to deliver asynchronous signals in order
to make the signal handling functions more robust. The goal is to keep
the same interface to signal handlers.
2009-05-10 09:24:23 +02:00
Maik Broemme
2850cb42b6 [MINOR] add X-Original-To: header
I have attached a patch which will add a new header, 'X-Original-To', on
every HTTP request. If you have HAProxy running in transparent mode
with a big number of SQUID servers behind it, it is very nice to have
the original destination IP as a common header to make decisions based
on it.

The whole thing is configurable with a new option 'originalto'. I have
updated the sourcecode as well as the documentation. The 'haproxy-en.txt'
and 'haproxy-fr.txt' files are untouched, due to lack of my french
language knowledge. ;)
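
A hypothetical use (the network value is an example):

  frontend fe
    option originalto except 127.0.0.0/8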

Also, the patch adds this header for IPv4 only. I don't have any IPv6 test
environment running here and don't know if getsockopt() with SO_ORIGINAL_DST
will work on IPv6. If someone knows and wants to test it, I can modify
the diff. Feel free to ask me questions or suggest things which should be changed. :)

--Maik
2009-05-01 16:22:33 +02:00