Commit Graph

Willy Tarreau
de39c9b10f CLEANUP: appsession: remove the last include files
These were include/common/appsession.h and include/common/sessionhash.h.
2015-08-10 19:42:30 +02:00
Willy Tarreau
5b4dd683cb MINOR: standard: provide htonll() and ntohll()
These are the 64-bit equivalent of htonl() and ntohl(). They're a bit
tricky in order to avoid expensive operations.

The principle consists in letting the compiler detect that we're playing
with a union and simplify most or all operations. The asm-optimized
htonl() version involving bswap (x86) / rev (arm) / other is a single
operation on little endian, or a NOP on big endian. In both cases,
this lets the compiler "see" that we're rebuilding a 64-bit word from
two 32-bit quantities that fit into a 32-bit register. On big endian,
the whole code is optimized out. On little endian, with a decent compiler,
a few bswap and 2 shifts are left, which is the minimum acceptable.
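
For illustration only, a portable version of the union trick could look
like this (my_htonll is a hypothetical name, not necessarily the actual
implementation):

  #include <stdint.h>
  #include <arpa/inet.h>

  static inline uint64_t my_htonll(uint64_t n)
  {
      union {
          uint64_t u64;
          uint32_t u32[2];
      } v;

      /* On big endian, htonl() is a NOP and the whole function optimizes
       * out. On little endian, the two htonl() calls become bswap
       * instructions and the union tells the compiler how to rebuild the
       * 64-bit word from the two halves.
       */
      v.u32[0] = htonl((uint32_t)(n >> 32));
      v.u32[1] = htonl((uint32_t)n);
      return v.u64;
  }

Since byte swapping is its own inverse, ntohll() is the same operation.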
2015-07-21 23:50:06 +02:00
Thierry FOURNIER
763a5d85f7 MINOR: standard: add 64 bits conversion functions
This patch adds 3 functions for 64-bit integer conversion; a sketch of the
capping behavior follows the list.

 * lltoa_r     : converts a signed 64-bit integer to a string
 * read_uint64 : converts a string to an unsigned 64-bit integer with capping
 * read_int64  : converts a string to a signed 64-bit integer with capping
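
A minimal sketch of the capping idea for the unsigned variant (hypothetical
code; the real functions live in src/standard.c and differ in API details):

  #include <stdint.h>

  /* Parse decimal digits from <s>, capping at UINT64_MAX on overflow. */
  static uint64_t my_read_uint64(const char *s)
  {
      uint64_t res = 0;

      for (; *s >= '0' && *s <= '9'; s++) {
          unsigned int digit = *s - '0';

          /* would res * 10 + digit overflow ? then cap */
          if (res > (UINT64_MAX - digit) / 10)
              return UINT64_MAX;
          res = res * 10 + digit;
      }
      return res;
  }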
2015-07-21 23:27:10 +02:00
David Carlier
e6c3941668 BUILD/MINOR: tools: rename popcount to my_popcountl
This is in order to avoid conflicting with the NetBSD popcount* functions
available since the 6.x release; the final 'l' indicates that the argument
is a long, as NetBSD does.

This patch could be backported to 1.5 to fix the build issue there as well.
2015-07-02 11:32:25 +02:00
Thierry FOURNIER
1480bd8dd2 MINOR: standard: add function that converts signed int to a string
This function is the same as "ultoa_r", but it takes a signed value
as input.
2015-06-13 22:59:14 +02:00
Joris Dedieu
9dd44ba5d6 BUG/MEDIUM: compat: fix segfault on FreeBSD
Since commit 65d805fd, which removed standard.h from compat.h, some
values were not properly set on FreeBSD. This caused a segfault
at startup when smp_resolve_args is called.

As FreeBSD has IP_BINDANY, CONFIG_HAP_TRANSPARENT is defined. This
causes struct conn_src to be extended with some fields. The size of
this structure was incorrect. Including netinet/in.h fixes this issue.

While diving into the code preprocessing, I found that limits.h was
required to properly set MAX_HOSTNAME_LEN, ULONG_MAX, USHRT_MAX and
other system limits on FreeBSD.
2015-06-13 08:25:36 +02:00
Christopher Faulet
31af49d62b MEDIUM: ssl: Add options to forge SSL certificates
With this patch, it is possible to configure HAProxy to forge the SSL
certificate sent to a client using the SNI servername. We do it in the SNI
callback.

To enable this feature, you must pass the following bind options:

 * ca-sign-file <FILE> : this is the PEM file containing the CA certificate and
   the CA private key used to create and sign the server's certificates.

 * (optionally) ca-sign-pass <PASS> : this is the CA private key passphrase, if
   any.

 * generate-certificates : enables the dynamic generation of certificates for a
   listener.

Because generating certificates is expensive, there is an LRU cache to store
them. Its size can be customized by setting the global parameter
'tune.ssl.ssl-ctx-cache-size'.
2015-06-12 18:06:59 +02:00
Thierry FOURNIER
ddea626de4 MINOR: common: escape CSV strings
This function checks a string intended for use in a CSV output format. If
the string contains one of the following four characters <">, <,>, CR or LF,
the string is enclosed between <"> and any embedded <"> is escaped by a <"">
sequence.

The enclosing with <"> is optional. It can be disabled, forced, or left to
the function to choose automatically.
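
A condensed sketch of that quoting rule (hypothetical helper, not the
actual function from the patch):

  #include <stdio.h>
  #include <string.h>

  /* Write <in> to <out> as a CSV field: quote it if it contains one of
   * the four special characters, and double any embedded quote. */
  static void csv_escape(const char *in, FILE *out)
  {
      int quote = strpbrk(in, "\",\r\n") != NULL;

      if (quote)
          fputc('"', out);
      for (; *in; in++) {
          if (*in == '"')
              fputc('"', out); /* escape as "" */
          fputc(*in, out);
      }
      if (quote)
          fputc('"', out);
  }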
2015-05-28 17:47:19 +02:00
Willy Tarreau
98d0485a90 MAJOR: config: remove the deprecated reqsetbe / reqisetbe actions
These were already obsoleted in 1.4, marked for removal in 1.5,
and are no longer documented. They used to emit warnings, yet still
require quite some code to stay in place. Let's remove them now.
2015-05-26 12:18:29 +02:00
Willy Tarreau
f3045d2a06 MAJOR: pattern: add LRU-based cache on pattern matching
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.

Regarding the risk of collisions, 10k entries at 10k req/s mean a 1% risk
of a collision after 60 years, which is already much less than the memory's
reliability in most machines and more durable than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry from the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.

A test on a file containing 10k small regex showed that the regex
matching was limited to 6k/s instead of 70k with regular strings.
When enabling the LRU cache, the performance was back to 70k/s.
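
As a sketch of the validity check described above (field and function
names are illustrative, not those of the actual patch):

  #include <stdint.h>

  struct pat_cache_entry {
      uint64_t key;           /* hash of (seed, expression, input data) */
      const void *expr;       /* expression the cached result belongs to */
      unsigned int revision;  /* expression revision at insertion time */
      int result;             /* cached matching result */
  };

  /* An entry is only reused if the expression pointer and its current
   * revision still match, so results from deleted or updated patterns
   * are never returned. */
  static int entry_usable(const struct pat_cache_entry *e, uint64_t key,
                          const void *expr, unsigned int revision)
  {
      return e->key == key && e->expr == expr && e->revision == revision;
  }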
2015-04-29 19:15:24 +02:00
Willy Tarreau
e6e49cfa93 MINOR: tools: provide an rdtsc() function for time comparisons
This one returns a timestamp, either the one from the CPU or from
gettimeofday() in 64-bit format. The purpose is to be able to compare
timestamps on various entities to make it easier to detect updates.
It can also be used for benchmarking in certain situations during
development.
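
A hedged sketch of the idea (the actual implementation in src/standard.c
may differ):

  #include <stdint.h>
  #include <sys/time.h>

  static inline uint64_t my_rdtsc(void)
  {
  #if defined(__x86_64__)
      uint32_t lo, hi;

      /* read the CPU's 64-bit timestamp counter */
      __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  #else
      /* fallback: microseconds from gettimeofday() in 64-bit format */
      struct timeval tv;

      gettimeofday(&tv, NULL);
      return (uint64_t)tv.tv_sec * 1000000ULL + tv.tv_usec;
  #endif
  }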
2015-04-29 19:14:03 +02:00
Willy Tarreau
1b90511eeb CLEANUP: namespaces: fix protection against multiple inclusions
The include file did not protect correctly against multiple inclusions,
as it didn't define the guard macro after checking for it. That's currently
harmless as the file is only included from .c files, but that could change.
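
For reference, a correct guard defines the macro right after testing it;
the missing line was the #define (file and macro names below are
illustrative):

  #ifndef _PROTO_NAMESPACE_H
  #define _PROTO_NAMESPACE_H   /* this was the missing line */

  /* ... declarations ... */

  #endif /* _PROTO_NAMESPACE_H */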
2015-04-08 17:31:40 +02:00
Willy Tarreau
87b09668be REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session: it has buffers,
logs, etc.

In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.

The files stream.{c,h} were added and session.{c,h} removed.

The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.

Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.

Once all changes are completed, we should see approximately this :

   L7 - http_txn
   L6 - stream
   L5 - session
   L4 - connection | applet

There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.

Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-06 11:23:56 +02:00
Thierry FOURNIER
d2b597aa10 BUG/MEDIUM: lua: segfault with buffer_replace2
The function buffer_contig_space() returns the contiguous space available
to add data (at the end of the input side) while the function
hlua_channel_send_yield() needs to insert data starting at p. Here we
introduce a new function bi_space_for_replace() which returns the amount
of space that can be inserted at the head of the input side with one of
the buffer_replace* functions.

This patch proposes a function that returns the space available after buf->p.
2015-03-09 18:12:59 +01:00
Thierry FOURNIER
549aac8d0b MEDIUM: buffer: make bo_putblk/bo_putstr/bo_putchk return the number of bytes copied.
This is not used yet. Planned for LUA.
2015-02-28 23:12:32 +01:00
Thierry FOURNIER
58639a0ef3 MINOR: global: export function and permits to not resolve DNS names
This exports the commonly used function str2ip. The new function str2ip2
is created and makes it possible to skip DNS name resolution.
2015-02-28 23:12:32 +01:00
Nenad Merdanovic
05552d4b98 MEDIUM: Add support for configurable TLS ticket keys
Until now, the TLS ticket keys couldn't be configured and
shared between multiple instances or multiple servers running HAProxy.
The result was that if a request got a TLS ticket from one instance/server
and hit another one afterwards, it had to go through the full
SSL handshake and negotiation.

This patch enables adding a ticket file to the bind line, which will be
used for all SSL contexts created from that bind line. We can use the
same file on all instances or servers to mitigate this issue and have
consistent TLS tickets assigned. Clients will no longer have to negotiate
every time they change the handling process.

Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
2015-02-28 23:10:22 +01:00
Willy Tarreau
15a53a4384 MEDIUM: regex: add support for passing regex flags to regex_exec_match()
This function (and its sister regex_exec_match2()) abstracts the regex
execution but makes it impossible to pass flags to the regex engine.
Currently we don't use them but we'll need to support REG_NOTBOL soon
(to indicate that we're not at the beginning of a line). So let's add
support for this flag and update the API accordingly.
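
REG_NOTBOL itself is standard POSIX; a plain regexec() call is enough to
show its effect (a standalone demo, unrelated to HAProxy's wrapper):

  #include <regex.h>
  #include <stdio.h>

  int main(void)
  {
      regex_t re;

      regcomp(&re, "^foo", REG_EXTENDED);

      /* '^' may match at the start of the subject: matches */
      printf("%s\n", regexec(&re, "foobar", 0, NULL, 0) == 0
             ? "matches" : "no match");

      /* REG_NOTBOL: the subject is not a line start: no match */
      printf("%s\n", regexec(&re, "foobar", 0, NULL, REG_NOTBOL) == 0
             ? "matches" : "no match");

      regfree(&re);
      return 0;
  }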
2015-01-22 14:24:53 +01:00
Willy Tarreau
c829ee48c7 MINOR: hash: add new function hash_crc32
This function will be used to perform CRC32 computations. This one was
loosely inspired by the crc32b found here, and focuses on size and speed
at the same time:

    http://www.hackersdelight.org/hdcodetxt/crc.c.txt

Much faster table-based versions exist but are pointless for our usage
here; this hash already sustains gigabit speed which is far faster than
what we'd ever need. Better to preserve the CPU's cache instead.
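
In that spirit, a compact bit-at-a-time CRC32 looks like this (a sketch;
the actual hash_crc32() may differ in initialization or finalization):

  #include <stddef.h>
  #include <stdint.h>

  static uint32_t my_crc32(const void *buf, size_t len)
  {
      const unsigned char *p = buf;
      uint32_t crc = ~0U;

      while (len--) {
          crc ^= *p++;
          for (int i = 0; i < 8; i++)
              /* 0xEDB88320 is the reflected CRC-32 polynomial */
              crc = (crc >> 1) ^ (0xEDB88320U & -(crc & 1));
      }
      return ~crc;
  }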
2015-01-20 19:48:05 +01:00
Willy Tarreau
d025648f7c MAJOR: init: automatically set maxconn and/or maxsslconn when possible
If a memory size limit is enforced using "-n" on the command line and
one or both of maxconn / maxsslconn are not set, instead of using the
build-time values, haproxy now computes the number of sessions that can
be allocated depending on a number of parameters among which :

  - global.maxconn (if set)
  - global.maxsslconn (if set)
  - maxzlibmem
  - tune.ssl.cachesize
  - presence of SSL in at least one frontend (bind lines)
  - presence of SSL in at least one backend (server lines)
  - tune.bufsize
  - tune.cookie_len

The purpose is to ensure that haproxy will not run out of memory
when maxing out all parameters. If neither maxconn nor maxsslconn are
used, it will consider that 100% of the sessions involve SSL on sides
where it's supported. That means that it will typically optimize maxconn
for SSL offloading or SSL bridging on all connections. This generally
means that the simple act of enabling SSL in a frontend or in a backend
will significantly reduce the global maxconn but in exchange of that, it
will guarantee that it will not fail.

All metrics may be enforced using #defines to accommodate variations in
SSL libraries or various allocation sizes.
2015-01-15 21:45:22 +01:00
Willy Tarreau
d92aa5c44a MINOR: global: report information about the cost of SSL connections
An SSL connection takes some memory when it exists and during handshakes.
We measured up to 16kB for an established endpoint, and up to 76 extra kB
during a handshake. The SSL layer stores these values into the global
struct during initialization. If other SSL libs are used, it's easy to
change these values. Anyway they'll only be used as gross estimates in
order to guess the max number of SSL conns that can be established when
memory is constrained and the limit is not set.
2015-01-15 21:34:39 +01:00
Willy Tarreau
3ca1a883f9 MINOR: tools: add new round_2dig() function to round integers
This function rounds down an integer to the closest value having only
2 significant digits.
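
An illustrative implementation (my_round_2dig is a stand-in name; the
real function may differ):

  /* Round <i> down to the closest value with only 2 significant digits:
   * e.g. 1234 -> 1200, 98765 -> 98000. */
  static unsigned int my_round_2dig(unsigned int i)
  {
      unsigned int mul = 1;

      while (i >= 100) {
          i /= 10;
          mul *= 10;
      }
      return i * mul;
  }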
2015-01-15 19:02:27 +01:00
Willy Tarreau
3889fffe92 MINOR: channel: rename channel_full() to !channel_may_recv()
This function's name was poorly chosen and is confusing to the point of
being suspiciously used at some places. The operations it does always
consider the ability to forward pending input data before receiving new
data. This is not obvious at all, especially at some places where it was
used when consuming outgoing data to know if the buffer has any chance
to ever get the missing data. The code needs to be re-audited with that
in mind. Care must be taken with existing code since the polarity of the
function was switched with the renaming.
2015-01-14 18:41:33 +01:00
Willy Tarreau
75abcb3106 MINOR: config: extend the default max hostname length to 64 and beyond
Some users reported that the default max hostname length of 32 is too
short in some environments. This patch does two things :

  - it relies on the system's max hostname length as found in MAXHOSTNAMELEN
    if it is set. This is the most logical thing to do as the system libs
    generally present the appropriate value supported by the system. This
    value is 64 on Linux and 256 on Solaris, to give a few examples.

  - otherwise it defaults to 64

It is still possible to override this value by defining MAX_HOSTNAME_LEN at
build time. After some observation time, this patch may be backported to
1.5 if it does not cause any build issue, as it is harmless and may help
some users.
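
The resulting logic amounts to something like this (a sketch of the
fallback order; the actual file layout may differ):

  #include <sys/param.h>      /* may define MAXHOSTNAMELEN */

  #ifndef MAX_HOSTNAME_LEN    /* build-time override wins */
  #ifdef MAXHOSTNAMELEN
  #define MAX_HOSTNAME_LEN MAXHOSTNAMELEN
  #else
  #define MAX_HOSTNAME_LEN 64
  #endif
  #endif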
2015-01-14 11:52:34 +01:00
Willy Tarreau
a24adf0795 MAJOR: session: only wake up as many sessions as available buffers permit
We've already experimented with three wake up algorithms when releasing
buffers : the first naive one used to wake up far too many sessions,
causing many of them not to get any buffer. The second approach which
was still in use prior to this patch consisted in waking up either 1
or 2 sessions depending on the number of FDs we had released. And this
was still inaccurate. The third one tried to cover the accuracy issues
of the second and took into consideration the number of FDs the sessions
would be willing to use, but most of the time we ended up waking up too
many of them for nothing, or deadlocking by lack of buffers.

This patch completely removes the need to allocate two buffers at once.
Instead it splits allocations into critical and non-critical ones and
implements a reserve in the pool for this. The deadlock situation happens
when all buffers are allocated for requests pending in a maxconn-limited
server queue, because then there's no more way to allocate buffers for
responses, and these responses are critical to release the server's
connection in order to release the pending requests. In fact maxconn on
a server creates a dependence between sessions and particularly between
the oldest session's responses and the latest session's requests. Thus, it is
mandatory to get a free buffer for a response in order to release a
server connection, which will in turn permit a request buffer to be released.

Since we definitely have non-symmetrical buffers, we need to implement
this logic in the buffer allocation mechanism. What this commit does is
implement a reserve of buffers which can only be allocated for responses
and that will never be allocated for requests. This is made possible by
the requester indicating how much margin it wants to leave after the
allocation succeeds. Thus it is a cooperative allocation mechanism : the
requester (process_session() in general) prefers not to get a buffer in
order to respect other's need for response buffers. The session management
code always knows if a buffer will be used for requests or responses, so
that is not difficult :

  - either there's an applet on the initiator side and we really need
    the request buffer (since currently the applet is called in the
    context of the session)

  - or we have a connection and we really need the response buffer (in
    order to support building and sending an error message back)

This reserve ensures that we don't take all allocatable buffers for
requests waiting in a queue. The downside is that all the extra buffers
are really allocated to ensure they can be allocated. But with small
values it is not an issue.

With this change, we don't observe any more deadlocks even when running
with maxconn 1 on a server under severely constrained memory conditions.

The code becomes a bit tricky, it relies on the scheduler's run queue to
estimate how many sessions are already expected to run so that it doesn't
wake up everyone with too few resources. A better solution would probably
consist in having two queues, one for urgent requests and one for normal
requests. A failed allocation for a session dealing with an error, a
connection event, or the need for a response (or request when there's an
applet on the left) would go to the urgent request queue, while other
requests would go to the other queue. Urgent requests would be served
from 1 entry in the pool, while the regular ones would be served only
according to the reserve. Despite not yet having this, it works
remarkably well.

This mechanism is quite efficient; we don't perform too many wake up calls
anymore. For 1 million sessions elapsed during massive memory contention,
we observe about 4.5M calls to process_session() compared to 4.0M without
memory constraints. Previously we used to observe up to 16M calls, which
roughly means 12M failures.

During a test run under high memory constraints (limit enforced to 27 MB
instead of the 58 MB normally needed), performance used to drop by 53% prior
to this patch. Now with this patch instead it *increases* by about 1.5%.

The best effect of this change is that by limiting the memory usage to about
2/3 to 3/4 of what is needed by default, it's possible to increase performance
by up to about 18% mainly due to the fact that pools are reused more often
and remain hot in the CPU cache (observed on regular HTTP traffic with 20k
objects, buffers.limit = maxconn/10, buffers.reserve = limit/2).

Below is an example of a scenario which used to cause a deadlock previously :
  - connection is received
  - two buffers are allocated in process_session() then released
  - one is allocated when receiving an HTTP request
  - the second buffer is allocated then released in process_session()
    for request parsing then connection establishment.
  - poll() says we can send, so the request buffer is sent and released
  - process session gets notified that the connection is now established
    and allocates two buffers then releases them
  - all other sessions do the same till one cannot get the request buffer
    without hitting the margin
  - and now the server responds. stream_interface allocates the response
    buffer and manages to get it since it's higher priority being for a
    response.
  - but process_session() cannot allocate the request buffer anymore

  => We could end up with all buffers used by responses so that none may
     be allocated for a request in process_session().

When the applet processing leaves the session context, the test will have
to be changed so that we always allocate a response buffer regardless of
the left side (eg: H2->H1 gateway). A final improvement would consist in
being able to only retry the failed I/O operation without waking up a
task, but to date all experiments to achieve this have proven not to be
reliable enough.
2014-12-24 23:47:33 +01:00
Willy Tarreau
f4718e8ec0 MEDIUM: buffer: implement b_alloc_margin()
This function is used to allocate a buffer and ensure that we leave
some margin after it in the pool. The function is not obvious. While
we allocate only one buffer, we want to ensure that at least two remain
available after our allocation. The purpose is to ensure we'll never
enter a deadlock where all sessions allocate exactly one buffer, and
none of them will be able to allocate the second buffer needed to build
a response in order to release the first one.

We also take care of remaining fast in the fast path by first
checking whether or not there is enough margin, in which case we only
rely on b_alloc_fast() which is guaranteed to succeed. Otherwise we
take the slow path using pool_refill_alloc().
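
A hedged sketch of that two-path logic, using stand-in types and names
(the real code relies on pool2_buffers and the pool API reworked in the
memory commits further down this log):

  /* Stand-in pool with the two counters the margin check needs. */
  struct pool {
      unsigned int allocated;   /* total entries created */
      unsigned int used;        /* entries currently handed out */
  };

  extern struct pool buf_pool;
  extern void *pool_get_first(struct pool *p);           /* fast, may fail */
  extern void *pool_refill_alloc(struct pool *p, int n); /* slow, mallocs */

  static void *my_b_alloc_margin(int margin)
  {
      /* fast path: enough free entries remain after our allocation */
      if (buf_pool.allocated - buf_pool.used > (unsigned int)margin)
          return pool_get_first(&buf_pool);

      /* slow path: refill the pool while preserving the margin */
      return pool_refill_alloc(&buf_pool, margin + 1);
  }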
2014-12-24 23:47:32 +01:00
Willy Tarreau
620bd6c88e MINOR: buffer: implement b_alloc_fast()
This function allocates a buffer and replaces *buf with this buffer. If
no memory is available, &buf_wanted is used instead. No control is made
to check if *buf already pointed to another buffer. The allocated buffer
is returned, or NULL in case no memory is available. The difference with
b_alloc() is that this function only picks from the pool and never calls
malloc(), so it can fail even if some memory is available. It is the
caller's job to refill the buffer pool if needed.
2014-12-24 23:47:32 +01:00
Willy Tarreau
4428a29e52 MEDIUM: channel: do not report full when buf_empty is present on a channel
Till now we'd consider a buffer full even if it had size==0 due to pointing
to buf.size. Now we change this : if buf_wanted is present, it means that we
have already tried to allocate a buffer but failed. Thus the buffer must be
considered full so that we stop trying to poll for reads on it. Otherwise if
it's empty, it's buf_empty and we report !full since we may allocate it on
the fly.
2014-12-24 23:47:32 +01:00
Willy Tarreau
f2f7d6b27b MEDIUM: buffer: add a new buf_wanted dummy buffer to report failed allocations
Doing so ensures that even when no memory is available, we leave the
channel in a sane condition. There's a special case in proto_http.c
regarding the compression, we simply pre-allocate the tmpbuf to point
to the dummy buffer. Not reusing &buf_empty for this allows the rest
of the code to differentiate an empty buffer that's not used from an
empty buffer that results from a failed allocation, which has the same
semantics as a full buffer.
2014-12-24 23:47:32 +01:00
Willy Tarreau
2a4b54359b MEDIUM: buffer: always assign a dummy empty buffer to channels
Channels are now created with a valid pointer to a buffer before the
buffer is allocated. This buffer is a global one called "buf_empty" and
of size zero. Thus it prevents any activity from being performed on
the buffer and still ensures that chn->buf may always be dereferenced.
b_free() also resets the buffer to &buf_empty, and was split into
b_drop() which does not reset the buffer.
2014-12-24 23:47:32 +01:00
Willy Tarreau
7dfca9daec MINOR: buffer: only use b_free to release buffers
We don't call pool_free2(pool2_buffers) anymore, we only call b_free()
to do the job. This ensures that we can start to centralize the releasing
of buffers.
2014-12-24 23:47:32 +01:00
Willy Tarreau
e583ea583a MEDIUM: buffer: use b_alloc() to allocate and initialize a buffer
b_alloc() now allocates a buffer and initializes it to the size specified
in the pool minus the size of the struct buffer itself. This ensures that
callers do not need to care about buffer details anymore. Also this never
applies memory poisoning, which is slow and useless on buffers.
2014-12-24 23:47:32 +01:00
Willy Tarreau
474cf54a97 MINOR: buffer: reset a buffer in b_reset() and not channel_init()
We'll soon need to be able to switch buffers without touching the
channel, so let's move buffer initialization out of channel_init().
We had the same in compression.c.
2014-12-24 23:47:31 +01:00
Willy Tarreau
a885f6dc65 MEDIUM: memory: improve pool_refill_alloc() to pass a refill count
Till now this function would only allocate one entry at a time. But with
dynamic buffers we'd like to allocate the number of missing entries to
properly refill the pool.

Let's modify it to take a minimum amount of available entries. This means
that when we know we need at least a number of available entries, we can
ask to allocate all of them at once. It also ensures that we don't move
the pointers back and forth between the caller and the pool, and that we
don't call pool_gc2() for each failed malloc. Instead, it's called only
once and the malloc is only allowed to fail once.
2014-12-24 23:47:31 +01:00
Willy Tarreau
0262241e26 MINOR: memory: cut pool allocator in 3 layers
pool_alloc2() used to pick the entry from the pool, fall back to
pool_refill_alloc(), and to perform the poisoning itself, which
pool_refill_alloc() was also doing. While this led to optimal
code size, it imposed memory poisoning on the buffers as well,
which is extremely slow on large buffers.

This patch cuts the allocator in 3 layers :
  - a layer to pick the first entry from the pool without falling back to
    pool_refill_alloc() : pool_get_first()
  - a layer to allocate a dirty area by falling back to pool_refill_alloc()
    but never performing the poisoning : pool_alloc_dirty()
  - pool_alloc2() which calls the latter and optionally poisons the area

No functional changes were made.
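
A sketch of how the three layers compose (flag name and poison byte are
illustrative):

  #include <string.h>

  #define MEM_F_POISON 0x1    /* illustrative flag */

  struct pool {
      unsigned int size;
      unsigned int flags;
      /* ... free list, counters ... */
  };

  extern void *pool_get_first(struct pool *p);           /* layer 1 */
  extern void *pool_refill_alloc(struct pool *p, int n);

  /* layer 2: allocate a dirty area, never poison */
  static void *my_pool_alloc_dirty(struct pool *p)
  {
      void *ptr = pool_get_first(p);

      return ptr ? ptr : pool_refill_alloc(p, 1);
  }

  /* layer 3: allocate and optionally poison */
  static void *my_pool_alloc2(struct pool *p)
  {
      void *ptr = my_pool_alloc_dirty(p);

      if (ptr && (p->flags & MEM_F_POISON))
          memset(ptr, 0x50, p->size);   /* poison pattern is illustrative */
      return ptr;
  }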
2014-12-24 23:47:31 +01:00
Willy Tarreau
e430e77dfd CLEANUP: memory: replace macros pool_alloc2/pool_free2 with functions
Using inline functions here makes the code more readable and reduces its
size by about 2 kB.
2014-12-24 23:47:31 +01:00
Willy Tarreau
62405a2155 CLEANUP: memory: remove dead code
The very old pool management code has not been used for the last 7 years
and is still polluting the file. Get rid of it now.
2014-12-24 23:47:31 +01:00
Willy Tarreau
3dd717cd5d CLEANUP: lists: remove dead code
Remove the code dealing with the old dual-linked lists imported from
librt that has remained unused for the last 8 years. Now everything
uses the linux-like circular lists instead.
2014-12-24 23:47:31 +01:00
Willy Tarreau
23a5c396ec DEBUG: pools: apply poisoning on every allocated pool
Till now, when memory poisoning was enabled, it used to be done only
after a calloc(). But sometimes it's not enough to detect unexpected
sharing, so let's ensure that we now poison every allocation once it's
in place. Note that enabling poisoning significantly hurts performance
(it can typically halve the overall performance).
2014-11-25 13:48:43 +01:00
KOVACS Krisztian
b3e54fe387 MAJOR: namespace: add Linux network namespace support
This patch makes it possible to create binds and servers in separate
namespaces.  This can be used to proxy between multiple completely independent
virtual networks (with possibly overlapping IP addresses) and a
non-namespace-aware proxy implementation that supports the proxy protocol (v2).

The setup is something like this:

net1 on VLAN 1 (namespace 1) -\
net2 on VLAN 2 (namespace 2) -- haproxy ==== proxy (namespace 0)
net3 on VLAN 3 (namespace 3) -/

The proxy is configured to make server connections through haproxy, sending
the expected source/target addresses to haproxy using the proxy protocol.

The network namespace setup on the haproxy node is something like this:

= 8< =
$ cat setup.sh
ip netns add 1
ip link add link eth1 type vlan id 1
ip link set eth1.1 netns 1
ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
ip netns exec 1 ip link set eth1.$id up
...
= 8< =

= 8< =
$ cat haproxy.cfg
frontend clients
  bind 127.0.0.1:50022 namespace 1 transparent
  default_backend scb

backend server
  mode tcp
  server server1 192.168.122.4:2222 namespace 2 send-proxy-v2
= 8< =

A bind line creates the listener in the specified namespace, and connections
originating from that listener also have their network namespace set to
that of the listener.

A server line either forces the connection to be made in a specified
namespace or may use the namespace from the client-side connection if that
was set.

For more details please read the documentation included in the patch
itself.

Signed-off-by: KOVACS Tamas <ktamas@balabit.com>
Signed-off-by: Sarkozi Laszlo <laszlo.sarkozi@balabit.com>
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
2014-11-21 07:51:57 +01:00
Christian Ruppert
de898712a0 MEDIUM: regex: Use pcre_study always when PCRE is used, regardless of JIT
pcre_study() had been around long before JIT was added. It also seems to
affect the performance in some cases (positively).

Below I've attached some test results. The test is based on
http://sljit.sourceforge.net/regex_perf.html (see bottom). It has been modified
to just test pcre_study vs. no pcre_study. Note: this test does not try to
match a specific header; it is instead run over a larger text with more and
less complex patterns to make the differences clearer.

% ./runtest
'mark.txt' loaded. (Length: 19665221 bytes)
-----------------
Regex: 'Twain'
[pcre-nostudy] time:    14 ms (2388 matches)
[pcre-study] time:    21 ms (2388 matches)
-----------------
Regex: '^Twain'
[pcre-nostudy] time:   109 ms (100 matches)
[pcre-study] time:   109 ms (100 matches)
-----------------
Regex: 'Twain$'
[pcre-nostudy] time:    14 ms (127 matches)
[pcre-study] time:    16 ms (127 matches)
-----------------
Regex: 'Huck[a-zA-Z]+|Finn[a-zA-Z]+'
[pcre-nostudy] time:   695 ms (83 matches)
[pcre-study] time:    26 ms (83 matches)
-----------------
Regex: 'a[^x]{20}b'
[pcre-nostudy] time:    90 ms (12495 matches)
[pcre-study] time:    91 ms (12495 matches)
-----------------
Regex: 'Tom|Sawyer|Huckleberry|Finn'
[pcre-nostudy] time:  1236 ms (3015 matches)
[pcre-study] time:    34 ms (3015 matches)
-----------------
Regex: '.{0,3}(Tom|Sawyer|Huckleberry|Finn)'
[pcre-nostudy] time:  5696 ms (3015 matches)
[pcre-study] time:  5655 ms (3015 matches)
-----------------
Regex: '[a-zA-Z]+ing'
[pcre-nostudy] time:  1290 ms (95863 matches)
[pcre-study] time:  1167 ms (95863 matches)
-----------------
Regex: '^[a-zA-Z]{0,4}ing[^a-zA-Z]'
[pcre-nostudy] time:   136 ms (4507 matches)
[pcre-study] time:   134 ms (4507 matches)
-----------------
Regex: '[a-zA-Z]+ing$'
[pcre-nostudy] time:  1334 ms (5360 matches)
[pcre-study] time:  1214 ms (5360 matches)
-----------------
Regex: '^[a-zA-Z ]{5,}$'
[pcre-nostudy] time:   198 ms (26236 matches)
[pcre-study] time:   197 ms (26236 matches)
-----------------
Regex: '^.{16,20}$'
[pcre-nostudy] time:   173 ms (4902 matches)
[pcre-study] time:   175 ms (4902 matches)
-----------------
Regex: '([a-f](.[d-m].){0,2}[h-n]){2}'
[pcre-nostudy] time:  1242 ms (68621 matches)
[pcre-study] time:   690 ms (68621 matches)
-----------------
Regex: '([A-Za-z]awyer|[A-Za-z]inn)[^a-zA-Z]'
[pcre-nostudy] time:  1215 ms (675 matches)
[pcre-study] time:   952 ms (675 matches)
-----------------
Regex: '"[^"]{0,30}[?!\.]"'
[pcre-nostudy] time:    27 ms (5972 matches)
[pcre-study] time:    28 ms (5972 matches)
-----------------
Regex: 'Tom.{10,25}river|river.{10,25}Tom'
[pcre-nostudy] time:   705 ms (2 matches)
[pcre-study] time:    68 ms (2 matches)

In some cases it's more or less the same, but when it is faster, it is by a
huge margin. It always depends on the pattern, the string(s) to match
against, etc.

Signed-off-by: Christian Ruppert <c.ruppert@babiel.com>
2014-11-18 13:26:18 +01:00
Thierry FOURNIER
317e1c4f1e MINOR: sample: add "json" converter
This converter escapes a string so that it can be used as a json/ascii
escaped string. It can read UTF-8 with different behaviors on errors and
encode the result in json/ascii.

json([<input-code>])
  Escapes the input string and produces an ASCII output string ready to use as a
  JSON string. The converter tries to decode the input string according to the
  <input-code> parameter. It can be "ascii", "utf8", "utf8s", "utf8p" or
  "utf8ps". The "ascii" decoder never fails. The "utf8" decoder detects 3 types
  of errors:
   - bad UTF-8 sequence (lone continuation byte, bad number of continuation
     bytes, ...)
   - invalid range (the decoded value is within a UTF-8 prohibited range),
   - code overlong (the value is encoded with more bytes than necessary).

  The UTF-8 JSON encoding can produce a "too long value" error when the UTF-8
  character is greater than 0xffff because the JSON string escape specification
  only authorizes 4 hex digits for the value encoding. The UTF-8 decoder exists
  in 4 variants designated by a combination of two suffix letters : "p" for
  "permissive" and "s" for "silently ignore". The behaviors of the decoders
  are :
   - "ascii"  : never fails ;
   - "utf8"   : fails on any detected errors ;
   - "utf8s"  : never fails, but removes characters corresponding to errors ;
   - "utf8p"  : accepts and fixes the overlong errors, but fails on any other
                error ;
   - "utf8ps" : never fails, accepts and fixes the overlong errors, but removes
                characters corresponding to the other errors.

  This converter is particularly useful for building properly escaped JSON for
  logging to servers which consume JSON-formatted traffic logs.

  Example:
     capture request header user-agent len 150
     capture request header Host len 15
     log-format {"ip":"%[src]","user-agent":"%[capture.req.hdr(1),json]"}

  Input request from client 127.0.0.1:
     GET / HTTP/1.0
     User-Agent: Very "Ugly" UA 1/2

  Output log:
     {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}
2014-10-26 06:41:12 +01:00
Willy Tarreau
3986b9c140 MEDIUM: config: report it when tcp-request rules are misplaced
A config where a tcp-request rule appears after an http-request rule
might seem valid but it is not. So let's report a warning about this
since this case is hard to detect with the naked eye.
2014-09-16 15:43:24 +02:00
Willy Tarreau
edee1d60b7 MEDIUM: stick-table: make it easier to register extra data types
Some users want to add their own data types to stick tables. We don't
want to use a linked list here for performance reasons, so we need to
continue to use an indexed array. This patch allows one to reserve a
compile-time-defined number of extra data types by setting the new
macro STKTABLE_EXTRA_DATA_TYPES to anything greater than zero, keeping
in mind that anything larger will slightly inflate the memory consumed
by stick tables (not per entry though).

Then calling stktable_register_data_store() with the new keyword will
either register a new keyword or fail if the desired entry was already
taken or the keyword already registered.

Note that this patch does not dictate how the data will be used, it only
offers the possibility to create new keywords and have an index to
reference them in the config and in the tables. The caller will not be
able to use stktable_data_cast() and will have to explicitly cast the
table pointers to the expected types. It can be used for experimentation
as well.
2014-07-15 19:14:52 +02:00
Willy Tarreau
65d805fdfc BUILD: fix dependencies between config and compat.h
compat.h only depends on the system, and config needs compat, not the
opposite. global.h was fixed to explicitly include standard.h for LONGBITS.
2014-07-15 19:09:36 +02:00
Dan Dubovik
bd57a9f977 BUG/MEDIUM: backend: Update hash to use unsigned int throughout
When we were generating a hash, it was done using an unsigned long. When the hash was used
to select a backend, it was sent as an unsigned int. This made it difficult to predict which
backend would be selected.

This patch updates get_hash and the hash methods to use an unsigned int, to remain consistent
throughout the codebase.

This fix should be backported to 1.5 and probably in part to 1.4.
2014-07-08 22:00:21 +02:00
Willy Tarreau
4e957907aa MINOR: log: make MAX_SYSLOG_LEN overridable at build time
This value was set in log.h without any #ifndef around, so when one
wanted to change it, a patch was needed. Let's move it to defaults.h
with the usual #ifndef so that it's easier to change it.
2014-06-27 18:13:53 +02:00
Simon Horman
98637e5bff MEDIUM: Add external check
Add an external check which makes use of an external process to
check the status of a server.
2014-06-20 07:10:07 +02:00
Emeric Brun
c8b27b6c68 MEDIUM: ssl: add 300s supported time skew on OCSP response update.
OCSP_MAX_RESPONSE_TIME_SKEW can be set to a different value at
compilation (default is 300 seconds).
2014-06-19 14:37:30 +02:00
Emeric Brun
4147b2ef10 MEDIUM: ssl: basic OCSP stapling support.
The support is all based on static responses. This doesn't add any
request / response logic to HAProxy, but allows a way to update
information through the socket interface.

Currently certificates specified using "crt" or "crt-list" on "bind" lines
are loaded as PEM files.
For each PEM file, haproxy checks for the presence of a file at the same path
suffixed by ".ocsp". If such a file is found, support for the TLS Certificate
Status Request extension (also known as "OCSP stapling") is automatically
enabled. The content of this file is optional. If not empty, it must contain
a valid OCSP Response in DER format. In order to be valid an OCSP Response
must comply with the following rules: it has to indicate a good status,
it has to be a single response for the certificate of the PEM file, and it
has to be valid at the moment of addition. If these rules are not respected
the OCSP Response is ignored and a warning is emitted. In order to identify
which certificate an OCSP Response applies to, the issuer's certificate is
necessary. If the issuer's certificate is not found in the PEM file, it will
be loaded from a file at the same path as the PEM file suffixed by ".issuer"
if it exists, otherwise it will fail with an error.

It is possible to update an OCSP Response from the unix socket using:

  set ssl ocsp-response <response>

This command is used to update an OCSP Response for a certificate (see "crt"
on "bind" lines). Same controls are performed as during the initial loading of
the response. The <response> must be passed as a base64 encoded string of the
DER encoded response from the OCSP server.

Example:
  openssl ocsp -issuer issuer.pem -cert server.pem \
               -host ocsp.issuer.com:80 -respout resp.der
  echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
               socat stdio /var/run/haproxy.stat

This feature is automatically enabled on openssl 0.9.8h and above.

This work was performed jointly by Dirkjan Bussink of GitHub and
Emeric Brun of HAProxy Technologies.
2014-06-18 18:28:56 +02:00