Commit Graph

232 Commits

Christopher Faulet
f7e4e7e096 MAJOR: spoe: Add an experimental Stream Processing Offload Engine
SPOE makes it possible to communicate with external components to retrieve
information, using an in-house binary protocol, the Stream Processing Offload
Protocol (SPOP). In the long term, its aim is to allow any kind of offloading
on the streams. This first version, besides being experimental, doesn't do much
yet. The most important goal today is to validate the protocol design and lay
the foundations of what will, one day, be a full offload engine for stream
processing.

For now, the SPOE can offload the stream processing before "tcp-request
content", "tcp-response content", "http-request" and "http-response" rules, and
it only supports variable creation/removal. But, in spite of these limited
features, one can easily imagine implementing an SSO solution, an IP reputation
service or an IP geolocation service.

Internally, the SPOE is implemented as a filter. So, to use it, you must add
the following line in a proxy section:

  frontend my-front
      ...
      filter spoe [engine <name>] config <file>
      ...

It uses its own configuration file to keep the HAProxy configuration clean.
This also makes it easy to disable by commenting out the filter line.

See "doc/SPOE.txt" for all details about the SPOE configuration.
2016-11-09 22:57:01 +01:00
Dirkjan Bussink
1866d6d8f1 MEDIUM: ssl: Add support for OpenSSL 1.1.0
In the last release, a lot of the structures have become opaque to the end
user. This means the code using them needs to be changed to use the proper
accessor functions to interact with these structures instead of trying to
manipulate them directly.

This does not yet fix any of the deprecations that are part of 1.1.0; it only
ensures that the code can be compiled against that version and is still
compatible with older ones.
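
As a rough illustration of the kind of change involved (a sketch only, not
taken from the patch; the DH parameter setup is just one example of an
affected spot):

#include <openssl/opensslv.h>
#include <openssl/dh.h>
#include <openssl/bn.h>

/* Sketch: with OpenSSL >= 1.1.0 the DH structure is opaque, so its fields
 * must be set through the accessor added in that release instead of being
 * assigned directly as on older versions. */
static int set_dh_params(DH *dh, BIGNUM *p, BIGNUM *g)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
    return DH_set0_pqg(dh, p, NULL, g);   /* returns 1 on success */
#else
    dh->p = p;                            /* members are still visible */
    dh->g = g;
    return 1;
#endif
}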

[wt: openssl-0.9.8 doesn't build with it, there are conflicts on certain
     function prototypes which we declare as inline here and which are
     defined differently there. But openssl-0.9.8 is not supported anymore
     so probably it's OK to go without it for now and we'll see later if
     some users still need it. Emeric has reviewed this change and didn't
     spot anything obvious which requires special care. Let's try it for
     real now]
2016-11-08 20:54:41 +01:00
scientiamobile
d0027ed5b1 MEDIUM: wurfl: add Scientiamobile WURFL device detection module
WURFL is a high-performance and low-memory footprint mobile device
detection software component that can quickly and accurately detect
over 500 capabilities of visiting devices. It can differentiate between
portable mobile devices, desktop devices, SmartTVs and any other types
of devices on which a web browser can be installed.

In order to add WURFL device detection support, you need to download
the Scientiamobile InFuze C API and install it on your system. Refer to
www.scientiamobile.com to obtain a valid InFuze license.

Useful information on how to configure HAProxy to work with WURFL may
be found in:

  doc/WURFL-device-detection.txt
  doc/configuration.txt
  examples/wurfl-example.cfg

Please find more information about the WURFL device detection API
at https://docs.scientiamobile.com/documentation/infuze/infuze-c-api-user-guide
2016-11-08 14:21:43 +01:00
Bertrand Jacquin
3a2661d6b4 MINOR: build: Allow linking to device-atlas library file
DeviceAtlas might be installed in a location where a user might not have
enough permissions to write json.o and dac.o
2016-10-25 22:15:22 +02:00
Daniel Jakots
9705ba2981 BUILD: Make use of accept4() on OpenBSD.
OpenBSD >= 5.7 supports accept4(). Older versions are not supported
anymore anyway.

Patch originally from Brad Smith.
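
For reference, a minimal sketch (not part of the patch) of why accept4() is
preferred: it returns the accepted socket with the desired flags already set,
in a single syscall, instead of accept() followed by fcntl():

#include <sys/types.h>
#include <sys/socket.h>

/* Sketch: accept a connection with non-blocking and close-on-exec set
 * atomically, avoiding a separate fcntl() round-trip after accept(). */
static int accept_conn(int listen_fd)
{
    struct sockaddr_storage addr;
    socklen_t len = sizeof(addr);

    return accept4(listen_fd, (struct sockaddr *)&addr, &len,
                   SOCK_NONBLOCK | SOCK_CLOEXEC);
}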
2016-10-20 16:01:53 +02:00
Dinko Korunic
7276f3aa3d BUG/MINOR: Fix OSX compilation errors
SOL_IPV6 is not defined on OSX, breaking the build. Also, libcrypt is
not available for installation either in MacPorts or as a Brew recipe,
so we're disabling the implicit dependency.

Signed-off-by: Dinko Korunic <dinko.korunic@gmail.com>
2016-09-11 08:04:37 +02:00
Willy Tarreau
13d67bbef3 BUG/BUILD: don't automatically run "make" on "make install"
Kay Fuchs reported that the recent changes to automatically rebuild files
on config option changes caused "make install" to rebuild the whole code
with the wrong options. That's caused by the fact that the "install-bin"
target depends on the "haproxy" target, which detects the lack of options
and causes a rebuild with different ones.

This patch makes a simple change: it removes this automatic dependency, which
was already wrong since it could cause some files to be built with different
options even prior to these changes, and instead emits an error message
indicating that "make" should be run prior to "make install".

The patches were backported into 1.6 so this fix must go there as well.
2016-06-24 18:34:13 +02:00
Willy Tarreau
8225bb4577 BUILD/MEDIUM: force a full rebuild if some build options change
We now instrument the makefile to keep a copy of previous build options.
The goal is to ensure that we'll rebuild everything when build options
change. The options that are watched are TARGET, VERBOSE_CFLAGS, and
BUILD_OPTIONS. These ones are copied into a file ".build_opts" and
compared to the new ones upon each build. This file is referenced in
the DEP variable which all .o files depend on, and it depends on the
code which updates it only upon changes. This ensures that a new file
is regenerated and detected upon change and that everything is rebuilt.
2016-06-07 14:45:44 +02:00
Willy Tarreau
b26835db3b BUILD/MEDIUM: rebuild everything when an include file is changed
Some users tend to get caught by incorrect builds when they try patches
that modify some include file after they forget to run "make clean".
While we can't blame users who are not developers, forcing developers
to rely on a painful autodepend is not nice either and will cause them
to test their changes less often. Here we propose a reasonable tradeoff.
This patch introduces a new "INCLUDES" variable which enumerates all
the ".h" files and sets them as a build dependency for all ".o" files.
This list is then copied into a "DEP" variable which can safely be
overridden if desired. This way by default all .c files are rebuilt if
any include file changes. This is the safe method for all users. And
developers can simply add "DEP=" to their quick build scripts to keep
the old fast and efficient behaviour.
2016-06-07 14:45:44 +02:00
Thierry Fournier
85dc1d3995 BUG/MINOR: lua: can't load external libraries
Loading external libraries requires the export of the embedded Lua symbols. If
a library is loaded by HAProxy or by a Lua program, an error like the following
is raised:

   [ALERT] 085/135722 (7224) : parsing [test.cfg:8] : lua runtime error: error loading module 'test' from file './test.so':
        ./test.so: undefined symbol: lua_createtable

This patch modifies the Makefile to allow the export of the Lua symbols.

This patch must be backported to version 1.6.
2016-03-30 15:20:19 +02:00
Thierry Fournier
fb0b5467ca MINOR: lua: file dedicated to unsafe functions
When Lua executes functions from its API, these can throw an error. These
functions must be executed in a special environment which catches these errors,
otherwise a critical error (like a segfault) can be raised.

This patch adds a C file called "hlua_fcn.c" which collects all the Lua/C
functions needing a safe environment for their execution.
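
A minimal sketch of the pattern (illustrative only, with hypothetical function
names, not the actual hlua code): the unsafe work is wrapped in a C function
which is run through lua_pcall() so any Lua error is caught instead of
aborting the process:

#include <lua.h>

/* may raise a Lua error (e.g. on out-of-memory) */
static int do_unsafe_work(lua_State *L)
{
    lua_createtable(L, 0, 4);
    return 0;
}

/* run do_unsafe_work() in protected mode and catch any error */
static int run_safely(lua_State *L)
{
    lua_pushcfunction(L, do_unsafe_work);
    if (lua_pcall(L, 0, 0, 0) != LUA_OK) {
        lua_pop(L, 1);  /* drop the error message */
        return -1;
    }
    return 0;
}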
2016-02-12 11:08:53 +01:00
Christopher Faulet
e6c3b69be0 MINOR: filters: Add a filter example
The "trace" filter has been added. It defines all available callbacks and for
each one it prints a trace message. To enable it:

  listener test
      ...
      filter trace
      ...
2016-02-09 14:53:15 +01:00
Christopher Faulet
3d97c90974 REORG: filters: Prepare creation of the HTTP compression filter
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved into a dedicated file. The idea is to keep everything
about compression algorithms in compression.c and everything related to the
filtering in flt_http_comp.c.

For now, a header has been added to help during the transition. It will be
removed later.

The unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
2016-02-09 14:53:15 +01:00
Christopher Faulet
d7c9196ae5 MAJOR: filters: Add filters support
This patch adds support for filters in HAProxy. The main idea is to have a way
to easily extend HAProxy by adding some "modules", called filters, that will be
able to change HAProxy's behavior in a programmatic way.

To do so, many entry points have been added in the code to let filters hook
into different steps of the processing. A filter must define a flt_ops
structure (see include/types/filters.h for details). This structure contains
all the available callbacks that a filter can define:

struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);

        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);

       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};

To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:

  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]

The filter referenced by <FILTER-NAME> must declare a configuration parser on
its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.
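
As a purely illustrative sketch (hypothetical filter name, assuming only the
flt_ops structure shown above and the usual includes; the registration of the
configuration parser is left out):

#include <types/filters.h>

/* per-proxy initialization for the hypothetical "my-filter" */
static int my_filter_init(struct proxy *p)
{
    /* allocate/validate per-proxy state here */
    return 0;
}

/* called when a new stream starts */
static void my_filter_stream_start(struct stream *s)
{
    /* per-stream setup here */
}

/* only the callbacks the filter cares about need to be set */
struct flt_ops my_filter_ops = {
    .init         = my_filter_init,
    .stream_start = my_filter_stream_start,
};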

For now, filters cannot be used in a backend section, but this is only a matter
of time. Documentation will also be added later. This is the first commit of a
long series about filters.

It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.

The filter API has been highly refactored. Main changes are:

* Now, HAProxy supports an unlimited number of filters per proxy. To do so,
  filters are stored in a list.

* Because filters are stored in a list, the filter state has been moved from the
  channel structure to the filter structure. This is cleaner because there is no
  longer any filter-related info in the channel structure.

* It is possible to define filters on backends only. For such filters,
  stream_start/stream_stop callbacks are not called. Of course, it is possible
  to mix frontend and backend filters.

* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
  are called for all kinds of streams. In addition, 2 new callbacks were added
  to filter data exchanged through a TCP stream:

    - tcp_data: it is called when new data are available or when old unprocessed
      data are still waiting.

    - tcp_forward_data: it is called when some data can be consumed.

* New callbacks attached to channel were added:

    - channel_start_analyze: it is called when a filter is ready to process data
      exchanged through a channel. 2 new analyzers (a frontend and a backend)
      are attached to channels to call this callback. For a frontend filter, it
      is called before any other analyzer. For a backend filter, it is called
      when a backend is attached to a stream. So some processing cannot be
      filtered in that case.

    - channel_analyze: it is called before each analyzer attached to a channel,
      except analyzers responsible for data sending.

    - channel_end_analyze: it is called when all other analyzers have finished
      their processing. A new analyzer is attached to channels to call this
      callback. For a TCP stream, this is always the last one called. For an
      HTTP one, the callback is called when a request/response ends, so it is
      called once for each request/response.

* 'session_established' callback has been removed. Everything that is done in
  this callback can be handled by 'channel_start_analyze' on the response
  channel.

* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
  'channel_analyze'.

* 'http_start' callback has been replaced by 'http_headers'. This new one is
  called just before the headers are sent and before the body is parsed.

* 'http_end' callback has been replaced by 'channel_end_analyze'.

* It is possible to set a forwarder for TCP channels. It was already possible to
  do it for HTTP ones.

* Forwarders can partially consume forwardable data. For this reason a new
  HTTP message state was added before HTTP_MSG_DONE: HTTP_MSG_ENDING.

Now all filters can define the corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not yet
forwarded by the filter, and parsed data considered as forwarded by the filter.
At any time, we have the guarantee that a filter cannot parse or forward more
input than the previous ones. And, of course, it cannot forward more input than
it has parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and
FLT_FWD.

In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so their size can vary. This action has an effect on all previous
filters, which must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations: it must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.

Because filters can be used on backends, when the backend is set for a stream,
we add the filters defined for this backend to the filter list of the stream.
But we must only do that when the backend and the frontend of the stream are
not the same; otherwise the same filters would be added a second time, leading
to undefined behavior.

The HTTP compression code had to be moved.

This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. For now, a single filter (and only one) can
forward data. In a commit to come, this limitation will be removed to let all
filters take part in data forwarding. There are 2 new functions that filters
should use to deal with this feature:

 * flt_set_http_data_forwarder: This function sets the filter (using its id)
   that will forward data for the specified HTTP message. This is only possible
   if it was not already set by another filter _AND_ if no data have yet been
   forwarded (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.

 * flt_http_data_forwarder: This function returns the filter id that will
   forward data for the specified HTTP message. If there is no forwarder set, it
   returns -1.

When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
2016-02-09 14:53:15 +01:00
David CARLIER
7385f65283 BUILD: Make deviceatlas require PCRE
Makefile deviceatlas throwing an error if the necessary pcre flag
is not passed avoiding surprising bunch of 'undefined reference'
for the user. Plus a tiny typo in OPENSSL area.

[wt: backport to 1.6]
2015-11-10 08:26:24 +01:00
Jerome Duval
38932c391c BUILD: add Haiku as supported target. 2015-11-02 20:32:08 +01:00
Jerome Duval
796d2fc136 BUG/BUILD: replace haproxy-systemd-wrapper with $(EXTRA) in install-bin.
[wt: this should be backported to 1.6 and 1.5 as well since some platforms
 don't build the systemd-wrapper]
2015-11-02 20:32:08 +01:00
Vincent Bernat
5c5147fa76 BUILD: install only relevant and existing documentation
doc/haproxy-{en,fr}.txt have been removed recently but they were still
referenced in the Makefile. Many other documents have also been
added. Instead of hard-coding a list of documents to install, install
all those in doc/ with some exceptions:

 - coding-style.txt is more for developers
 - gpl.txt and lgpl.txt are usually present at other places (and I would
   have to remove them in the Debian packaging, less work for me)

The documentation in the subdirectories is not installed as it is more
targeted to developers.
2015-10-13 23:40:22 +02:00
James Rosewell
3670eb1d74 BUILD: Changed 51Degrees option to support V3.2
Added support for city hash method, turned off multi threading support
and included maths library. Removed reference to compression library
which was never needed.
2015-09-21 12:14:11 +02:00
Willy Tarreau
29fbe51490 MAJOR: tproxy: remove support for cttproxy
This was the first transparent proxy technology supported by haproxy
circa 2005 but it was obsoleted in 2007 by Tproxy 4.0 which removed a
lot of the earlier versions' shortcomings and was finally merged into
the kernel. Since nobody has been using cttproxy for many years now
and nobody has even just tried to compile the files, it's time to
remove it. The doc was updated as well.
2015-08-20 19:35:14 +02:00
Cyril Bonté
8e441fb4ed BUILD: add USE_LUA to BUILD_OPTIONS when it's used
haproxy -vv doesn't indicate that USE_LUA was specified at compilation time.
This is caused by the Makefile, which doesn't update BUILD_OPTIONS.
2015-08-16 23:55:32 +02:00
Willy Tarreau
9496552e7d CLEANUP: appsession: remove appsession.c and sessionhash.c
Now that there's no more code using appsessions, we can remove them.
2015-08-10 19:17:47 +02:00
Vincent Bernat
e192cbb585 BUILD: link with libdl if needed for Lua support
On platforms where the dl*() functions are not part of the libc, a
program linking Lua also needs to link to libdl.

Moreover, on platforms using a gold linker with the --as-needed flag,
the libdl library needs to be linked after linking Lua, otherwise, it
won't be marked as needed and will be discarded and its symbols won't be
present at the end of the linking phase.

Ubuntu enables the --as-needed flag by default. Other distributions may
advertise its use, like Gentoo.
2015-07-23 09:47:02 +02:00
David Carlier
b5714dab9d BUILD: add netbsd TARGET
For now it's the same as openbsd.
2015-07-02 11:33:03 +02:00
Dragan Dosen
93b38d9191 MEDIUM: 51Degrees code refactoring and cleanup
Moved the 51Degrees code from src/haproxy.c, src/sample.c and src/cfgparse.c
into separate files, src/51d.c and include/import/51d.h.

Added two new functions, init_51degrees() and deinit_51degrees(), updated the
Makefile and made other code reorganizations related to 51Degrees.
2015-06-30 10:43:03 +02:00
Thierry FOURNIER
4834bc773c MEDIUM: vars: adds support of variables
This patch adds support for variables during the processing of each stream.
The variable scope can be set to 'session', 'transaction', 'request' or
'response'. The variable type is the type returned by the assignment
expression; the type can change during processing.

The allocated memory can be controlled for each scope and each request, as
well as for the global process.
2015-06-13 23:01:37 +02:00
Baptiste Assmann
325137d603 MEDIUM: dns: implement a DNS resolver
Implementation of a DNS client in HAProxy to perform name resolution to
IP addresses.

It relies on the freshly created UDP client to perform the DNS
resolution. For now, all UDP socket calls are performed in the
DNS layer, but this might change later when the protocols are
extended to be more suited to datagram mode.

A new section called 'resolvers' is introduced by this patch. It is used to
describe the DNS servers' IP addresses as well as several parameters.
2015-06-13 22:07:35 +02:00
Baptiste Assmann
5d4e4f7a57 MEDIUM: protocol: add minimalist UDP protocol client
Basic introduction of a UDP layer in HAProxy. It can be used as a
client only and manages UDP exchanges with servers.

It can't be used to load-balance UDP protocols, but only used by
internal features such as DNS resolution.
2015-06-13 22:07:35 +02:00
Willy Tarreau
82bd42e27a BUILD: make DeviceAtlas easier to build by defaulting to DEVICEATLAS_SRC
Since both DEVICEATLAS_INC and DEVICEATLAS_LIB are set to the same path
when building from sources, simply allow DEVICEATLAS_SRC to be set alone
to simplify the build procedure.
2015-06-02 19:30:59 +02:00
Willy Tarreau
c7203c7a5a BUILD: make 51D easier to build by defaulting to 51DEGREES_SRC
Till now 3 paths were needed, 51DEGREES_SRC, 51DEGREES_INC, and
51DEGREES_LIB.  Let's make the last two default to 51DEGREES_SRC since
it's the same location, and fix the doc to reflect this (all three were
documented but inconsistently).
2015-06-02 19:30:59 +02:00
Thomas Holmes
0ca65f8217 BUILD: add 51degrees options to makefile.
To build with 51Degrees set USE_51DEGREES=1. 51DEGREES_INC, 51DEGREES_LIB,
and 51DEGREES_SRC will need to be set to the 51Degrees pattern header and
C file.
2015-06-02 13:43:15 +02:00
David Carlier
a03fb1433d BUILD: Makefile: add options to build with DeviceAtlas
This diff updates the Makefile to compile the module conditionally via some
new flags: USE_DEVICEATLAS to enable the module, and the
DEVICEATLAS_INC/DEVICEATLAS_LIB pair, which needs to point to the API root
folder in order to compile the API and the module.
2015-06-02 13:42:11 +02:00
Willy Tarreau
b5684e0081 IMPORT: hash: import xxhash-r39
The xxhash library provides a very fast and excellent hash algorithm
suitable for many purposes. It excels at hashing large blocks but is
also extremely fast on small ones. It's distributed under a 2-clause
BSD license (GPL-compatible) so it can be included here. Updates are
distributed here :

      https://github.com/Cyan4973/xxHash
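
A minimal sketch of typical usage of the imported API (the include path and
call site below are illustrative, not taken from the haproxy code):

#include <stdio.h>
#include <string.h>
#include "xxhash.h"   /* adjust to the local include path */

int main(void)
{
    const char *key = "www.haproxy.org";
    unsigned int seed = 0;

    /* one-shot hashing of a small buffer */
    unsigned int h = XXH32(key, strlen(key), seed);
    printf("xxh32(%s) = %08x\n", key, h);
    return 0;
}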
2015-04-29 19:15:21 +02:00
Willy Tarreau
69c696c138 IMPORT: lru: import simple ebtree-based LRU functions
This will be usable to implement some maps/acl caches for heavy datasets
loaded from files (mostly regex-based but in general anything that cannot
be indexed in a tree).
2015-04-29 19:14:43 +02:00
Willy Tarreau
81f38d6f57 MEDIUM: applet: add basic support for an applet run queue
This will be needed so that we can schedule applets out of the streams.
For now nothing calls the queue yet.
2015-04-23 17:56:16 +02:00
Willy Tarreau
b1ec8c4a59 MINOR: session: start to reintroduce struct session
There is now a pointer to the session in the stream, which is NULL
for now. The session pool is created as well. Some parts will move
from the stream to the session now.
2015-04-06 11:23:57 +02:00
Willy Tarreau
87b09668be REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session: it has buffers,
logs, etc.

In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.

The files stream.{c,h} were added and session.{c,h} removed.

The session will be reintroduced later and a few parts of the stream
will progressively be moved over there. It will more or less contain
only what we need in an embryonic session.

Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.

Once all changes are completed, we should see approximately this :

   L7 - http_txn
   L6 - stream
   L5 - session
   L4 - connection | applet

There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.

Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-06 11:23:56 +02:00
Willy Tarreau
418b8c0c41 MAJOR: compression: integrate support for libslz
This library is designed to emit a zlib-compatible stream with no
memory usage and to favor resource savings over compression ratio.
While zlib requires 256 kB of RAM per compression context (and can only
support 4000 connections per GB of RAM), the stateless compression
offered by libslz does not need to retain buffers between subsequent
calls. In theory this slightly reduces the compression ratio but in
practice it does not have that much of an effect since the zlib
window is limited to 32kB.

Libslz is available at :

      http://git.1wt.eu/web?p=libslz.git

It was designed for web compression and provides a lot of savings
over zlib in haproxy. Here are the preliminary results on a single
core of a core2-quad 3.0 GHz in 32-bit for only 300 concurrent
sessions visiting the home page of www.haproxy.org (76 kB) with
the default 16kB buffers :

          BW In      BW Out     BW Saved   Ratio   memory VSZ/RSS
zlib      237 Mbps    92 Mbps   145 Mbps   2.58     84M /  69M
slz       733 Mbps   380 Mbps   353 Mbps   1.93    5.9M / 4.2M

So while the compression ratio is lower, the bandwidth savings are
much more important due to the significantly lower compression cost,
which allows consuming even more data from the servers. In the
example above, zlib became the bottleneck at 24% of the output
bandwidth. Also the difference in memory usage is obvious.

More tests run on a single core of a core i5-3320M, with 500 concurrent
users and the default 16kB buffers :

At 100% CPU (no limit) :
          BW In      BW Out     BW Saved   Ratio   memory VSZ/RSS  hits/s
zlib      480 Mbps   188 Mbps   292 Mbps   2.55     130M / 101M     744
slz      1700 Mbps   810 Mbps   890 Mbps   2.10    23.7M / 9.7M    2382

At 85% CPU (limited) :
          BW In      BW Out     BW Saved   Ratio   memory VSZ/RSS  hits/s
zlib     1240 Mbps   976 Mbps   264 Mbps   1.27     130M / 100M    1738
slz      1600 Mbps   976 Mbps   624 Mbps   1.64    23.7M / 9.7M    2210

The most important benefit really happens when the CPU usage is
limited by "maxcompcpuusage" or the BW limited by "maxcomprate" :
in order to preserve resources, haproxy throttles the compression
ratio until usage is within limits. Since slz is much cheaper, the
average compression ratio is much higher and the input bandwidth
is quite higher for one Gbps output.

Other tests made with some reference files :

                           BW In     BW Out    BW Saved  Ratio  hits/s
daniels.html       zlib  1320 Mbps  163 Mbps  1157 Mbps   8.10    1925
                   slz   3600 Mbps  580 Mbps  3020 Mbps   6.20    5300

tv.com/listing     zlib   980 Mbps  124 Mbps   856 Mbps   7.90     310
                   slz   3300 Mbps  553 Mbps  2747 Mbps   5.97    1100

jquery.min.js      zlib   430 Mbps  180 Mbps   250 Mbps   2.39     547
                   slz   1470 Mbps  764 Mbps   706 Mbps   1.92    1815

bootstrap.min.css  zlib   790 Mbps  165 Mbps   625 Mbps   4.79     777
                   slz   2450 Mbps  650 Mbps  1800 Mbps   3.77    2400

So on top of saving a lot of memory, slz is constantly 2.5-3.5 times
faster than zlib and results in providing more savings for a fixed CPU
usage. For links smaller than 100 Mbps, zlib still provides a better
compression ratio, at the expense of a much higher CPU usage.

Larger input files provide slightly higher bandwidth for both libs, at
the expense of a bit more memory usage for zlib (it converges to 256kB
per connection).
2015-03-29 03:32:06 +02:00
Willy Tarreau
71b99ef3dc BUILD: fix automatic inclusion of libdl.
Last commit ecc9547 ("BUILD: lua: it miss the '-ldl' directive") broke the
build on systems without libdl (e.g. FreeBSD). Since Lua requires libdl
on some systems, let's simplify this by adding a USE_DL build directive
to enable/disable the use of libdl. It's set by default on all Linux flavors.
2015-03-17 14:33:22 +01:00
Thierry FOURNIER
ecc954703f BUILD: lua: it miss the '-ldl' directive
The Lua library requires the 'dl' library.
2015-03-17 11:44:13 +01:00
Thierry FOURNIER
463119ccc1 BUG/BUILD: lua: The strict Lua 5.3 version check is not done.
This patch fixes the Lua library check: only version
5.3 or later is allowed.

This bug was added by the patch "MEDIUM: lua: use the
Lua-5.3 version of the library" with commit id

   f90838b71a
2015-03-10 10:17:48 +01:00
Thierry FOURNIER
f90838b71a MEDIUM: lua: use the Lua-5.3 version of the library
The Lua-5.3 version of the library adds a required function to fix
a bug with the forced-yield system.

This patch permits building with the Lua-5.3 library. The main changes
are:
 - the "unsigned" type disappears, replaced by a signed type,
 - the prototype of the yield function callback changes.
2015-03-09 17:47:52 +01:00
Cyril Bonté
c21adb5b00 BUILD: try to automatically detect the Lua library name
Depending on the distribution, the Lua library can have different names.
Some distributions will require -llua5.2, others -llua52, and other systems may
require -llua.

Now, the Makefile will try to guess the library name, in order of priority :
"lua5.2", "lua52", or "lua".
2015-03-04 10:11:57 +01:00
Thierry FOURNIER
6f1fd48ef1 MEDIUM: lua: lua integration in the build and init system.
This is the first step of the Lua integration. We add the useful
files to the HAProxy project. These files contain the main
includes, the Makefile options and an empty initialisation function.
It is the Lua skeleton.
2015-02-28 23:12:33 +01:00
Willy Tarreau
713a7566af BUILD: Makefile: add -Wdeclaration-after-statement
This one makes it easier to detect accidentally misplaced variable
declarations in the code, which are always a pain to deal with when
functions grow.
2015-02-28 23:12:30 +01:00
Ilyas Bakirov
dfb124fe0d BUILD: add new target 'make uninstall' to support uninstalling haproxy from OS 2015-02-04 13:13:30 +01:00
Simon Horman
0d16a4011e MEDIUM: Add parsing of mailers section
Add mailer and mailers structures and allow parsing of
a mailers section into those structures.

These structures will subsequently be freed as it is
not yet possible to reference them in the configuration.

Signed-off-by: Simon Horman <horms@verge.net.au>
2015-02-03 00:24:16 +01:00
KOVACS Krisztian
b3e54fe387 MAJOR: namespace: add Linux network namespace support
This patch makes it possible to create binds and servers in separate
namespaces.  This can be used to proxy between multiple completely independent
virtual networks (with possibly overlapping IP addresses) and a
non-namespace-aware proxy implementation that supports the proxy protocol (v2).

The setup is something like this:

net1 on VLAN 1 (namespace 1) -\
net2 on VLAN 2 (namespace 2) -- haproxy ==== proxy (namespace 0)
net3 on VLAN 3 (namespace 3) -/

The proxy is configured to make server connections through haproxy and to
send the expected source/target addresses to haproxy using the proxy protocol.

The network namespace setup on the haproxy node is something like this:

= 8< =
$ cat setup.sh
ip netns add 1
ip link add link eth1 type vlan id 1
ip link set eth1.1 netns 1
ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
ip netns exec 1 ip link set eth1.$id up
...
= 8< =

= 8< =
$ cat haproxy.cfg
frontend clients
  bind 127.0.0.1:50022 namespace 1 transparent
  default_backend scb

backend server
  mode tcp
  server server1 192.168.122.4:2222 namespace 2 send-proxy-v2
= 8< =

A bind line creates the listener in the specified namespace, and connections
originating from that listener also have their network namespace set to
that of the listener.

A server line either forces the connection to be made in a specified
namespace or may use the namespace from the client-side connection if that
was set.

For more details, please read the documentation included in the patch
itself.

Signed-off-by: KOVACS Tamas <ktamas@balabit.com>
Signed-off-by: Sarkozi Laszlo <laszlo.sarkozi@balabit.com>
Signed-off-by: KOVACS Krisztian <hidden@balabit.com>
2014-11-21 07:51:57 +01:00
Arcadiy Ivanov
3785311e64 BUILD: fix "make install" to support spaces in the install dirs
Makefile is unable to install into directories containing spaces.
2014-11-10 12:03:03 +01:00
Willy Tarreau
cd360cec7a BUG/BUILD: revert accidental change in the makefile from latest SSL fix
Commit 0bed994 ("BUG/MINOR: ssl: correctly initialize ssl ctx for
invalid certificates") accidentally left a change in the Makefile
resulting in -ldl being appended to the LDFLAGS. As reported by
Dmitry Sivachenko, this will break the build on systems without libdl
such as FreeBSD.

This fix must be backported to 1.5.
2014-10-31 07:39:04 +01:00