Commit Graph

2062 Commits

Author SHA1 Message Date
Thierry Fournier
ada348459f MEDIUM: dns: extract options
DNS selection preferences are currently declared inline in the
struct server. They are copied from the server struct to the
dns_resolution struct for each resolution.

The next patches add new preference options, and copying all the
configuration information before each DNS resolution is not a
good approach.

This patch extracts the configuration preferences from the struct
server and declares a new dedicated struct. Only a pointer to this
new struct will be copied before each DNS resolution.
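
As an illustration of the approach (the struct and field names below are hypothetical,
not the ones introduced by the patch), the preferences live in one dedicated structure
and each resolution only keeps a pointer to it:

#include <stdio.h>

/* Hypothetical dedicated structure holding the DNS preferences. */
struct dns_options {
        int prefer_ipv6;
        int accept_duplicate_ip;
};

/* The resolution only references the shared configuration. */
struct dns_resolution_example {
        const struct dns_options *opts;   /* one pointer copied per resolution */
};

int main(void)
{
        struct dns_options srv_opts = { .prefer_ipv6 = 1, .accept_duplicate_ip = 0 };
        struct dns_resolution_example res = { .opts = &srv_opts };

        printf("prefer_ipv6=%d\n", res.opts->prefer_ipv6);
        return 0;
}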
2016-02-19 14:37:46 +01:00
Thierry Fournier
70473a5f8c MINOR: common: mask conversion
Add a function which converts a network mask from bit-length form
to struct in*_addr form.
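
For illustration only (the name and exact signature of the function added by this
patch are not shown here), a prefix length can be turned into a struct in_addr
mask like this:

#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* Illustrative helper: build an IPv4 mask from a prefix length (0-32). */
static void len2mask_example(int len, struct in_addr *mask)
{
        mask->s_addr = (len <= 0) ? 0 : htonl(~0U << (32 - len));
}

int main(void)
{
        struct in_addr m;
        char buf[INET_ADDRSTRLEN];

        len2mask_example(24, &m);
        printf("%s\n", inet_ntop(AF_INET, &m, buf, sizeof(buf)));   /* 255.255.255.0 */
        return 0;
}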
2016-02-19 14:37:41 +01:00
Thierry Fournier
49d4842e98 BUG/MAJOR: lua: segfault using Concat object
Concat object is based on "luaL_Buffer". The luaL_Buffer documentation says:

   During its normal operation, a string buffer uses a variable number of stack
   slots. So, while using a buffer, you cannot assume that you know where the
   top of the stack is. You can use the stack between successive calls to buffer
   operations as long as that use is balanced; that is, when you call a buffer
   operation, the stack is at the same level it was immediately after the
   previous buffer operation. (The only exception to this rule is
   luaL_addvalue.) After calling luaL_pushresult the stack is back to its level
   when the buffer was initialized, plus the final string on its top.

So, the stack cannot be manipulated between the first call to the function
"luaL_buffinit()" and the last call to the function "luaL_pushresult()", because
we cannot know the stack status.

Moreover, the memory used by these functions seems to be collected by the GC, so
if the GC is triggered while the Concat object is in use, already-released
memory may be accessed.

This patch rewrites the Concat class without the "luaL_Buffer" system. It uses
"userdata()" for the memory allocation of the buffer strings.
2016-02-19 13:24:09 +01:00
Pieter Baauw
46af170e41 MINOR: mailers: increase default timeout to 10 seconds
This allows the TCP connection to send multiple SYN packets, so one lost
packet does not cause the mail to be lost. It changes the socket timeout
from 2 to 10 seconds; this allows 3 SYN packets to be sent while waiting
a little for their reply.

This patch should be backported to 1.6.

Acked-by: Simon Horman <horms@verge.net.au>
2016-02-17 10:19:08 +01:00
Willy Tarreau
ae79572f89 MINOR: cli: add a new "show env" command
Using environment variables in configuration files can make troubleshooting
complicated because there's no easy way to verify that the variables are
correct. This patch introduces a new "show env" command which displays the
whole environment on the CLI, one variable per line.

The socket must have at least the operator level to display the environment.
2016-02-16 11:43:03 +01:00
Dragan Dosen
835b9212f6 MEDIUM: log: add a new log format flag "E"
The +E mode escapes characters '"', '\' and ']' with '\' as prefix. It
mostly makes sense to use it in the RFC5424 structured-data log formats.

Example:

log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]]
2016-02-12 13:36:47 +01:00
Dragan Dosen
0edd10925d MINOR: standard: add function "escape_chunk"
This function tries to prefix all characters tagged in the <map> with the
<escape> character. The specified <chunk> contains the input to be
escaped.
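
A self-contained sketch of this kind of escaping (escape_chunk itself works on
HAProxy chunks; the helper below is only illustrative):

#include <stdio.h>
#include <string.h>

/* Copy <in> into <out>, prefixing every character present in <map> with
 * <escape>. Returns the output length, or -1 if <out> is too small. */
static int escape_string(char *out, size_t out_size, const char *in,
                         const char *map, char escape)
{
        size_t o = 0;

        for (; *in; in++) {
                size_t need = strchr(map, *in) ? 2 : 1;

                if (o + need + 1 > out_size)
                        return -1;
                if (need == 2)
                        out[o++] = escape;
                out[o++] = *in;
        }
        out[o] = '\0';
        return (int)o;
}

int main(void)
{
        char buf[64];

        escape_string(buf, sizeof(buf), "name=[x\"y]", "\"\\]", '\\');
        printf("%s\n", buf);   /* name=[x\"y\] */
        return 0;
}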
2016-02-12 13:36:47 +01:00
Thierry Fournier
9e7e3ea991 MINOR: lua: move common function
This patch moves the function hlua_checkudata, which checks that
an object contains the expected class_reference as metatable.
This function is commonly used by all the Lua functions.
The function hlua_metatype is also moved.
2016-02-12 11:08:53 +01:00
Thierry Fournier
9312794ed7 MINOR: standard: add RFC HTTP date parser
This parser takes a string containing an HTTP date. It returns
a broken-down time struct. We must consider this time as GMT.
Maybe later the timezone will be taken into account.
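
A minimal sketch with the standard strptime() (the parser added here is HAProxy's
own; this only illustrates the "broken-down time, interpreted as GMT" idea for the
IMF-fixdate format):

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <time.h>

int main(void)
{
        struct tm tm = { 0 };
        const char *date = "Sun, 06 Nov 1994 08:49:37 GMT";

        /* The result is broken-down time that must be treated as GMT,
         * not converted through the local timezone. */
        if (!strptime(date, "%a, %d %b %Y %H:%M:%S GMT", &tm))
                return 1;
        printf("%04d-%02d-%02d %02d:%02d:%02d GMT\n",
               tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
               tm.tm_hour, tm.tm_min, tm.tm_sec);
        return 0;
}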
2016-02-12 11:08:53 +01:00
Thierry Fournier
fb0b5467ca MINOR: lua: file dedicated to unsafe functions
When Lua executes functions from its API, these can throw an error.
These functions must be executed in a special environment which catches
these errors, otherwise a critical error (like a segfault) can be raised.

This patch adds a C file called "hlua_fcn.c" which collects all the
Lua/C functions needing a safe environment for their execution.
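
The general pattern, sketched standalone: a C function that may raise a Lua error
is never called directly but pushed and invoked through lua_pcall(), so the error
is caught instead of aborting the process.

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>

/* A function using the Lua API that can throw an error. */
static int may_throw(lua_State *L)
{
        return luaL_error(L, "something went wrong");
}

int main(void)
{
        lua_State *L = luaL_newstate();

        lua_pushcfunction(L, may_throw);
        /* Protected call: the error is left on the stack instead of
         * propagating through longjmp() into our frame. */
        if (lua_pcall(L, 0, 0, 0) != LUA_OK)
                printf("caught: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 0;
}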
2016-02-12 11:08:53 +01:00
Thierry Fournier
8feaa661b6 MINOR: map: Add regex matching replacement
This patch declares a new map which provides a string based on the input
string, with back references replaced by the content matched by the regex.
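
A standalone sketch of back-reference substitution with POSIX regex (illustrative
only; the map added by the patch relies on HAProxy's own regex helpers and sample
converters):

#include <stdio.h>
#include <string.h>
#include <regex.h>

/* Replace \1..\9 in <tmpl> with the groups captured by <re> in <in>. */
static int regsub(char *out, size_t out_size, const regex_t *re,
                  const char *in, const char *tmpl)
{
        regmatch_t pmatch[10];
        size_t o = 0;

        if (regexec(re, in, 10, pmatch, 0) != 0)
                return -1;
        for (; *tmpl && o + 1 < out_size; tmpl++) {
                if (*tmpl == '\\' && tmpl[1] >= '1' && tmpl[1] <= '9') {
                        const regmatch_t *m = &pmatch[*++tmpl - '0'];
                        size_t len = (size_t)(m->rm_eo - m->rm_so);

                        if (m->rm_so < 0 || o + len + 1 > out_size)
                                return -1;
                        memcpy(out + o, in + m->rm_so, len);
                        o += len;
                } else
                        out[o++] = *tmpl;
        }
        out[o] = '\0';
        return (int)o;
}

int main(void)
{
        regex_t re;
        char buf[64];

        if (regcomp(&re, "^([^.]+)\\.example\\.com$", REG_EXTENDED))
                return 1;
        regsub(buf, sizeof(buf), &re, "www.example.com", "host-\\1");
        printf("%s\n", buf);   /* host-www */
        regfree(&re);
        return 0;
}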
2016-02-10 23:38:34 +01:00
Christopher Faulet
443ea1a242 MINOR: filters: Extract proxy stuff from the struct filter
Now, the filter's configuration (.id, .conf and .ops fields) is stored in the
structure 'flt_conf'. So proxies own a flt_conf list instead of a filter
list. When a filter is attached to a stream, it gets a pointer to its
configuration. This avoids mixing the filter's context (owned by a stream) and
its configuration (owned by a proxy). It also saves 2 pointers per filter
instance.
2016-02-09 14:53:15 +01:00
Christopher Faulet
113f7decfc MINOR: filters/http: Slightly update the parsing of chunks
Now, http_parse_chunk_size and http_skip_chunk_crlf return the number of bytes
parsed on success. http_skip_chunk_crlf does not use msg->sol anymore.

On the other hand, http_forward_trailers is unchanged. It returns >0 if the end
of trailers is reached and 0 if not. In all cases (except if an error is
encountered), msg->sol contains the length of the last parsed part of the
trailer headers.

Internal doc and comments about msg->sol have been updated accordingly.
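
For illustration, a tiny standalone parser with a comparable return convention
(the conventions of 0 for "need more data" and -1 for "error" are chosen for this
sketch); the real http_parse_chunk_size works on HAProxy buffers and also handles
chunk extensions:

#include <stdio.h>

/* Parse a "<hex-size>CRLF" chunk-size line from <buf> of length <len>.
 * Chunk extensions are ignored for brevity. */
static int parse_chunk_size(const char *buf, int len, unsigned int *res)
{
        unsigned int size = 0;
        int i;

        for (i = 0; i < len; i++) {
                char c = buf[i];

                if (c >= '0' && c <= '9')
                        size = (size << 4) + (unsigned int)(c - '0');
                else if (c >= 'a' && c <= 'f')
                        size = (size << 4) + (unsigned int)(c - 'a' + 10);
                else if (c >= 'A' && c <= 'F')
                        size = (size << 4) + (unsigned int)(c - 'A' + 10);
                else
                        break;
        }
        if (i == 0)
                return -1;          /* not a hex digit */
        if (i + 2 > len)
                return 0;           /* CRLF not fully received yet */
        if (buf[i] != '\r' || buf[i + 1] != '\n')
                return -1;
        *res = size;
        return i + 2;               /* bytes parsed, including the CRLF */
}

int main(void)
{
        unsigned int size;
        int ret = parse_chunk_size("1a\r\nhello", 9, &size);

        printf("parsed=%d size=%u\n", ret, size);   /* parsed=4 size=26 */
        return 0;
}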
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e7bc67722 MINOR: filters: Remove unused or useless stuff and do small optimizations 2016-02-09 14:53:15 +01:00
Christopher Faulet
da02e17d42 MAJOR: filters: Require explicit registration to filter HTTP body and TCP data
Before, the functions to filter the HTTP body (and TCP data) were called from the
moment at least one filter was attached to the stream. If no filter is interested
in these data, this uselessly slows down data parsing.
A good example is the HTTP compression filter. Depending on the request and
response headers, the response compression can be enabled or not. So it is much
nicer to call it only when enabled.

So, now, to filter HTTP/TCP data, a filter must use the function
register_data_filter. For TCP streams, this function can be called only
once. But for HTTP streams, when needed, it must be called for each HTTP request
or HTTP response.
Only registered filters will be called during data parsing. At any time, a
filter can be unregistered by calling the function unregister_data_filter.
2016-02-09 14:53:15 +01:00
Christopher Faulet
fcf035cb5a MINOR: filters: Add stream_filters structure to hide filters info
From the stream point of view, this new structure is opaque. It hides the
filters' implementation details. So, the impact of future optimizations will be
reduced (well, we hope so...).

Some small improvements have been made in filters.c to avoid useless checks.
2016-02-09 14:53:15 +01:00
Christopher Faulet
309c6418b0 MEDIUM: filters: Replace filter_http_headers callback by an analyzer
This new analyzer will be called for each HTTP request/response, before the
parsing of the body. It is identified by AN_FLT_HTTP_HDRS.

Special care was taken about the following condition :

  * the frontend is a TCP proxy
  * filters are defined in the frontend section
  * the selected backend is an HTTP proxy

So, this patch explicitly adds the AN_FLT_HTTP_HDRS analyzer on the request and the
response channels when the backend is an HTTP proxy and when there are filters
attached to the stream.
This patch simplifies http_request_forward_body and http_response_forward_body
functions.
2016-02-09 14:53:15 +01:00
Christopher Faulet
2fb2880caf MEDIUM: filters: remove http_start_chunk, http_last_chunk and http_chunk_end
For chunked HTTP requests/responses, the body filtering can be really
expensive. In the worst case (many chunks of 1 byte), the filter overhead is
3 calls per chunk. While the http_data callback is useful, the others are just
informative.

So these callbacks have been removed. Of course, the existing filters (trace and
compression) have been updated accordingly. For the HTTP compression filter, the
update is quite extensive. Its implementation is closer to the old one.
2016-02-09 14:53:15 +01:00
Christopher Faulet
3e34429515 MEDIUM: filters: Use macros to call filters callbacks to speed-up processing
When no filter is attached to the stream, the CPU footprint due to the calls to
filters_* functions is huge, especially for chunk-encoded messages. Using macros
to check whether we have some filters or not is a great improvement.

Furthermore, instead of checking the filter list emptiness, we introduce a flag
to know whether or not filters are attached to a stream.
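
A minimal sketch of the pattern (the flag and macro names are illustrative, not
HAProxy's): the macro tests a stream flag first, so the filter function is not
even called when nothing is attached.

#include <stdio.h>

#define EX_FLT_ATTACHED 0x01

struct ex_stream {
        unsigned int flags;
};

static int ex_flt_data(struct ex_stream *s, int len)
{
        (void)s;
        printf("filters invoked\n");
        return len;
}

/* Skip the function call entirely when no filter is attached. */
#define EX_HAS_FILTERS(s)   ((s)->flags & EX_FLT_ATTACHED)
#define EX_FLT_DATA(s, len) (EX_HAS_FILTERS(s) ? ex_flt_data((s), (len)) : (len))

int main(void)
{
        struct ex_stream plain = { .flags = 0 };
        struct ex_stream filtered = { .flags = EX_FLT_ATTACHED };

        printf("plain: %d bytes\n", EX_FLT_DATA(&plain, 128));     /* no call made */
        printf("filtered: %d bytes\n", EX_FLT_DATA(&filtered, 128));
        return 0;
}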
2016-02-09 14:53:15 +01:00
Christopher Faulet
92d3638d2d MAJOR: filters/http: Rewrite the HTTP compression as a filter
HTTP compression has been rewritten to use the filter API. This is more a PoC
than anything else for now. It allocates memory to work. So, if only for that, it
should be rewritten.

In the meantime, the implementation has been refactored to allow its use with
other filters. However, there are limitations that should be respected:

  - No filter placed after the compression one is allowed to change input data
    (in 'http_data' callback).
  - No filter placed before the compression one is allowed to change forwarded
    data (in 'http_forward_data' callback).

For now, these limitations are informal, so you should be careful when you use
several filters.

About the configuration, 'compression' keywords are still supported and must be
used to configure the HTTP compression behavior. In the absence of a 'filter' line
for the compression filter, it is added to the filter chain when the first
'compression' line is parsed. This is an easy way to proceed when you do not use
other filters. But if another filter exists, an error is reported so that the user
must explicitly declare the filter.

For example:

  listen tst
      ...
      compression algo gzip
      compression offload
      ...
      filter flt_1
      filter compression
      filter flt_2
      ...
2016-02-09 14:53:15 +01:00
Christopher Faulet
3d97c90974 REORG: filters: Prepare creation of the HTTP compression filter
HTTP compression will be moved into a true filter. To prepare the ground, some
functions have been moved into a dedicated file. The idea is to keep everything
about compression algos in compression.c and everything related to the filtering
in flt_http_comp.c.

For now, a header has been added to help during the transition. It will be
removed later.

Unused empty ACL keyword list was removed. The "compression" keyword
parser was moved from cfgparse.c to flt_http_comp.c.
2016-02-09 14:53:15 +01:00
Christopher Faulet
d7c9196ae5 MAJOR: filters: Add filters support
This patch adds support for filters in HAProxy. The main idea is to have a
way to "easily" extend HAProxy by adding some "modules", called filters, that
will be able to change HAProxy's behavior in a programmatic way.

To do so, many entry points have been added in the code to let filters hook into
different steps of the processing. A filter must define a flt_ops structure
(see include/types/filters.h for details). This structure contains all available
callbacks that a filter can define:

struct flt_ops {
       /*
        * Callbacks to manage the filter lifecycle
        */
       int  (*init)  (struct proxy *p);
       void (*deinit)(struct proxy *p);
       int  (*check) (struct proxy *p);

        /*
         * Stream callbacks
         */
        void (*stream_start)     (struct stream *s);
        void (*stream_accept)    (struct stream *s);
        void (*session_establish)(struct stream *s);
        void (*stream_stop)      (struct stream *s);

       /*
        * HTTP callbacks
        */
       int  (*http_start)         (struct stream *s, struct http_msg *msg);
       int  (*http_start_body)    (struct stream *s, struct http_msg *msg);
       int  (*http_start_chunk)   (struct stream *s, struct http_msg *msg);
       int  (*http_data)          (struct stream *s, struct http_msg *msg);
       int  (*http_last_chunk)    (struct stream *s, struct http_msg *msg);
       int  (*http_end_chunk)     (struct stream *s, struct http_msg *msg);
       int  (*http_chunk_trailers)(struct stream *s, struct http_msg *msg);
       int  (*http_end_body)      (struct stream *s, struct http_msg *msg);
       void (*http_end)           (struct stream *s, struct http_msg *msg);
       void (*http_reset)         (struct stream *s, struct http_msg *msg);
       int  (*http_pre_process)   (struct stream *s, struct http_msg *msg);
       int  (*http_post_process)  (struct stream *s, struct http_msg *msg);
       void (*http_reply)         (struct stream *s, short status,
                                   const struct chunk *msg);
};

To declare and use a filter, in the configuration, the "filter" keyword must be
used in a listener/frontend section:

  frontend test
    ...
    filter <FILTER-NAME> [OPTIONS...]

The filter referenced by <FILTER-NAME> must declare a configuration parser
on its own name to fill the flt_ops and filter_conf fields in the proxy's
structure. An example will be provided later to make it perfectly clear.

For now, filters cannot be used in a backend section. But this is only a matter of
time. Documentation will also be added later. This is the first commit of a long
list about filters.

It is possible to have several filters on the same listener/frontend. These
filters are stored in an array of at most MAX_FILTERS elements (defined in
include/types/filters.h). Again, this will be replaced later by a list of
filters.

The filter API has been highly refactored. Main changes are:

* Now, HA supports an infinite number of filters per proxy. To do so, filters
  are stored in a list.

* Because filters are stored in a list, the filter state has been moved from the
  channel structure to the filter structure. This is cleaner because there is no
  longer any info about filters in the channel structure.

* It is possible to define filters on backends only. For such filters,
  stream_start/stream_stop callbacks are not called. Of course, it is possible
  to mix frontend and backend filters.

* Now, TCP streams are also filtered. All callbacks without the 'http_' prefix
  are called for all kinds of streams. In addition, 2 new callbacks were added to
  filter data exchanged through a TCP stream:

    - tcp_data: it is called when new data are available or when old unprocessed
      data are still waiting.

    - tcp_forward_data: it is called when some data can be consumed.

* New callbacks attached to channel were added:

    - channel_start_analyze: it is called when a filter is ready to process data
      exchanged through a channel. 2 new analyzers (a frontend and a backend)
      are attached to channels to call this callback. For a frontend filter, it
      is called before any other analyzer. For a backend filter, it is called
      when a backend is attached to a stream. So some processing cannot be
      filtered in that case.

    - channel_analyze: it is called before each analyzer attached to a channel,
      except analyzers responsible for data sending.

    - channel_end_analyze: it is called when all other analyzers have finished
      their processing. A new analyzer is attached to channels to call this
      callback. For a TCP stream, this is always the last one called. For an HTTP
      one, the callback is called when a request/response ends, so it is called
      once for each request/response.

* 'session_established' callback has been removed. Everything that is done in
  this callback can be handled by 'channel_start_analyze' on the response
  channel.

* 'http_pre_process' and 'http_post_process' callbacks have been replaced by
  'channel_analyze'.

* 'http_start' callback has been replaced by 'http_headers'. This new one is
  called just before headers sending and parsing of the body.

* 'http_end' callback has been replaced by 'channel_end_analyze'.

* It is possible to set a forwarder for TCP channels. It was already possible to
  do it for HTTP ones.

* Forwarders can partially consume forwardable data. For this reason a new
  HTTP message state was added before HTTP_MSG_DONE : HTTP_MSG_ENDING.

Now all filters can define corresponding callbacks (http_forward_data
and tcp_forward_data). Each filter owns 2 offsets relative to buf->p, next and
forward, to track, respectively, input data already parsed but not forwarded yet
by the filter and parsed data considered as forwarded by the filter. At any time,
we have the guarantee that a filter cannot parse or forward more input than
previous ones. And, of course, it cannot forward more input than it has
parsed. 2 macros have been added to retrieve these offsets: FLT_NXT and FLT_FWD.

In addition, 2 functions have been added to change the 'next size' and the
'forward size' of a filter. When a filter parses input data, it can alter these
data, so the size of these data can vary. This action has an effect on all
previous filters that must be handled. To do so, the function
'filter_change_next_size' must be called, passing the size variation. In the
same spirit, if a filter alters forwarded data, it must call the function
'filter_change_forward_size'. 'filter_change_next_size' can be called in
'http_data' and 'tcp_data' callbacks and only these ones. And
'filter_change_forward_size' can be called in 'http_forward_data' and
'tcp_forward_data' callbacks and only these ones. The data changes are the
filter's responsibility, but with some limitations. It must not change already
parsed/forwarded data or data that previous filters have not parsed/forwarded
yet.
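
A toy illustration of that bookkeeping with plain integers (HAProxy tracks these
offsets per filter relative to buf->p; none of the structures below are the real
ones): each filter has its own next/forward pair and may not go beyond the
previous filter.

#include <stdio.h>

#define NB_FILTERS 2

struct toy_flt {
        int next;      /* input parsed but not yet forwarded by this filter */
        int fwd;       /* input considered forwarded by this filter */
};

int main(void)
{
        struct toy_flt flt[NB_FILTERS] = { { 0, 0 }, { 0, 0 } };
        int avail = 100;   /* bytes available in the buffer */

        for (int i = 0; i < NB_FILTERS; i++) {
                /* a filter may not parse more than the previous one did... */
                int max_parse = (i == 0) ? avail : flt[i - 1].next;

                flt[i].next = max_parse;
                /* ...nor forward more than it parsed itself */
                flt[i].fwd = flt[i].next / 2;
                printf("filter %d: next=%d fwd=%d\n", i, flt[i].next, flt[i].fwd);
        }
        /* only what the last filter forwarded may leave the buffer */
        printf("forwardable: %d bytes\n", flt[NB_FILTERS - 1].fwd);
        return 0;
}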

Because filters can be used on backends, when the backend is set for a
stream, we add the filters defined for this backend to the filter list of the
stream. But we must only do that when the backend and the frontend of the stream
are not the same. Otherwise the same filters are added a second time, leading to
undefined behavior.

The HTTP compression code had to be moved.

This simplifies the http_response_forward_body function. To do so, the way the
data are forwarded has changed. Now, a filter (and only one) can forward data. In
a commit to come, this limitation will be removed to let all filters take part in
data forwarding. There are 2 new functions that filters should use to deal with
this feature:

 * flt_set_http_data_forwarder: This function sets the filter (using its id)
   that will forward data for the specified HTTP message. This is possible only
   if it was not already set by another filter _AND_ if no data was forwarded yet
   (msg->msg_state <= HTTP_MSG_BODY). It returns -1 if an error occurs.

 * flt_http_data_forwarder: This function returns the filter id that will
   forward data for the specified HTTP message. If there is no forwarder set, it
   returns -1.

When an HTTP data forwarder is set for the response, the HTTP compression is
disabled. Of course, this is not definitive.
2016-02-09 14:53:15 +01:00
Christopher Faulet
635c0adec2 BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
The serial number for a generated certificate was computed using the requested
servername, without any variable/random part. This is not a problem as long as
the certificate is not regenerated.

But if the cache is disabled or when the certificate is evicted from the cache,
we may need to regenerate it. It is important not to reuse the same serial
number for the new certificate. Otherwise clients (especially browsers) trigger a
warning because 2 certificates issued by the same CA have the same serial
number.

So now, the serial is a static variable initialized with now_ms (internal date
in milliseconds) and incremented at each new certificate generation.

(Ref MPS-2031)
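
Sketched with the public OpenSSL API (the patch itself modifies HAProxy's SSL
certificate-generation code, not this exact function): a counter is seeded once
and bumped for every generated certificate, so no two of them share a serial.

#include <stdio.h>
#include <time.h>
#include <openssl/asn1.h>
#include <openssl/x509.h>

/* Seeded once (here with the wall clock, standing in for HAProxy's
 * internal now_ms) and incremented per generated certificate. */
static unsigned long serial;

static X509 *generate_cert(void)
{
        X509 *cert = X509_new();

        if (!cert)
                return NULL;
        if (!serial)
                serial = (unsigned long)time(NULL);
        ASN1_INTEGER_set(X509_get_serialNumber(cert), (long)serial++);
        /* subject, SAN, key and signature would be filled in here */
        return cert;
}

int main(void)
{
        X509 *c1 = generate_cert();
        X509 *c2 = generate_cert();

        if (!c1 || !c2)
                return 1;
        printf("distinct serials: %s\n",
               ASN1_INTEGER_get(X509_get_serialNumber(c1)) !=
               ASN1_INTEGER_get(X509_get_serialNumber(c2)) ? "yes" : "no");
        X509_free(c1);
        X509_free(c2);
        return 0;
}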
2016-02-09 09:04:53 +01:00
Christopher Faulet
c34d19fc3c BUG: stream_interface: Reuse connection even if the output channel is empty
In the function 'si_connect', an existing connection is reused (and considered as
established) only when there are some pending data in the output channel.

This can be a problem when filters are used, because a filter can choose not to
forward data immediately. So when we try to initiate a connection to a server,
the output channel can be empty. In this situation, if the connection already
exists, it is not considered as established and nothing happens. If the stream
interface is in the state SI_ST_ASS, this leads to an infinite loop in
process_stream because it remains in this state.

This patch fixes this problem. Now, in 'si_connect', we always reuse an existing
connection, whether or not there are pending data in the output channel.
2016-02-03 14:22:55 +01:00
Willy Tarreau
581bf81d34 MEDIUM: pools: add a new flag to avoid rounding pool size up
Usually it's desirable to merge similarly sized pools, which is the
reason why their size is rounded up to the next multiple of 16. But
for the buffers this is problematic because we add the size of
struct buffer to the user-requested size, and the rounding results
in 8 extra bytes that are usable in the end. So the user gets more
bytes than asked for, and in case of SSL it results in short writes
for the extra bytes that are sent above multiples of 16 kB.

So we add a new flag MEM_F_EXACT to request that the size is not
rounded up when creating the entry. Thus it doesn't disable merging.
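
Roughly, the size computation looks like this (illustrative sketch, not the
actual create_pool() code; the flag values are placeholders):

#include <stdio.h>

#define MEM_F_SHARED 0x1   /* placeholder values, for illustration only */
#define MEM_F_EXACT  0x2   /* new flag: do not round the size up */

static unsigned int pool_entry_size(unsigned int size, unsigned int flags)
{
        if (!(flags & MEM_F_EXACT))
                size = (size + 15) & ~15U;   /* round up to a multiple of 16 */
        return size;
}

int main(void)
{
        /* a buffer plus a small descriptor; the sizes are only examples */
        printf("rounded: %u\n", pool_entry_size(16384 + 8, MEM_F_SHARED)); /* 16400 */
        printf("exact:   %u\n", pool_entry_size(16384 + 8, MEM_F_EXACT));  /* 16392 */
        return 0;
}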
2016-01-25 02:31:18 +01:00
Willy Tarreau
999f643ed2 BUG/MEDIUM: channel: fix miscalculation of available buffer space.
The function channel_recv_limit() relies on channel_reserved() which
itself relies on channel_in_transit(). Individually they're OK but
combined they're doing the wrong thing.

The problem is that we refrain from filling buffers while to_forward
is even much larger than the buffer because of a semantic issue along
the call chain. This is particularly visible when offloading SSL on
moderately large files (1 MB), though it is also visible on clear text.
Twice the number of recv() calls are made compared to what is needed,
and the typical performance drops by 15-20% in SSL in 1.6 and later,
and no directly measurable drop in 1.5 except when using strace.

There's no need for all these intermediate functions, so let's get
rid of them and reimplement channel_recv_limit() from scratch in a
safer way.

This fix needs to be backported to 1.6 and 1.5 (at least). Note that in
1.5 the function is called buffer_recv_limit() and it may differ a bit.
2016-01-25 02:31:18 +01:00
Thiago Farina
b1af23ebea MINOR: fix the return type for dns_response_get_query_id() function
This function should return a 16-bit type, as that is the type of the
DNS header id.
It also performs a big-endian uint16 unpack operation.
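
The unpack itself is just a big-endian 16-bit read, sketched standalone:

#include <stdio.h>
#include <stdint.h>

/* Read the 16-bit DNS header id, stored in network (big-endian) order. */
static uint16_t get_query_id(const unsigned char *resp)
{
        return (uint16_t)((resp[0] << 8) | resp[1]);
}

int main(void)
{
        const unsigned char resp[] = { 0xf3, 0x76 };

        printf("0x%04x\n", get_query_id(resp));   /* 0xf376 */
        return 0;
}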

Backport: can be backported to 1.6

Signed-off-by: Thiago Farina <tfarina@chromium.org>
Signed-off-by: Baptiste Assmann <bedis9@gmail.com>
2016-01-20 23:51:24 +01:00
Baptiste Assmann
22c4ed6937 MINOR: lru: new function to delete <nb> least recently used keys
Introduction of a new function in the LRU cache source file.
The purpose of this function is to delete a number of entries in the
cache. The number is defined by the caller, and the keys removed are
taken from the tail of the tree.
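
As an illustration with a plain doubly linked list (the patch itself works on
HAProxy's lru64 tree): deleting <nb> entries means repeatedly unlinking whatever
sits at the tail, i.e. the least recently used keys.

#include <stdio.h>
#include <stdlib.h>

struct lru_entry {
        int key;
        struct lru_entry *prev, *next;   /* head = most recent, tail = oldest */
};

/* Remove up to <nb> entries, always taking them from the tail. */
static void lru_kill_oldest(struct lru_entry **head, struct lru_entry **tail, int nb)
{
        while (nb-- > 0 && *tail) {
                struct lru_entry *old = *tail;

                *tail = old->prev;
                if (*tail)
                        (*tail)->next = NULL;
                else
                        *head = NULL;
                free(old);
        }
}

int main(void)
{
        struct lru_entry *head = NULL, *tail = NULL;

        for (int key = 0; key < 5; key++) {       /* key 4 ends up most recent */
                struct lru_entry *e = calloc(1, sizeof(*e));

                e->key = key;
                e->next = head;
                if (head)
                        head->prev = e;
                head = e;
                if (!tail)
                        tail = e;
        }
        lru_kill_oldest(&head, &tail, 3);         /* drops keys 0, 1 and 2 */
        for (struct lru_entry *e = head; e; e = e->next)
                printf("%d ", e->key);            /* prints: 4 3 */
        printf("\n");
        return 0;
}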
2016-01-11 07:31:35 +01:00
Willy Tarreau
898529b4a8 MEDIUM: tools: add csv_enc_append() to preserve the original chunk
We have csv_enc() but there's no way to append some CSV-encoded data
to an existing chunk, so here we modify the existing function for this
and create an inlined version of csv_enc() which first resets the output
chunk. It will be handy to append data to an existing chunk without
having to use an extra temporary chunk, or to encode multiple strings
into a single chunk with chunk_newstr().

The patch is quite small, in fact most changes are typo fixes in the
comments.
2016-01-06 20:58:55 +01:00
Willy Tarreau
70af633ebe MINOR: chunk: make chunk_initstr() take a const string
chunk_initstr() prepares a read-only chunk from a string of
fixed length. Thus it must be prepared to accept a read-only
string on the input, otherwise the caller has to force-cast
some const char* and that's not a good idea.
2016-01-06 20:58:55 +01:00
Willy Tarreau
601360b41d MINOR: chunks: add chunk_strcat() and chunk_newstr()
These two new functions will make it easier to manipulate small strings
from within functions, because at many places, multiple short strings
are needed which do not deserve a malloc() nor a free(), and alloca()
is often discouraged. Since we already have trash chunks, it's convenient
to be able to allocate substrings from a chunk and use them later since
our functions already perform all the length checks. chunk_newstr() adds
a trailing zero at the end of a chunk and returns the pointer to the next
character, which can be used as an independent string. chunk_strcat()
does what it says.
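
A self-contained sketch of the two semantics described above (a toy chunk, not
HAProxy's struct chunk): strcat appends and keeps a trailing zero, newstr
terminates the current string and returns a pointer to the next, independent one.

#include <stdio.h>
#include <string.h>

struct toychunk {
        char *str;
        int len;
        int size;
};

/* Append <s>, always keeping a trailing zero (not counted in len). */
static int toy_strcat(struct toychunk *c, const char *s)
{
        int l = (int)strlen(s);

        if (c->len + l + 1 > c->size)
                return 0;
        memcpy(c->str + c->len, s, l + 1);
        c->len += l;
        return 1;
}

/* Terminate the current string and return a pointer to the next one. */
static char *toy_newstr(struct toychunk *c)
{
        if (c->len + 1 > c->size)
                return NULL;
        c->str[c->len++] = '\0';
        return c->str + c->len;
}

int main(void)
{
        char storage[64];
        struct toychunk c = { .str = storage, .len = 0, .size = sizeof(storage) };

        toy_strcat(&c, "first");
        char *second = toy_newstr(&c);        /* "first" is now terminated */
        toy_strcat(&c, "second");

        printf("%s / %s\n", c.str, second);   /* first / second */
        return 0;
}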
2016-01-06 13:53:37 +01:00
Willy Tarreau
0b6044fa24 MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
Since this function bears the name of a well-known string function, it
must at least promise compatible semantics. Here it means always adding
the trailing zero so that anyone willing to use chunk->str as a regular
string can do it. Of course the zero is not counted in the chunk's length.
2016-01-06 13:53:37 +01:00
Willy Tarreau
f9476a5a30 BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
chunk_dup() was affected by two bugs at once related to dst->size :
  - first, it didn't check dst->size to know if it could free(dst->str),
    so using it on a statically allocated chunk would cause a free(constant)
    and crash the process ;

  - second, it didn't properly set dst->size, possibly causing smaller
    strings not to be properly reported in a chunk that was previously
    used for something else.

Fortunately, neither of these situations ever happened since the function
is rarely used.

In the process of doing this, we even allocate one more byte for a
trailing zero if the input chunk was not full, so that the copied
string can safely be reused by standard string functions.

The bug was introduced in 1.3.4 nine years ago with this commit :

  0f77253 ("[MINOR] store HTTP error messages into a chunk array")

It's better to backport this fix in case a future fix relies on it.
2016-01-04 20:47:27 +01:00
Christopher Faulet
a94e5a548c MINOR: filters/http: Use a wrapper function instead of stream_int_retnclose
The function http_reply_and_close has been added in proto_http.c to wrap calls
to stream_int_retnclose. This function will be modified when the filters are
added.
2015-12-28 16:49:36 +01:00
Thierry FOURNIER
ec9a58c709 BUILD/MINOR: regex: missing header
When HAProxy is compiled with pcre, strlen() is used, but <string.h>
is not included.

This patch must be backported to 1.6
2015-12-22 13:36:01 +01:00
Thierry FOURNIER
ca98866bcf BUG/MEDIUM: lua: Lua applets must not fetch samples using http_txn
If a sample fetch needing http_txn is called from an HTTP Lua applet,
the result will be invalid and may even cause a crash because some HTTP
data can be forwarded and the HTTP txn is no longer valid.

Here the solution is to ensure that a fetch called from Lua never
needs http_txn. This is done thanks to a new flag HLUA_F_MAY_USE_HTTP
which indicates whether or not it is safe to call a fetch which needs
HTTP.

This fix needs to be backported to 1.6.
2015-12-20 23:13:00 +01:00
Thierry FOURNIER
7fa0549a2b REORG/MINOR: lua: convert boolean "int" to bitfield
This patch converts a boolean "int" into a bitfield. The main
reason is to save space in the struct in case another flag is
required later.

Note that this patch is required for next fix and will need to be
backported to 1.6.
2015-12-20 23:13:00 +01:00
Willy Tarreau
7006045e48 BUG/MEDIUM: config: properly adjust maxconn with nbproc when memmax is forced
When memmax is forced using "-m", the per-process memory limit is enforced
using setrlimit(), but this value is not used to compute the automatic
maxconn limit. In addition, the per-process memory limit didn't consider
the fact that the shared SSL cache only needs to be accounted once.

The doc was also fixed to clearly state that "-m" is global and not per
process. It makes sense because people who use -m want to protect the
system's resources regardless of whatever appears in the configuration.
2015-12-14 13:03:09 +01:00
Willy Tarreau
9579d12f2e BUILD/MINOR: http: proto_http.h needs sample.h
Since commit fd7edd3 ("MINOR: Move http method enum from proto_http to sample")
proto_http.h needs to include sample.h. This can be backported to 1.6 though
it doesn't affect existing code.
2015-11-26 10:24:48 +01:00
Thierry FOURNIER
1db96672c4 BUILD: freebsd: double declaration
On FreeBSD, the macro LIST_PREV already exists in the header file
<sys/queue.h>, and this causes a build error.

This patch removes the macro before declaring it. This ensures
that the error doesn't occur.
2015-11-06 01:15:02 +01:00
Baptiste Assmann
e9544935e8 BUG/MINOR: http rule: http capture 'id' rule points to a non existing id
It is possible to create an http capture rule which points to a capture slot
id which does not exist.

The current patch prevents this when parsing the configuration and prevents
running a configuration which contains such rules.

This configuration is now invalid:

  frontend f
   bind :8080
   http-request capture req.hdr(User-Agent) id 0
   default_backend b

this one as well:

  frontend f
   bind :8080
   declare capture request len 32 # implicit id is 0 here
   http-request capture req.hdr(User-Agent) id 1
   default_backend b

It applies of course to both http-request and http-response rules.
2015-11-04 08:47:55 +01:00
James Brown
55f9ff11b5 MINOR: check: add agent-send server parameter
Causes HAProxy to emit a static string to the agent on every check,
so that you can independently control multiple services running
behind a single agent port.
2015-11-04 07:26:51 +01:00
Thierry FOURNIER
c4eebc8157 BUG/MEDIUM: lua: sample fetches based on response doesn't work
The direction (request or response) is not propagated in the
sample fetches called through Lua. This patch adds the direction
status to some structs (hlua_txn and hlua_smp) to make sure that
the sample fetches will be called with all the information.

The converters cannot access a TXN object, so they are not
impacted by the direction. However, the samples used as input of the
Lua converter wrapper are initialized with the direction. Thereby,
the struct smp stays consistent.
[wt: needs to be backported to 1.6]
2015-11-03 10:50:14 +01:00
Willy Tarreau
58102cf30b MEDIUM: memory: add accounting for failed allocations
We now keep a per-pool counter of failed memory allocations and
we report that, as well as the amount of memory allocated and used
on the CLI.
2015-10-28 16:24:21 +01:00
Willy Tarreau
de30a684ca DEBUG/MEDIUM: memory: add optional control pool memory operations
When DEBUG_MEMORY_POOLS is used, we now use the link pointer at the end
of the pool to store a pointer to the pool, and to control it during
pool_free2() in order to serve four purposes :
  - at any instant we can know what pool an object was allocated from
    when examining memory, hence how we should possibly decode it ;

  - it serves to detect double free when they happen, as the pointer
    cannot be valid after the element is linked into the pool ;

  - it serves to detect if an element is released in the wrong pool ;

  - it serves as a canary, to detect if some buffers experienced an
    overflow before being released.

All these elements will definitely help better troubleshoot strange
situations, or at least confirm that certain conditions did not happen.
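
The mechanism, sketched standalone (HAProxy's pool code differs in the details):
one extra pointer-sized word at the end of each object is written with the owning
pool's address on allocation and verified on release.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ex_pool {
        const char *name;
        size_t size;    /* user-visible object size */
};

static void *ex_pool_get(struct ex_pool *p)
{
        char *obj = malloc(p->size + sizeof(void *));

        /* the trailing pointer is both an ownership tag and a canary:
         * an overflow of the object will clobber it */
        memcpy(obj + p->size, &p, sizeof(p));
        return obj;
}

static void ex_pool_put(struct ex_pool *p, void *obj)
{
        struct ex_pool *owner;

        memcpy(&owner, (char *)obj + p->size, sizeof(owner));
        if (owner != p) {
                fprintf(stderr, "object %p not owned by pool '%s'\n", obj, p->name);
                abort();
        }
        free(obj);
}

int main(void)
{
        struct ex_pool sessions = { .name = "sessions", .size = 64 };
        void *obj = ex_pool_get(&sessions);

        ex_pool_put(&sessions, obj);   /* ok: the tag matches */
        return 0;
}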
2015-10-28 15:28:05 +01:00
Willy Tarreau
ac421118db DEBUG/MEDIUM: memory: optionally protect free data in pools
When debugging a core file, it's sometimes convenient to be able to
visit the released entries in the pools (typically last released
session). Unfortunately the first bytes of these entries are destroyed
by the link elements of the pool. And of course, most structures have
their most accessed elements at the beginning of the structure (typically
flags). Let's add a build-time option DEBUG_MEMORY_POOLS which allocates
an extra pointer in each pool to put the link at the end of each pool
item instead of the beginning.
2015-10-28 15:27:59 +01:00
Willy Tarreau
a84dcb8440 DEBUG/MINOR: memory: add a build option to disable memory pools sharing
Sometimes analysing a core file isn't easy due to shared memory pools.
Let's add a build option to disable this. It's not enabled by default,
it could be backported to older versions.
2015-10-28 15:27:55 +01:00
Andrew Hayworth
e6a4a329b8 MEDIUM: dns: Don't use the ANY query type
Basically, it's ill-defined and shouldn't really be used going forward.
We can't guarantee that resolvers will do the 'legwork' for us and
actually resolve CNAMES when we request the ANY query-type. Case in point
(obfuscated, clearly):

  PRODUCTION! ahayworth@secret-hostname.com:~$
  dig @10.11.12.53 ANY api.somestartup.io

  ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
  ; (1 server found)
  ;; global options: +cmd
  ;; Got answer:
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
  ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0

  ;; QUESTION SECTION:
  ;api.somestartup.io.                        IN      ANY

  ;; ANSWER SECTION:
  api.somestartup.io.         20      IN      CNAME api-somestartup-production.ap-southeast-2.elb.amazonaws.com.

  ;; AUTHORITY SECTION:
  somestartup.io.               166687  IN      NS      ns-1254.awsdns-28.org.
  somestartup.io.               166687  IN      NS      ns-1884.awsdns-43.co.uk.
  somestartup.io.               166687  IN      NS      ns-440.awsdns-55.com.
  somestartup.io.               166687  IN      NS      ns-577.awsdns-08.net.

  ;; Query time: 1 msec
  ;; SERVER: 10.11.12.53#53(10.11.12.53)
  ;; WHEN: Mon Oct 19 22:02:29 2015
  ;; MSG SIZE  rcvd: 242

HAProxy can't handle that response correctly.

Rather than try to build in support for resolving CNAMEs presented
without an A record in an answer section (which may be a valid
improvement further on), this change just skips ANY record types
altogether. A and AAAA are much more well-defined and predictable.

Notably, this commit preserves the implicit "Prefer IPV6 behavior."

Furthermore, ANY query type by default is a bad idea: (from Robin on
HAProxy's ML):
  Using ANY queries for this kind of stuff is considered by most people
  to be a bad practice since besides all the things you named it can
  lead to incomplete responses. Basically a resolver is allowed to just
  return whatever it has in cache when it receives an ANY query instead
  of actually doing an ANY query at the authoritative nameserver. Thus
  if it only received queries for an A record before you do an ANY query
  you will not get an AAAA record even if it is actually available since
  the resolver doesn't have it in its cache. Even worse if before it
  only got MX queries, you won't get either A or AAAA
2015-10-20 22:31:01 +02:00
Willy Tarreau
a5c51ac6a6 BUILD: properly report when USE_ZLIB and USE_SLZ are used together
Use #error here otherwise the errors are hard to spot for the casual
user.
2015-10-13 16:47:16 +02:00
Willy Tarreau
163d4620c6 MEDIUM: server: implement TCP_USER_TIMEOUT on the server
This is equivalent to commit 2af207a ("MEDIUM: tcp: implement tcp-ut
bind option to set TCP_USER_TIMEOUT") except that this time it works
on the server side. The purpose is to detect dead server connections
even when checks are rare, disabled, or after a soft reload (since
checks are disabled there as well), and to ensure client connections
will get killed faster.
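
At the socket level this boils down to a single setsockopt() on the outgoing
connection (a Linux-specific option, value in milliseconds), sketched here:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        unsigned int tcp_ut = 30000;   /* give up after 30s of unacked data */

        if (fd < 0)
                return 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                       &tcp_ut, sizeof(tcp_ut)) < 0)
                perror("setsockopt(TCP_USER_TIMEOUT)");
        /* connect() to the server would follow here */
        return 0;
}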
2015-10-13 16:18:27 +02:00