Commit Graph

1668 Commits

Simon Horman
bfb5d33fe6 MEDIUM: Add free_check() helper
Add free_check() helper to free the memory allocated by init_check().

Signed-off-by: Simon Horman <horms@verge.net.au>
2015-02-03 00:24:15 +01:00
Simon Horman
b1900d55df MEDIUM: Refactor init_check and move to checks.c
Refactor init_check so that an error string is returned
rather than alerts being printed by it. Also move
init_check to checks.c and provide a prototype to allow
it to be used from multiple C files.

Signed-off-by: Simon Horman <horms@verge.net.au>
2015-02-03 00:24:15 +01:00
Willy Tarreau
a0dc23f093 MEDIUM: http: implement http-request set-{method,path,query,uri}
This commit implements the following new actions :

- "set-method" rewrites the request method with the result of the
  evaluation of format string <fmt>. There should be very few valid reasons
  for having to do so as this is more likely to break something than to fix
  it.

- "set-path" rewrites the request path with the result of the evaluation of
  format string <fmt>. The query string, if any, is left intact. If a
  scheme and authority are found before the path, they are left intact as
  well. If the request doesn't have a path ("*"), this one is replaced with
  the format. This can be used to prepend a directory component in front of
  a path for example. See also "set-query" and "set-uri".

  Example :
      # prepend the host name before the path
      http-request set-path /%[hdr(host)]%[path]

- "set-query" rewrites the request's query string which appears after the
  first question mark ("?") with the result of the evaluation of format
  string <fmt>. The part prior to the question mark is left intact. If the
  request doesn't contain a question mark and the new value is not empty,
  then one is added at the end of the URI, followed by the new value. If
  a question mark was present, it will never be removed even if the value
  is empty. This can be used to add or remove parameters from the query
  string. See also "set-path" and "set-uri".

  Example :
      # replace "%3D" with "=" in the query string
      http-request set-query %[query,regsub(%3D,=,g)]

- "set-uri" rewrites the request URI with the result of the evaluation of
  format string <fmt>. The scheme, authority, path and query string are all
  replaced at once. This can be used to rewrite hosts in front of proxies,
  or to perform complex modifications to the URI such as moving parts
  between the path and the query string. See also "set-path" and
  "set-query".

All of them are handled by the same parser and the same exec function,
which is why they're merged all together. For once, instead of adding
even more entries to the huge switch/case, we used the new facility to
register action keywords. A number of the existing ones should probably
move there as well.
2015-01-23 20:27:41 +01:00
Willy Tarreau
15a53a4384 MEDIUM: regex: add support for passing regex flags to regex_exec_match()
This function (and its sister regex_exec_match2()) abstract the regex
execution but make it impossible to pass flags to the regex engine.
Currently we don't use them but we'll need to support REG_NOTBOL soon
(to indicate that we're not at the beginning of a line). So let's add
support for this flag and update the API accordingly.
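
For reference, REG_NOTBOL is the standard eflags bit of POSIX regexec(). A
minimal standalone illustration of the flag itself (not of haproxy's wrapper
API) :

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        regcomp(&re, "^foo", REG_EXTENDED);

        /* pretend we are in the middle of a line: '^' must not match here */
        int notbol = regexec(&re, "foo", 0, NULL, REG_NOTBOL);

        /* default behaviour: '^' matches at the start of the string */
        int bol = regexec(&re, "foo", 0, NULL, 0);

        printf("with REG_NOTBOL: %s, without: %s\n",
               notbol == REG_NOMATCH ? "no match" : "match",
               bol == 0 ? "match" : "no match");
        regfree(&re);
        return 0;
    }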
2015-01-22 14:24:53 +01:00
Willy Tarreau
469477879c MINOR: args: implement a new arg type for regex : ARGT_REG
This one will be used when a regex is expected. It is automatically
resolved after the parsing and compiled into a regex. Some optional
flags are supported in the type-specific flags that should be set by
the optional arg checker. One is used during the regex compilation :
ARGF_REG_ICASE to ignore case.
2015-01-22 14:24:53 +01:00
Willy Tarreau
085dafac5f MINOR: args: add type-specific flags for each arg in a list
These flags are meant to be used by arg checkers to pass out-of-band
information related to some args. A typical use is to indicate how a
regex is expected to be compiled/matched based on other arguments.
These flags are initialized to zero by default and it is up to the args
checkers to set them if needed.
2015-01-22 14:24:53 +01:00
Willy Tarreau
dbc79d0aed MEDIUM: args: increase arg type to 5 bits and limit arg count to 5
We'll soon need to add new argument types, and we don't use the current
limit of 7 arguments, so let's increase the arg type size to 5 bits and
reduce the arg count to 5 (3 max are used today).
2015-01-22 14:24:53 +01:00
Willy Tarreau
3d241e78a1 MEDIUM: args: use #define to specify the number of bits used by arg types and counts
This is in order to add new types. This patch does not change anything
else. Two remaining (harmless) occurrences of a count of 8 instead of 7
were fixed by this patch : empty_arg_list[] and the for() loop counting
args.
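
As a generic illustration of this kind of encoding (the macro names and bit
widths below are made up for the example, they are not necessarily haproxy's) :

    #include <stdio.h>

    #define MY_ARGT_BITS   5                         /* bits per argument type */
    #define MY_ARGT_MASK   ((1 << MY_ARGT_BITS) - 1)
    #define MY_ARGM_BITS   4                         /* bits for the arg count */
    #define MY_ARGM_MASK   ((1 << MY_ARGM_BITS) - 1)

    /* pack a count of 2 args plus their two types into a single mask */
    #define MY_ARG2(cnt, t1, t2) \
        ((cnt) | ((t1) << MY_ARGM_BITS) | ((t2) << (MY_ARGM_BITS + MY_ARGT_BITS)))

    int main(void)
    {
        unsigned int mask = MY_ARG2(2, 3, 7);

        printf("count=%u type1=%u type2=%u\n",
               mask & MY_ARGM_MASK,
               (mask >> MY_ARGM_BITS) & MY_ARGT_MASK,
               (mask >> (MY_ARGM_BITS + MY_ARGT_BITS)) & MY_ARGT_MASK);
        return 0;
    }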
2015-01-22 14:24:53 +01:00
Willy Tarreau
324f07f6dd MEDIUM: backend: add the crc32 hash algorithm for load balancing
Since we have it available, let's make it usable for load balancing;
it comes at no cost except 3 lines of documentation.
2015-01-20 19:48:14 +01:00
Willy Tarreau
c829ee48c7 MINOR: hash: add new function hash_crc32
This function will be used to perform CRC32 computations. This one was
loosely inspired by the crc32b function found here, and focuses on size and speed
at the same time :

    http://www.hackersdelight.org/hdcodetxt/crc.c.txt

Much faster table-based versions exist but are pointless for our usage
here: this hash already sustains gigabit speed, which is far faster than
what we'd ever need. Better preserve the CPU's cache instead.
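
As a hint of what such a compact, bit-at-a-time CRC-32 looks like (illustrative
sketch, the exact haproxy function may differ) :

    #include <stdio.h>

    /* bit-at-a-time CRC-32 using the reflected polynomial 0xEDB88320 */
    unsigned int crc32_bitwise(const void *input, int len)
    {
        const unsigned char *p = input;
        unsigned int crc = ~0U;

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
        }
        return ~crc;
    }

    int main(void)
    {
        /* the standard CRC-32 check value for "123456789" is 0xcbf43926 */
        printf("%08x\n", crc32_bitwise("123456789", 9));
        return 0;
    }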
2015-01-20 19:48:05 +01:00
Willy Tarreau
d025648f7c MAJOR: init: automatically set maxconn and/or maxsslconn when possible
If a memory size limit is enforced using "-m" on the command line and
one or both of maxconn / maxsslconn are not set, instead of using the
build-time values, haproxy now computes the number of sessions that can
be allocated depending on a number of parameters among which :

  - global.maxconn (if set)
  - global.maxsslconn (if set)
  - maxzlibmem
  - tune.ssl.cachesize
  - presence of SSL in at least one frontend (bind lines)
  - presence of SSL in at least one backend (server lines)
  - tune.bufsize
  - tune.cookie_len

The purpose is to ensure that haproxy will not run out of memory
when maxing out all parameters. If neither maxconn nor maxsslconn is
used, it will consider that 100% of the sessions involve SSL on sides
where it's supported. That means that it will typically optimize maxconn
for SSL offloading or SSL bridging on all connections. This generally
means that the simple act of enabling SSL in a frontend or in a backend
will significantly reduce the global maxconn but, in exchange, it
will guarantee that it will not fail.

All metrics may be enforced using #defines to accommodate variations in
SSL libraries or various allocation sizes.
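
As a rough, purely illustrative back-of-the-envelope (this is not the exact
formula applied by haproxy), the estimate boils down to a per-session memory
budget :

    per_session ~ 2 * tune.bufsize                 (request + response buffers)
                + fixed per-session/connection overhead
                + ssl_cost * nb_ssl_sides          (0, 1 or 2 sides doing SSL)

    maxconn     ~ (memory limit given with "-m") / per_session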
2015-01-15 21:45:22 +01:00
Willy Tarreau
d92aa5c44a MINOR: global: report information about the cost of SSL connections
An SSL connection takes some memory when it exists and during handshakes.
We measured up to 16kB for an established endpoint, and up to 76 extra kB
during a handshake. The SSL layer stores these values into the global
struct during initialization. If other SSL libs are used, it's easy to
change these values. Anyway they'll only be used as gross estimates in
order to guess the max number of SSL conns that can be established when
memory is constrained and the limit is not set.
2015-01-15 21:34:39 +01:00
Willy Tarreau
fce03113fa MINOR: global: always export some SSL-specific metrics
We'll need to know the number of SSL connections, their use and their
cost soon. In order to avoid getting tons of ifdefs everywhere, always
export SSL information in the global section. We add two flags to know
whether or not SSL is used in a frontend and in a backend.
2015-01-15 21:32:40 +01:00
Willy Tarreau
3ca1a883f9 MINOR: tools: add new round_2dig() function to round integers
This function rounds down an integer to the closest value having only
2 significant digits.
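
A plausible sketch of such a helper (assumed, not necessarily the committed
code) :

    /* round down to the closest value having only 2 significant digits,
     * e.g. 1278 -> 1200, 12345 -> 12000, 99 -> 99
     */
    unsigned int round_2dig(unsigned int i)
    {
        unsigned int mul = 1;

        while (i >= 100) {
            i /= 10;
            mul *= 10;
        }
        return i * mul;
    }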
2015-01-15 19:02:27 +01:00
Willy Tarreau
319f745ba0 MINOR: channel: rename bi_erase() to channel_truncate()
It applies to the channel and it doesn't erase outgoing data, only
pending unread data, which is strictly equivalent to what recv()
does with MSG_TRUNC, so that new name is more accurate and intuitive.
2015-01-14 20:32:59 +01:00
Willy Tarreau
b5051f8742 MINOR: channel: rename bi_avail() to channel_recv_max()
This name more clearly indicates that the function applies to a channel and not
to a buffer, and that what is returned may be used as a max number of
bytes to pass to recv().
2015-01-14 20:26:54 +01:00
Willy Tarreau
3f5096ddf2 MINOR: channel: rename buffer_max_len() to channel_recv_limit()
The name buffer_max_len() is ambiguous and misleading since the function
actually operates on the channel. The new name more accurately designates
the size limit for received data.
2015-01-14 20:21:43 +01:00
Willy Tarreau
a4178192b9 MINOR: channel: rename buffer_reserved() to channel_reserved()
This applies to the channel, not the buffer, so let's fix this name.
Warning, the function's name happens to be the same as the old one
which was mistakenly used during 1.5.
2015-01-14 20:21:12 +01:00
Willy Tarreau
3889fffe92 MINOR: channel: rename channel_full() to !channel_may_recv()
This function's name was poorly chosen and is confusing to the point of
being suspiciously used at some places. The operations it does always
consider the ability to forward pending input data before receiving new
data. This is not obvious at all, especially at some places where it was
used when consuming outgoing data to know if the buffer has any chance
to ever get the missing data. The code needs to be re-audited with that
in mind. Care must be taken with existing code since the polarity of the
function was switched with the renaming.
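
Purely to illustrate the inverted polarity at a hypothetical call site :

    /* before the rename: true when no more input can be received */
    if (channel_full(chn))
        return;

    /* after the rename: the condition must be negated */
    if (!channel_may_recv(chn))
        return;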
2015-01-14 18:41:33 +01:00
Willy Tarreau
ba0902ede4 CLEANUP: channel: rename channel_reserved -> channel_is_rewritable
channel_reserved is confusingly named. It is used to know whether or
not the rewrite area is left intact for situations where we want to
ensure we can use it before proceeding. Let's rename it to fix this
confusion.
2015-01-14 18:41:33 +01:00
Willy Tarreau
9c06ee4ccf BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
Option http-send-name-header is still hurting. If a POST request has to be
redispatched when this option is used, and the next server's name is larger
than the initial one, and the POST body fills the buffer, it becomes
impossible to rewrite the server's name in the buffer when redispatching.
In 1.4 this is worse: the process may crash because of a negative size
computation for the memmove().

The only solution to fix this is to refrain from eating the reserve before
we're certain that we won't modify the buffer anymore. And the condition for
that is that the connection is established.

This patch introduces "channel_may_send()" which helps to detect whether it's
safe to eat the reserve or not. This condition is used by channel_in_transit()
introduced by recent patches.

This patch series must be backported into 1.5, and a simpler version must be
backported into 1.4 where fixing the bug is much easier since there were no
channels by then. Note that in 1.4 the severity is major.
2015-01-14 16:08:45 +01:00
Willy Tarreau
27bb0e14a8 MEDIUM: channel: make bi_avail() use channel_in_transit()
This ensures that we rely on a sane computation for the buffer size.
2015-01-14 15:57:24 +01:00
Willy Tarreau
fe57834955 MEDIUM: channel: make buffer_reserved() use channel_in_transit()
This ensures that we rely on a sane computation for the buffer size.
2015-01-14 15:57:21 +01:00
Willy Tarreau
1a4484dec8 MINOR: channel: add channel_in_transit()
This function returns the amount of bytes in transit in a channel's buffer,
which is the amount of outgoing data plus the amount of incoming data bound
to the forward limit.
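
A naive sketch following that definition, assuming the chn->buf->i, chn->buf->o
and chn->to_forward fields mentioned elsewhere in this series (the real function
has to guard against overflows and infinite forwarding) :

    static inline int channel_in_transit(const struct channel *chn)
    {
        int ret = chn->buf->o;          /* outgoing data */

        /* plus the incoming data already scheduled for forwarding */
        ret += (chn->to_forward < chn->buf->i) ? chn->to_forward : chn->buf->i;
        return ret;
    }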
2015-01-14 13:51:48 +01:00
Willy Tarreau
bb3f994f1a BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
We know that all incoming data are going to be purged if to_forward
is greater than them, not only if greater than the buffer size. This
bug has no direct impact on this version, but it contributes to some
bugs affecting http-send-name-header since 1.4. This fix will have to
be backported down to 1.4 albeit in a different form.
2015-01-14 13:50:24 +01:00
Willy Tarreau
0428a146c0 BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
The buffer_max_len() function is subject to an integer overflow in this
computation :

    int ret = global.tune.maxrewrite - chn->to_forward - chn->buf->o;

  - chn->to_forward may be up to 2^31 - 1
  - chn->buf->o may be up to chn->buf->size
  - global.tune.maxrewrite is by definition smaller than chn->buf->size

Thus here we can end up subtracting a value close to (2^31 + buf->o) from
something only slightly positive, and the signed subtraction wraps around,
leaving ret much larger than expected.
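
For instance, with hypothetical values maxrewrite = 1024, to_forward = 2^31 - 1
and buf->o = 8192 :

    1024 - 2147483647 - 8192 = -2147490815    (below INT_MIN, does not fit)
                             -> wraps around to about +2147476481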

Fortunately in 1.5 and 1.6, this is only used by bi_avail() which itself
is used by applets which do not set high values for to_forward so this
problem does not happen there. However in 1.4 the equivalent computation
was used to limit the size of a read and can result in a read overflow
when combined with the nasty http-send-name-header feature.

This fix must be backported to 1.5 and 1.4.
2015-01-14 12:04:34 +01:00
Willy Tarreau
75abcb3106 MINOR: config: extend the default max hostname length to 64 and beyond
Some users reported that the default max hostname length of 32 is too
short in some environments. This patch does two things :

  - it relies on the system's max hostname length as found in MAXHOSTNAMELEN
    if it is set. This is the most logical thing to do as the system libs
    generally present the appropriate value supported by the system. This
    value is 64 on Linux and 256 on Solaris, to give a few examples.

  - otherwise it defaults to 64

It is still possible to override this value by defining MAX_HOSTNAME_LEN at
build time. After some observation time, this patch may be backported to
1.5 if it does not cause any build issue, as it is harmless and may help
some users.
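
The fallback logic can be pictured as the following preprocessor sketch
(assumed, not copied verbatim from the patch) :

    #include <sys/param.h>           /* where MAXHOSTNAMELEN usually lives */

    #ifndef MAX_HOSTNAME_LEN         /* still overridable at build time */
    # if defined(MAXHOSTNAMELEN)
    #  define MAX_HOSTNAME_LEN MAXHOSTNAMELEN
    # else
    #  define MAX_HOSTNAME_LEN 64
    # endif
    #endif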
2015-01-14 11:52:34 +01:00
Willy Tarreau
094af4e16e MINOR: logs: add a new per-proxy "log-tag" directive
This is equivalent to what was done in commit 48936af ("[MINOR] log:
ability to override the syslog tag") but this time instead of doing
this globally, it does it per proxy. The purpose is to be able to use
a separate log tag for various proxies (eg: make it easier to route
log messages depending on the customer).
2015-01-07 15:03:42 +01:00
Willy Tarreau
3c23a85550 CLEANUP: session: remove session_from_task()
Since commit 3dd6a25 ("MINOR: stream-int: retrieve session pointer from
stream-int"), we can get the session from the task, so let's get rid of
this less obvious function.
2014-12-28 12:19:57 +01:00
Cyril Bonté
ac92a065d7 MINOR: checks: update dynamic environment variables in external checks
Commit 9ede66b0 introduced an environment variable (HAPROXY_SERVER_CURCONN) that
was supposed to be dynamically updated, but it was set only once, during its
initialization.

Most of the code provided in this previous patch has been rewritten in order to
easily update the environment variables without reallocating memory during each
check.

Now, HAPROXY_SERVER_CURCONN will contain the current number of connections on
the server at the time of the check.
2014-12-28 01:22:56 +01:00
Willy Tarreau
b034b2598d MEDIUM: channel: implement a zero-copy buffer transfer
bi_swpbuf() swaps the buffer passed as an argument with the one attached to
the channel, but only if this last one is empty. The idea is to avoid a
copy when buffers can simply be swapped.
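
A hypothetical sketch of the swap, assuming the buf->i/buf->o fields used
elsewhere in this series :

    static inline void bi_swpbuf(struct channel *chn, struct buffer **buf)
    {
        struct buffer *old = chn->buf;

        if (old->i || old->o)
            return;           /* channel buffer not empty: keep the copy path */

        chn->buf = *buf;      /* zero-copy: hand the filled buffer to the channel */
        *buf = old;           /* and give the empty one back to the caller */
    }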
2014-12-24 23:47:33 +01:00
Willy Tarreau
33cb065348 MINOR: config: implement global setting tune.buffers.limit
This setting is used to limit memory usage without causing the alloc
failures caused by "-m". Unexpectedly, tests have shown a performance
boost of up to about 18% on HTTP traffic when limiting the number of
buffers to about 10% of the amount of concurrent connections.

tune.buffers.limit <number>
  Sets a hard limit on the number of buffers which may be allocated per process.
  The default value is zero which means unlimited. The minimum non-zero value
  will always be greater than "tune.buffers.reserve" and should ideally always
  be about twice as large. Forcing this value can be particularly useful to
  limit the amount of memory a process may take, while retaining a sane
  behaviour. When this limit is reached, sessions which need a buffer wait for
  another one to be released by another session. Since buffers are dynamically
  allocated and released, the waiting time is very short and not perceptible
  provided that limits remain reasonable. In fact sometimes reducing the limit
  may even increase performance by increasing the CPU cache's efficiency. Tests
  have shown good results on average HTTP traffic with a limit to 1/10 of the
  expected global maxconn setting, which also significantly reduces memory
  usage. The memory savings come from the fact that a number of connections
  will not allocate 2*tune.bufsize. It is best not to touch this value unless
  advised to do so by an haproxy core developer.
2014-12-24 23:47:33 +01:00
Willy Tarreau
a24adf0795 MAJOR: session: only wake up as many sessions as available buffers permit
We've already experimented with three wake up algorithms when releasing
buffers : the first naive one used to wake up far too many sessions,
causing many of them not to get any buffer. The second approach which
was still in use prior to this patch consisted in waking up either 1
or 2 sessions depending on the number of FDs we had released. And this
was still inaccurate. The third one tried to cover the accuracy issues
of the second and took into consideration the number of FDs the sessions
would be willing to use, but most of the time we ended up waking up too
many of them for nothing, or deadlocking by lack of buffers.

This patch completely removes the need to allocate two buffers at once.
Instead it splits allocations into critical and non-critical ones and
implements a reserve in the pool for this. The deadlock situation happens
when all buffers are allocated for requests pending in a maxconn-limited
server queue, because then there's no more way to allocate buffers for
responses, and these responses are critical to release the server's
connection in order to release the pending requests. In fact maxconn on
a server creates a dependence between sessions and particularly between
oldest session's responses and latest session's requests. Thus, it is
mandatory to get a free buffer for a response in order to release a
server connection, which in turn permits a request buffer to be released.

Since we definitely have non-symmetrical buffers, we need to implement
this logic in the buffer allocation mechanism. What this commit does is
implement a reserve of buffers which can only be allocated for responses
and that will never be allocated for requests. This is made possible by
the requester indicating how much margin it wants to leave after the
allocation succeeds. Thus it is a cooperative allocation mechanism : the
requester (process_session() in general) prefers not to get a buffer in
order to respect other sessions' need for response buffers. The session management
code always knows if a buffer will be used for requests or responses, so
that is not difficult :

  - either there's an applet on the initiator side and we really need
    the request buffer (since currently the applet is called in the
    context of the session)

  - or we have a connection and we really need the response buffer (in
    order to support building and sending an error message back)

This reserve ensures that we don't take all allocatable buffers for
requests waiting in a queue. The downside is that all the extra buffers
are really allocated to ensure they can be allocated. But with small
values it is not an issue.

With this change, we don't observe any more deadlocks even when running
with maxconn 1 on a server under severely constrained memory conditions.

The code becomes a bit tricky, it relies on the scheduler's run queue to
estimate how many sessions are already expected to run so that it doesn't
wake up everyone with too few resources. A better solution would probably
consist in having two queues, one for urgent requests and one for normal
requests. A failed allocation for a session dealing with an error, a
connection event, or the need for a response (or request when there's an
applet on the left) would go to the urgent request queue, while other
requests would go to the other queue. Urgent requests would be served
from 1 entry in the pool, while the regular ones would be served only
according to the reserve. Despite not yet having this, it works
remarkably well.

This mechanism is quite efficient, we don't perform too many wake up calls
anymore. For 1 million sessions elapsed during massive memory contention,
we observe about 4.5M calls to process_session() compared to 4.0M without
memory constraints. Previously we used to observe up to 16M calls, which
roughly means 12M failures.

During a test run under high memory constraints (limit enforced to 27 MB
instead of the 58 MB normally needed), performance used to drop by 53% prior
to this patch. Now with this patch instead it *increases* by about 1.5%.

The best effect of this change is that by limiting the memory usage to about
2/3 to 3/4 of what is needed by default, it's possible to increase performance
by up to about 18% mainly due to the fact that pools are reused more often
and remain hot in the CPU cache (observed on regular HTTP traffic with 20k
objects, buffers.limit = maxconn/10, buffers.reserve = limit/2).

Below is an example of scenario which used to cause a deadlock previously :
  - connection is received
  - two buffers are allocated in process_session() then released
  - one is allocated when receiving an HTTP request
  - the second buffer is allocated then released in process_session()
    for request parsing then connection establishment.
  - poll() says we can send, so the request buffer is sent and released
  - process session gets notified that the connection is now established
    and allocates two buffers then releases them
  - all other sessions do the same till one cannot get the request buffer
    without hitting the margin
  - and now the server responds. stream_interface allocates the response
    buffer and manages to get it since it's higher priority being for a
    response.
  - but process_session() cannot allocate the request buffer anymore

  => We could end up with all buffers used by responses so that none may
     be allocated for a request in process_session().

When the applet processing leaves the session context, the test will have
to be changed so that we always allocate a response buffer regardless of
the left side (eg: H2->H1 gateway). A final improvement would consists in
being able to only retry the failed I/O operation without waking up a
task, but to date all experiments to achieve this have proven not to be
reliable enough.
2014-12-24 23:47:33 +01:00
Willy Tarreau
bf883e0aa7 MAJOR: session: implement a wait-queue for sessions who need a buffer
When session_alloc_buffers() fails to allocate one or two buffers,
it subscribes the session to buffer_wq, and waits for another session
to release buffers. It's then removed from the queue and woken up with
TASK_WOKEN_RES, and can attempt its allocation again.

We decide to try to wake as many waiters as we release buffers so
that if we release 2 and two waiters need only once, they both have
their chance. We must never come to the situation where we don't wake
enough tasks up.

It's common to release buffers after the completion of an I/O callback,
which can happen even if the I/O could not be performed due to a partial
failure on memory allocation. In this situation, we don't want to move
out of the wait queue the session that was just added, otherwise it
will never get any buffer. Thus, we only force ourselves out of the
queue when freeing the session.

Note: at the moment, since session_alloc_buffers() is not used, no task
is subscribed to the wait queue.
2014-12-24 23:47:33 +01:00
Willy Tarreau
656859d478 MEDIUM: session: implement a basic atomic buffer allocator
This patch introduces session_alloc_recv_buffer(), session_alloc_buffers()
and session_release_buffers() whose purpose will be to allocate missing
buffers and release unneeded ones around the process_session() and during
I/O operations.

I/O callbacks only need a single buffer for recv operations and none
for send. However we still want to ensure that we don't pick the last
buffer. That's what session_alloc_recv_buffer() is for.

This allocator is atomic in that it always ensures we can get 2 buffers
or fails. Here, if any of the buffers is not ready and cannot be
allocated, the operation is cancelled. The purpose is to guarantee that
we don't enter into the deadlock where all buffers are allocated by the
same side of all sessions.

A queue will have to be implemented for failed allocations. For now
they're just reported as failures.
2014-12-24 23:47:32 +01:00
Willy Tarreau
f4718e8ec0 MEDIUM: buffer: implement b_alloc_margin()
This function is used to allocate a buffer and ensure that we leave
some margin after it in the pool. The function is not obvious. While
we allocate only one buffer, we want to ensure that at least two remain
available after our allocation. The purpose is to ensure we'll never
enter a deadlock where all sessions allocate exactly one buffer, and
none of them will be able to allocate the second buffer needed to build
a response in order to release the first one.

We also take care of remaining fast in the fast path by first
checking whether or not there is enough margin, in which case we only
rely on b_alloc_fast() which is guaranteed to succeed. Otherwise we
take the slow path using pool_refill_alloc().
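
A simplified sketch of that logic (the pool field names and the exact call
sequence are assumptions, not the verbatim patch) :

    struct buffer *b_alloc_margin(struct buffer **buf, int margin)
    {
        struct buffer *b;

        /* fast path: enough spare entries left, b_alloc_fast() cannot fail
         * (pool fields ->allocated, ->used and ->size are assumed here)
         */
        if (pool2_buffers->allocated - pool2_buffers->used > margin)
            return b_alloc_fast(buf);

        /* slow path: grow the pool while keeping <margin> entries spare */
        b = pool_refill_alloc(pool2_buffers, margin);
        if (!b)
            return NULL;

        b->size = pool2_buffers->size - sizeof(struct buffer);
        *buf = b;
        return b;
    }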
2014-12-24 23:47:32 +01:00
Willy Tarreau
620bd6c88e MINOR: buffer: implement b_alloc_fast()
This function allocates a buffer and replaces *buf with this buffer. If
no memory is available, &buf_wanted is used instead. No check is made
as to whether *buf already pointed to another buffer. The allocated buffer
is returned, or NULL in case no memory is available. The difference with
b_alloc() is that this function only picks from the pool and never calls
malloc(), so it can fail even if some memory is available. It is the
caller's job to refill the buffer pool if needed.
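
Following that description, a sketch could look like this (assumed field names,
not the verbatim patch) :

    static inline struct buffer *b_alloc_fast(struct buffer **buf)
    {
        struct buffer *b;

        *buf = &buf_wanted;                   /* report the failure by default */
        b = pool_get_first(pool2_buffers);    /* pool only, never malloc()     */
        if (!b)
            return NULL;

        /* pool field ->size is assumed for the per-entry size */
        b->size = pool2_buffers->size - sizeof(struct buffer);
        b_reset(b);
        *buf = b;
        return b;
    }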
2014-12-24 23:47:32 +01:00
Willy Tarreau
4428a29e52 MEDIUM: channel: do not report full when buf_empty is present on a channel
Till now we'd consider a buffer full even if it had size==0 due to pointing
to buf.size. Now we change this : if buf_wanted is present, it means that we
have already tried to allocate a buffer but failed. Thus the buffer must be
considered full so that we stop trying to poll for reads on it. Otherwise if
it's empty, it's buf_empty and we report !full since we may allocate it on
the fly.
2014-12-24 23:47:32 +01:00
Willy Tarreau
f2f7d6b27b MEDIUM: buffer: add a new buf_wanted dummy buffer to report failed allocations
Doing so ensures that even when no memory is available, we leave the
channel in a sane condition. There's a special case in proto_http.c
regarding the compression: we simply pre-allocate the tmpbuf to point
to the dummy buffer. Not reusing &buf_empty for this allows the rest
of the code to differentiate an empty buffer that's not used from an
empty buffer that results from a failed allocation which has the same
semantics as a buffer full.
2014-12-24 23:47:32 +01:00
Willy Tarreau
2a4b54359b MEDIUM: buffer: always assign a dummy empty buffer to channels
Channels are now created with a valid pointer to a buffer before the
buffer is allocated. This buffer is a global one called "buf_empty" and
of size zero. Thus it prevents any activity from being performed on
the buffer and still ensures that chn->buf may always be dereferenced.
b_free() also resets the buffer to &buf_empty, and was split into
b_drop() which does not reset the buffer.
2014-12-24 23:47:32 +01:00
Willy Tarreau
7dfca9daec MINOR: buffer: only use b_free to release buffers
We don't call pool_free2(pool2_buffers) anymore, we only call b_free()
to do the job. This ensures that we can start to centralize the releasing
of buffers.
2014-12-24 23:47:32 +01:00
Willy Tarreau
e583ea583a MEDIUM: buffer: use b_alloc() to allocate and initialize a buffer
b_alloc() now allocates a buffer and initializes it to the size specified
in the pool minus the size of the struct buffer itself. This ensures that
callers do not need to care about buffer details anymore. Also this never
applies memory poisoning, which is slow and useless on buffers.
2014-12-24 23:47:32 +01:00
Willy Tarreau
474cf54a97 MINOR: buffer: reset a buffer in b_reset() and not channel_init()
We'll soon need to be able to switch buffers without touching the
channel, so let's move buffer initialization out of channel_init().
We had the same in compression.c.
2014-12-24 23:47:31 +01:00
Willy Tarreau
3dd6a25323 MINOR: stream-int: retrieve session pointer from stream-int
sess_from_si() does this via the owner (struct task). It works because
all stream ints belong to a task nowadays.
2014-12-24 23:47:31 +01:00
Willy Tarreau
a885f6dc65 MEDIUM: memory: improve pool_refill_alloc() to pass a refill count
Till now this function would only allocate one entry at a time. But with
dynamic buffers we'd like to allocate the number of missing entries to
properly refill the pool.

Let's modify it to take a minimum amount of available entries. This means
that when we know we need at least a number of available entries, we can
ask to allocate all of them at once. It also ensures that we don't move
the pointers back and forth between the caller and the pool, and that we
don't call pool_gc2() for each failed malloc. Instead, it's called only
once and the malloc is only allowed to fail once.
2014-12-24 23:47:31 +01:00
Willy Tarreau
0262241e26 MINOR: memory: cut pool allocator in 3 layers
pool_alloc2() used to pick the entry from the pool, fall back to
pool_refill_alloc(), and to perform the poisonning itself, which
pool_refill_alloc() was also doing. While this led to optimal
code size, it imposes memory poisonning on the buffers as well,
which is extremely slow on large buffers.

This patch cuts the allocator in 3 layers :
  - a layer to pick the first entry from the pool without falling back to
    pool_refill_alloc() : pool_get_first()
  - a layer to allocate a dirty area by falling back to pool_refill_alloc()
    but never performing the poisoning : pool_alloc_dirty()
  - pool_alloc2() which calls the latter and optionally poisons the area

No functional changes were made.
2014-12-24 23:47:31 +01:00
Willy Tarreau
e430e77dfd CLEANUP: memory: replace macros pool_alloc2/pool_free2 with functions
Using inline functions here makes the code more readable and reduces its
size by about 2 kB.
2014-12-24 23:47:31 +01:00
Willy Tarreau
62405a2155 CLEANUP: memory: remove dead code
The very old pool management code has not been used for the last 7 years
and is still polluting the file. Get rid of it now.
2014-12-24 23:47:31 +01:00
Willy Tarreau
3dd717cd5d CLEANUP: lists: remove dead code
Remove the code dealing with the old dual-linked lists imported from
librt that has remained unused for the last 8 years. Now everything
uses the linux-like circular lists instead.
2014-12-24 23:47:31 +01:00
Godbach
f2dd68d0e0 DOC: fix a few typos
include/types/proto_http.h: hwen -> when
include/types/server.h: SRV_ST_DOWN -> SRV_ST_STOPPED
src/backend.c: prefer-current-server -> prefer-last-server

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2014-12-10 05:34:55 +01:00