Commit Graph

559 Commits

Author SHA1 Message Date
Willy Tarreau
04ff9f105f MINOR: http: add full-length header fetch methods
The req.hdr and res.hdr fetch methods do not work well on headers which
are allowed to contain commas, such as User-Agent, Date or Expires.
More specifically, full-length matching is impossible if a comma is
present.

This patch introduces 4 new fetch functions which are designed to work
with these full-length headers :
  - req.fhdr, req.fhdr_cnt
  - res.fhdr, res.fhdr_cnt

These do not stop at commas and return the full-length header values.
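For illustration only (the ACL name, the matched value and the rule below are
made up, not part of the patch), a full-length match could look like :

    frontend fe_http
        mode http
        # illustrative example: ACL name and matched value are placeholders
        acl ua_curl req.fhdr(User-Agent) -m str curl/7.29.0
        http-request deny if ua_curl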
2013-06-10 18:39:42 +02:00
Willy Tarreau
379357af58 BUG/MAJOR: http: always ensure response buffer has some room for a response
Since 1.5-dev12 and commit 3bf1b2b8 (MAJOR: channel: stop relying on
BF_FULL to take action), the HTTP parser switched to channel_full()
instead of BF_FULL to decide whether a buffer had enough room to start
parsing a request or response. The problem is that channel_full()
intentionally ignores outgoing data, so a corner case exists where a
large response might still be left in a response buffer with just a
few bytes left (much less than the reserve), enough to accept a second
response past the last data, but not enough to permit the HTTP processor
to add some headers. Since all the processing relies on this space being
available, we can get some random crashes when clients pipeline requests.

The analysis of a core from haproxy configured with 20480 bytes buffers
shows this : with enough "luck", when sending back the response for the
first request, the client is slow, the TCP window is congested, the socket
buffers are full, and haproxy's buffer fills up. We still have 20230 bytes
of response data in a 20480 response buffer. The second request is sent to
the server, which returns 214 bytes that fit in the small 250 bytes left
in this buffer. And the buffer arrangement makes it possible to escape all
the controls in http_wait_for_response() :

    |<------ response buffer = 20480 bytes ------>|
    [ 2/2  | 3 | 4 |          1/2                 ]
           ^ start of circular buffer

      1/2 = beginning of previous response (18240)
      2/2 = end of previous response       (1990)
        3 = current response               (214)
        4 = free space                     (36)

  - channel_full() returns false (20230 bytes are going to leave)
  - the response headers do not wrap at the end of the buffer
  - the remaining linear room after the headers is larger than the
    reserve, because it's the previous response which wraps :
  => response is processed

Header rewriting causes it to reach 260 bytes, 10 bytes larger than what
the buffer could hold. So all computations during header addition are
wrong and lead to the corruption we've observed.

All the conditions are very hard to meet (which explains why it took
almost one year for this bug to show up) and are almost impossible to
reproduce on purpose on a test platform. But the bug is clearly there.

This issue was reported by Dinko Korunic who kindly devoted a lot of
time to provide countless traces and cores, and to experiment with
troubleshooting patches to knock the bug down. Thanks Dinko!

No backport is needed, but all 1.5-dev versions between dev12 and dev18
inclusive must be upgraded. A workaround is to set option forceclose to
prevent pipelined requests from being processed.
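A minimal sketch of that workaround (placement in a defaults section is
illustrative) :

    defaults
        mode http
        # close each connection after its response so requests cannot be pipelined
        option forceclose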
2013-06-08 13:14:17 +02:00
Willy Tarreau
d5ca9abb0d MINOR: counters: make it easier to extend the amount of tracked counters
By properly assigning the flags and values, it becomes easier to add
more tracked counters, for example for experimentation. It also slightly
reduces the code and the number of tests. No counters were added with
this patch.
2013-05-28 17:43:40 +02:00
de Lafond Guillaume
88c278fadf MEDIUM: stats: add proxy name filtering on the statistic page
This patch adds a "scope" box in the statistics page in order to
display only proxies with a name that contains the requested value.
The scope filter is preserved across all clicks on the page.
2013-04-15 22:50:33 +02:00
Willy Tarreau
a4312fa28e MAJOR: sample: maintain a per-proxy list of the fetch args to resolve
While ACL args were resolved after the whole config was parsed, this was not
the case for sample fetch args, which are now used almost everywhere.

The issue is that ACLs now solely rely on sample fetches, so their args
resolving doesn't work anymore. And many fetches involving a server, a
proxy or a userlist don't work at all.

The real issue is that at the bottom layers we have no information about
proxies, line numbers, even ACLs in order to report understandable errors,
and that at the top layers we have no visibility over the locations where
fetches are referenced (think log node).

After multiple unsatisfying solution attempts, we now have a new
concept of args list. The principle is that every proxy has a list head
which contains a number of indications such as the config keyword, the
context where it's used, the file and line number, etc... and a list of
arguments. This list head is of the same type as the elements, so it
serves as a template for adding new elements. This way, it is filled from
top to bottom by the callers with the information they have (eg: line
numbers, ACL name, ...) and the lower layers just have to duplicate it and
add an element when they face an argument they cannot resolve yet.

Then at the end of the configuration parsing, a loop passes over each
proxy's list and resolves all the args in sequence. And this way there is
all necessary information to report verbose errors.

The first immediate benefit is that for the first time we got very precise
location of issues (arg number in a keyword in its context, ...). Second,
in order to do this we had to parse log-format and unique-id-format a bit
earlier, so that was a great opportunity for doing so when the directives
are encountered (unless it's a default section). This way, the recorded
line numbers for these args are the ones of the place where the log format
is declared, not the end of the file.

Userlists report slightly more information now. They're the only remaining
ones in the ACL resolving function.
2013-04-03 02:13:02 +02:00
Willy Tarreau
93fddf1dbc MEDIUM: acl: have a pointer to the keyword name in acl_expr
The acl_expr struct used to hold a pointer to the ACL keyword. But since
we now have all the relevant pointers, we don't need that anymore; we just
need a pointer to the keyword name as a string in order to return warnings
and error messages.

So let's change this in order to remove the dependency on the acl_keyword
struct from acl_expr.

During this change, acl_cond_kw_conflicts() used to return a pointer to an
ACL keyword but had to be changed to return a const char* for the same reason.
2013-04-03 02:13:01 +02:00
Willy Tarreau
a91d0a583c MAJOR: acl: convert all ACL requires to SMP use+val instead of ->requires
The ACLs now use the fetch's ->use and ->val to decide upon compatibility
between the place where they are used and where the information is fetched.
The code is capable of reporting warnings about very fine incompatibilities
between certain fetches and an exact usage location, so it is expected that
some new warnings will be emitted on some existing configurations.

Two degrees of detection are provided :
  - detecting ACLs that never match
  - detecting keywords that are ignored

All tests show that this seems to work well, though bugs are still possible.
2013-04-03 02:13:00 +02:00
Willy Tarreau
bf8e251077 MINOR: sample: provide a function to report the name of a sample check point
We need to put names on places where samples are used in order to emit warnings
and errors. Let's do that now.
2013-04-03 02:13:00 +02:00
Willy Tarreau
25320b2906 MEDIUM: proxy: remove acl_requires and just keep a flag "http_needed"
Proxy's acl_requires was a copy of all bits taken from ACLs, but we'll
get rid of ACL flags and only rely on sample fetches soon. The proxy's
acl_requires was only used to allocate an HTTP context when needed, and
was even forced in HTTP mode. So better have a flag which exactly says
what it's supposed to be used for.
2013-04-03 02:13:00 +02:00
Willy Tarreau
8ed669b12a MAJOR: acl: make all ACLs reference the fetch function via a sample.
ACL fetch functions used to directly reference a fetch function. Now
that all ACL fetches have their sample fetches equivalent, we can make
ACLs reference a sample fetch keyword instead.

In order to simplify the code, a sample keyword name may be NULL if it
is the same as the ACL's, which is the most common case.

A minor change appeared: http_auth always expects one argument, though
the ACL allowed it to be missing and only reported it afterwards, so the
ACL was fixed to match this. This is not really a bug.
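A hedged sketch of the expected form with its mandatory userlist argument
(all names below are placeholders) :

    userlist admins
        user alice insecure-password secret

    backend bk_protected
        # illustrative only: userlist, ACL and realm names are made up
        acl auth_ok http_auth(admins)
        http-request auth realm Restricted if !auth_ok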
2013-04-03 02:12:58 +02:00
Willy Tarreau
d4c33c8889 MEDIUM: samples: move payload-based fetches and ACLs to their own file
The file acl.c is a real mess: it contains both functions to parse and
process ACLs, and some sample extraction functions which act on buffers.
Some other payload analysers were arbitrarily dispatched to proto_tcp.c.

So now we're moving all payload-based fetches and ACLs to payload.c,
which is capable of extracting data from buffers and relies only on
protocol-independent code. That way we can safely inflate this file and
only use the other ones when some fetches are really specific (eg:
HTTP, SSL, ...).

As a result of this cleanup, the following new sample fetches became
available even if they're not really useful :

  always_false, always_true, rep_ssl_hello_type, rdp_cookie_cnt,
  req_len, req_ssl_hello_type, req_ssl_sni, req_ssl_ver, wait_end

The function 'acl_fetch_nothing' was wrong and never used anywhere so it
was removed.

The "rdp_cookie" sample fetch used to have a mandatory argument while it
was optional in ACLs, which are supposed to iterate over RDP cookies. So
we're making it optional as a fetch too, and it will return the first one.
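An illustrative sketch of both forms (table sizing and cookie name are
placeholders) :

    backend bk_rdp
        mode tcp
        # illustrative only: sizing values are made up
        stick-table type string len 32 size 10k expire 1h
        stick on rdp_cookie(mstshash)   # explicit cookie name
        # stick on rdp_cookie           # argument now optional, the first cookie is used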
2013-04-03 02:12:57 +02:00
Willy Tarreau
434c57c95c MINOR: log: indicate it when some unreliable sample fetches are logged
If a log-format involves some sample fetches that may not be available at
the moment of logging, we can now report a warning.

Note that this is done both for log-format and for add-header and carefully
respects the original fetch keyword's capabilities.
2013-04-03 02:12:56 +02:00
Willy Tarreau
80aca90ad2 MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
Sample fetches were relying on two flags SMP_CAP_REQ/SMP_CAP_RES to describe
whether they were compatible with request rules or with response rules. This
was never reliable because we need a finer granularity (eg: an HTTP request
method needs to parse an HTTP request, and is available past this point).

Some fetches are also dependent on the context (eg: "hdr" uses the request or
the response depending on where it's involved, causing some ambiguity).

In order to solve this, we need to precisely indicate in fetches what they
use, and their users will have to compare with what they have.

So now we have a bunch of bits indicating where the sample is fetched in the
processing chain, with a few variants indicating for some of them if it is
permanent or volatile (eg: an HTTP status is stored into the transaction so
it is permanent, despite being caught in the response contents).

The fetches also have a second mask indicating their validity domain. This one
is computed from a conversion table at registration time, so there is no need
for doing it by hand. This validity domain consists of a bitmask with one bit
set for each usage point in the processing chain. Some provisions were made
for upcoming controls such as connection-based TCP rules which apply on top of
the connection layer but before instantiating the session.

Then everywhere a fetch is used, the bit for the control point is checked in
the fetch's validity domain, and it becomes possible to finely ensure that a
fetch will work or not.

Note that we need these two separate bitfields because some fetches are usable
both in request and response (eg: "hdr", "payload"). So the keyword will have
a "use" field made of a combination of several SMP_USE_* values, which will be
converted into a wider list of SMP_VAL_* flags.

The knowledge of permanent vs dynamic information has disappeared for now, as
it was never used. Later we'll probably reintroduce it differently when
dealing with variables. Its only use at the moment could have been to avoid
caching a dynamic rate measurement, but nothing is cached as of now.
2013-04-03 02:12:56 +02:00
Simon Horman
7d09b9a4df MEDIUM: server: Break out set weight processing code
Break out set weight processing code.
This is in preparation for reusing the code.

Also, remove a duplicate check in nested if clauses.
(px->lbprm.algo & BE_LB_PROP_DYN) is checked by
the immediate outer if clause, so there is no need
to check it a second time.

Signed-off-by: Simon Horman <horms@verge.net.au>
2013-02-13 10:53:40 +01:00
Willy Tarreau
6cbbdbf3f3 BUG/MEDIUM: log: emit '-' for empty fields again
Commit 2b0108ad accidentally got rid of the ability to emit a "-" for
empty log fields. This can happen for captured request and response
cookies, as well as for fetches. Since we don't want to have this done
for headers however, we set the default log method when parsing the
format. It is still possible to force the desired mode using +M/-M.
2013-02-05 18:55:09 +01:00
Emeric Brun
22890a1225 MINOR: ssl: Setting global tune.ssl.cachesize value to 0 disables SSL session cache. 2012-12-28 14:48:13 +01:00
Willy Tarreau
4baae248fc REORG: config: move the http redirect rule parser to proto_http.c
We'll have to use this elsewhere soon, let's move it to the proper
place.
2012-12-28 14:47:19 +01:00
Willy Tarreau
71241abfd3 MINOR: http: move redirect rule processing to its own function
We now have http_apply_redirect_rule() which does all the redirect-specific
job instead of having this inside http_process_req_common().

Also, one of the benefits gained from unifying this code is that both
keep-alive and close responses now emit the PR-- flags. The fix for the
flags could probably be backported to 1.4 though it's very minor.

The previous function http_perform_redirect() was becoming confusing
so it was renamed http_perform_server_redirect() since it only applies
to server-based redirection.
2012-12-28 14:47:19 +01:00
Willy Tarreau
b83bc1e1c1 MINOR: log: make parse_logformat_string() take a const char *
Sometimes we can't pass a char *, and there is no need for this since we strdup() it.
2012-12-24 12:36:33 +01:00
Willy Tarreau
354898bba9 MINOR: stats: replace STAT_FMT_CSV with STAT_FMT_HTML
We need to switch the default mode if we want to add new output formats
later. Let CSV be the default and HTML be an option.
2012-12-23 21:46:30 +01:00
Willy Tarreau
47ca54505c MINOR: chunks: centralize the trash chunk allocation
At the moment, we need trash chunks almost everywhere and the only
correctly implemented one is in the sample code. Let's move this to
the chunks so that all other places can use this allocator.

Additionally, the get_trash_chunk() function now really returns two
different chunks. Previously it used to always overwrite the same
chunk and point it to a different buffer, which was a bit tricky
because it's not obvious that two consecutive results do alias each
other.
2012-12-23 21:46:07 +01:00
Willy Tarreau
d9bdcd5139 REORG: stats: massive code reorg and cleanup
The dumpstats code looks like a spaghetti plate. Several functions are
supposed to be able to do several things but rely on complex states to
dispatch the work to independent functions. Most of the HTML output is
performed within the switch/case statements of the whole state machine.

Let's clean this up by adding new functions to emit the data and have
a few more iterators to avoid relying on so complex states.

The new stats dump sequence looks like this for CLI and for HTTP :

  cli_io_handler()
      -> stats_dump_sess_to_buffer()      // "show sess"
      -> stats_dump_errors_to_buffer()    // "show errors"
      -> stats_dump_raw_info_to_buffer()  // "show info"
         -> stats_dump_raw_info()
      -> stats_dump_raw_stat_to_buffer()  // "show stat"
         -> stats_dump_csv_header()
         -> stats_dump_proxy()
            -> stats_dump_px_hdr()
            -> stats_dump_fe_stats()
            -> stats_dump_li_stats()
            -> stats_dump_sv_stats()
            -> stats_dump_be_stats()
            -> stats_dump_px_end()

  http_stats_io_handler()
      -> stats_http_redir()
      -> stats_dump_http()              // also emits the HTTP headers
         -> stats_dump_html_head()      // emits the HTML headers
         -> stats_dump_csv_header()     // emits the CSV headers (same as above)
         -> stats_dump_http_info()      // note: ignores non-HTML output
         -> stats_dump_proxy()          // same as above
         -> stats_dump_http_end()       // emits HTML trailer
2012-12-22 20:45:02 +01:00
Willy Tarreau
2b0108adf6 MINOR: log: add lf_text_len
This function makes it possible to log a text of a specific length.
2012-12-21 19:24:48 +01:00
Willy Tarreau
e7ad4bb2f0 MINOR: samples: add a function to fetch and convert any sample to a string
Any sample type can now easily be converted to a string that can be used
anywhere. This will be used for logging and passing information in headers.
2012-12-21 17:57:24 +01:00
Willy Tarreau
8a3f52fc2e MEDIUM: log-format: make the format parser more robust and more extensible
The log-format parser reached a limit making it hard to add new features.
It also suffers from weak handling of certain incorrect corner cases: for
example, "%{foo}" is emitted as a literal while syntactically it's an
argument attached to no variable. Also, the argument parser had to redo some
of the job, with some cases causing minor memory leaks (eg: ignored args).

This work aims at improving the situation so that slightly better reporting
is possible and that it becomes possible to extend the log format. The code
has a few more states but looks significantly simpler. The parser is now
capable of reporting ignored arguments and truncated lines.
2012-12-20 23:34:20 +01:00
Willy Tarreau
5fb3803f4b CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
A few places still made use of buffer_len()==0 to detect an empty
buffer. Use the cleaner and more efficient buffer_empty() instead.
2012-12-17 01:14:49 +01:00
Willy Tarreau
7d28149e92 BUG/MEDIUM: connection: always update connection flags prior to computing polling
stream_int_chk_rcv_conn() did not clear connection flags before updating them.
It is unclear whether this could have caused the stalled transfers that have
been reported since dev15.

In order to avoid such further issues, we now use a simple inline function to do
all the job.
2012-12-17 01:14:25 +01:00
Willy Tarreau
4a29144591 OPTIM: poll: optimize fd management functions for low register count CPUs
Looking at the assembly code that updt_fd() and alloc/release_spec_entry
produce in the polling loops, it's clear that gcc has to recompute pointers
several times in a row because of limited spare registers. By better
grouping adjacent structure updates, we improve the code size by around
60 bytes in the fast path on x86.
2012-12-13 23:34:18 +01:00
Willy Tarreau
20d46a5a95 CLEANUP: session: use an array for the stick counters
The stick counters were in two distinct sets of struct members,
causing some code to be duplicated. Now we use an array, which
enables some processing to be performed in loops. This allowed
the code to be shrunk by 700 bytes.
2012-12-09 15:57:16 +01:00
Willy Tarreau
5d5b5d8eaf MEDIUM: proto_tcp: add support for tracking L7 information
Until now it was only possible to use track-sc1/sc2 with "src" which
is the IPv4 source address. Now we can use track-sc1/sc2 with any fetch
as well as any transformation type. It works just like the "stick"
directive.

Samples are automatically converted to the correct types for the table.

Only "tcp-request content" rules may use L7 information, and such information
must already be present when the tracking is set up. For example it becomes
possible to track the IP address passed in the X-Forwarded-For header.
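A hedged sketch of that example (table sizing and names are placeholders) :

    frontend fe_http
        mode http
        # illustrative only: sizing and tracked data are made up
        stick-table type ip size 200k expire 30m store http_req_rate(10s)
        tcp-request inspect-delay 5s
        tcp-request content track-sc1 hdr(X-Forwarded-For) if HTTP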

HTTP request processing now also considers tracking from backend rules
because we want to be able to update the counters even when the request
was already parsed and tracked.

Some more controls need to be performed (eg: samples do not distinguish
between L4 and L6).
2012-12-09 14:08:47 +01:00
Willy Tarreau
55e4ecd928 MINOR: stats: add a few more information on session dump
We also report fd.spec_p, fd.updt and a few names instead of the values.
2012-12-08 17:48:47 +01:00
Emeric Brun
af9619da3e MEDIUM: ssl: manage shared cache by blocks for huge sessions.
Sessions using client certs are huge (more than 1 kB) and do not fit
in the session cache, or require a huge cache.

In this new implementation, the SSL cache size setting sets a number of
available blocks instead of a number of available sessions.

Each block is large enough (128 bytes) to store a simple session (without
client certs).

Huge sessions will take multiple blocks depending on client certificate size.

Note: some unused code for session sync with remote peers was temporarily
      removed.
2012-12-04 10:56:56 +01:00
Willy Tarreau
20879a0233 MEDIUM: connection: add error reporting for the SSL
Get a bit more info in the logs when client-side SSL handshakes fail.
2012-12-03 17:21:52 +01:00
Willy Tarreau
8e3bf699db MEDIUM: connection: add error reporting for the PROXY protocol header
When the PROXY protocol header is expected and fails, leading to an
abort of the incoming connection, we now emit a log message. If option
dontlognull is set and it was just a port probe, then nothing is logged.
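For context, a minimal sketch of a listener where this applies (names and
port are placeholders) :

    frontend fe_accept_proxy
        bind :8080 accept-proxy   # the PROXY protocol header is expected here
        option dontlognull        # silent port probes are not logged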
2012-12-03 17:21:51 +01:00
Willy Tarreau
0af2912fd1 MEDIUM: connection: add minimal error reporting in logs for incomplete connections
Since the introduction of SSL, it became quite annoying not to get any useful
info in logs about handshake failures. Let's improve reporting for embryonic
sessions by checking a per-connection error code and reporting it into the logs
if an error happens before the session is completely instantiated.

The "dontlognull" option is supported in that if a connection does not talk
before being aborted, nothing will be emitted.

At the moment, only timeouts are considered for SSL and the PROXY protocol,
but next patches will handle more errors.
2012-12-03 15:38:23 +01:00
Emeric Brun
786991e8b7 BUG/MEDIUM: ssl: Fix handshake failure on session resumption with client cert.
OpenSSL session_id_context was not set on cached sessions, so the handshake returned an error.
2012-11-26 18:43:21 +01:00
Willy Tarreau
36fb02c526 BUG/MEDIUM: connection: always disable polling upon error
Commit 0ffde2cc in 1.5-dev13 tried to always disable polling on file
descriptors when errors were encountered. Unfortunately it did not
always succeed in doing so because it relied on detecting polling
changes to disable it. Let's use a dedicated conn_stop_polling()
function that is unconditionally called upon error instead.

This managed to stop a busy loop observed when a health check makes
use of the send-proxy protocol and fails before the connection can
be established.
2012-11-24 11:09:07 +01:00
Willy Tarreau
f0837b259b MEDIUM: tcp: add explicit support for delayed ACK in connect()
Commit 24db47e0 tried to improve support for delayed ACK upon connect
but it was incomplete, because checks with the proxy protocol would
always enable polling for data receive and there was no way of
distinguishing data polling and delayed ack.

So we add a distinct delack flag to the connect() function so that
the caller decides whether or not to use a delayed ack regardless
of pending data (eg: when send-proxy is in use). Doing so covers all
combinations of { (check with data), (sendproxy), (smart-connect) }.
2012-11-24 10:24:27 +01:00
Willy Tarreau
2b199c9ac3 MEDIUM: connection: provide a common conn_full_close() function
Several places got the connection close sequence wrong because it
was not obvious. In practice we always need the same sequence when
aborting, so let's have a common function for this.
2012-11-23 17:32:21 +01:00
Willy Tarreau
88c6d81386 MINOR: http: add some debugging functions to pretty-print msg state names
The http_msg_state_str() function reports a string containing the name of
the state passed in argument. This helps while debugging.
2012-11-21 21:50:04 +01:00
William Lallemand
072a2bf537 MINOR: compression: CPU usage limit
New option 'maxcompcpuusage' in the global section.
Sets the maximum CPU usage HAProxy can reach before stopping the
compression for new requests or decreasing the compression level of
current requests. It works like 'maxcomprate' but is based on the
measured idle time.
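A minimal sketch (the threshold value is arbitrary) :

    global
        # stop or degrade compression when CPU usage exceeds 85%
        maxcompcpuusage 85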
2012-11-21 02:15:16 +01:00
William Lallemand
e3a7d99062 MINOR: compression: report zlib memory usage
Show the memory usage and the max memory available for zlib.
The value stored is now the memory used instead of the remaining
available memory.
2012-11-21 02:15:16 +01:00
William Lallemand
8b52bb3878 MEDIUM: compression: use pool for comp_ctx
Use a pool for comp_ctx; it is allocated during comp_algo->init().
The allocation of comp_ctx is accounted for in zlib_memory_available.
2012-11-21 01:56:47 +01:00
Willy Tarreau
bc174aa144 MINOR: cli: report connection status in "show sess xxx"
Connection flags, targets and transport layers are now reported in
"show sess $PTR", as it is an absolute requirement in debugging.
2012-11-19 16:22:22 +01:00
William Lallemand
bf3ae61789 MEDIUM: compression: don't compress when no data
This patch makes changes in the http_response_forward_body state
machine. It checks whether the compression algorithm has consumed data
before swapping the temporary and the input buffers, which prevents
zero-sized zlib chunks.
2012-11-19 14:57:29 +01:00
Willy Tarreau
3fdb366885 MAJOR: connection: replace struct target with a pointer to an enum
Instead of storing a couple of (int, ptr) in the struct connection
and the struct session, we use a different method : we only store a
pointer to an integer which is stored inside the target object and
which contains a unique type identifier. That way, the pointer allows
us to retrieve the object type (by dereferencing it) and the object's
address (by computing the displacement in the target structure). The
NULL pointer always corresponds to OBJ_TYPE_NONE.

This reduces the size of the connection and session structs. It also
simplifies target assignment and compare.

In order to improve the generated code, we try to put the obj_type
element at the beginning of all the structs (listener, server, proxy,
si_applet), so that the original and target pointers are always equal.

A lot of code was touched by massive replaces, but the changes are not
that important.
2012-11-12 00:42:33 +01:00
Willy Tarreau
128b03c9ab CLEANUP: stream_interface: remove the external task type target
Before connections were introduced, it was possible to connect an
external task to a stream interface. However it was left as an
exercise for the brave implementer to find how that ought to be
done.

The feature has been broken since the introduction of connections and
was never fixed since then due to the lack of users. Better remove this
dead code now.
2012-11-11 23:14:16 +01:00
Willy Tarreau
b31c971bef CLEANUP: channel: remove any reference of the hijackers
Hijackers were functions designed to inject data into channels in the
distant past. They became unused around 1.3.16, and since there has
not been any user of this mechanism to date, it's uncertain whether
the mechanism still works (and it's not really useful anymore). So
better remove it as well as the pointer it uses in the channel struct.
2012-11-11 23:05:39 +01:00
Willy Tarreau
70c6fd82c3 MAJOR: polling: remove unused callbacks from the poller struct
Since no poller uses poller->{set,clr,wai,is_set,rem} anymore, let's
remove them and remove the associated pointer tests in proto/fd.h.
2012-11-11 21:02:34 +01:00
Willy Tarreau
7f7ad91056 BUILD: stream_interface: remove si_fd() and its references
si_fd() is not used a lot, and breaks builds on OpenBSD 5.2 which
defines this name for its own purpose. It's easy enough to remove
this one-liner function, so let's do it.
2012-11-11 20:53:29 +01:00