WARNING: this is a MAJOR (and disruptive) change from HAProxy's previous
behavior: until now, HAProxy never changed a server's administrative
status when the DNS resolution failed at run time.
This patch gives HAProxy the ability to change the administrative status
of a server to MAINT (RMAINT actually) when an error is encountered for
a period longer than the one allowed by the corresponding 'hold'
parameter.
I.e. if the configuration sets "hold nx 10s" and a server's hostname
points to an NX domain for more than 10s, then the server will be set to
RMAINT, hence in MAINTENANCE mode.
This adds new "hold" timers : nx, refused, timeout, other. This timers
will be used to tell HAProxy to keep an erroneous response as valid for
the corresponding period. For now they're only configured, not enforced.
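For illustration, a minimal sketch of how such timers would be set in a
resolvers section (the section name, nameserver and values below are
arbitrary):

    resolvers mydns
        nameserver dns1 192.168.0.1:53
        hold nx      10s
        hold refused 10s
        hold timeout 10s
        hold other   10s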
To help debug some DNS resolution issues, it will be important to know
why a server was marked down, so let's make the function support
a 3rd argument with an indication of the reason. Passing NULL will keep
the message as-is.
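For illustration, assuming the srv_set_admin_flag() function mentioned
later in this series (the exact signature is assumed here), the extended
call could look like this:

    /* set RMAINT with an explanation for the logs */
    srv_set_admin_flag(srv, SRV_ADMF_RMAINT, "DNS NX status");

    /* NULL keeps the existing message untouched */
    srv_set_admin_flag(srv, SRV_ADMF_RMAINT, NULL);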
The server's state is now "MAINT (resolution)" just like we also have
"MAINT (via x/y)" when servers are tracked. The HTML stats page reports
"resolution" in the checks field similarly to what is done for the "via"
entry.
It's important to report in the server state change logs that RMAINT was
cleared, as it's not the regular maintenance mode, it's specific to name
resolution, and it's important to report the new state (which can be DRAIN
or READY).
This flag has to be set when an IP address resolution fails (either
using the libc at startup or using HAProxy's runtime resolver). This will
automatically trigger the administrative status "MAINT", through the
global mask SRV_ADMF_MAINT.
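An illustrative bit layout (the bit values below are assumed, only the
flag names appear in these patches):

    #define SRV_ADMF_FMAINT 0x01  /* forced from the CLI */
    #define SRV_ADMF_IMAINT 0x02  /* inherited from a tracked server */
    #define SRV_ADMF_CMAINT 0x04  /* disabled in the configuration */
    #define SRV_ADMF_RMAINT 0x08  /* failed name resolution */
    /* setting any of these bits makes the server seen as MAINT */
    #define SRV_ADMF_MAINT  (SRV_ADMF_FMAINT | SRV_ADMF_IMAINT | \
                             SRV_ADMF_CMAINT | SRV_ADMF_RMAINT)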
Server addresses are not resolved anymore upon the first pass so that we
don't fail if an address cannot be resolved by the libc. Instead they are
processed all at once after the configuration is fully loaded, by the new
function srv_init_addr(). This function only acts on the server's address
if this address uses an FQDN, which appears in server->hostname.
For now the function does two things, to follow HAProxy's historical
default behavior:
1. apply server IP address found in server-state file if runtime DNS
resolution is enabled for this server
2. use the DNS resolver provided by the libc
If none of the 2 options above can find an IP address, then an error is
returned.
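A self-contained sketch of this two-step logic (the structure and helper
names are illustrative, not HAProxy's exact internals):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>

    /* minimal server model for the sketch */
    struct srv_sketch {
        const char *hostname;               /* non-NULL when an FQDN is used */
        int use_resolver;                   /* runtime DNS resolution enabled */
        int state_addr_valid;               /* server-state file had an IP */
        struct sockaddr_storage state_addr; /* IP found in the state file */
        struct sockaddr_storage addr;       /* resulting address */
    };

    static int srv_init_addr_sketch(struct srv_sketch *srv)
    {
        if (!srv->hostname)
            return 0; /* numeric address: nothing to resolve */

        /* 1. apply the server-state file address if runtime DNS
         *    resolution is enabled for this server
         */
        if (srv->use_resolver && srv->state_addr_valid) {
            srv->addr = srv->state_addr;
            return 0;
        }

        /* 2. fall back to the libc resolver */
        struct addrinfo hints = { 0 }, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(srv->hostname, NULL, &hints, &res) == 0) {
            memcpy(&srv->addr, res->ai_addr, res->ai_addrlen);
            freeaddrinfo(res);
            return 0;
        }
        return -1; /* neither source provided an address */
    }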
All of this will be needed to support the new server parameter "init-addr".
For now, the biggest user-visible change is that all server resolution errors
are dumped at once instead of causing a startup failure one by one.
Currently, the function which applies server states provided by the
"old" process runs after the configuration sanity checks. This makes
it impossible to check the validity of the state file during a
regular config check, so a full start is required, which can sometimes
be a problem.
This patch moves the loading of server_state file before MODE_CHECK.
This will be needed to later postpone server address resolution. We need the
FQDN even when it doesn't resolve. The caller then needs to check if fqdn
was set when the resolve flag is null, to detect that the address couldn't
be parsed and needs later resolution.
Quite a lot of people have been complaining about option contstats not
working correctly anymore since about 1.4. The reason is that one of the
causes of the significant performance boost between 1.3 and 1.4 was the
ability to forward data between a server and a client without waking up
the stream manager. And we couldn't afford to force sessions to
constantly wake it up given that most of the people interested in
contstats are also those interested in high performance transmission.
An idea was experimented with in the past, consisting in limiting the
amount of transmissible data before waking it up, but it was not usable
on slow connections (eg: FTP over modem lines, RDP, SSH) as stats would
be updated too rarely if at all, so that idea was dropped.
During a discussion today another idea came up : ensure that stats are
updated once in a while, since it's the only thing that matters. It
happens that we have the request channel's analyse_exp timeout that is
used to wake the stream up after a configured delay, and that by
definition this timeout is not used when there's no more analyser
(otherwise the stream would wake up and the stats would be updated).
Thus here the idea is to reuse this timeout when there's no analyser
and set it to now+5 seconds so that a stream wakes up at least once
every 5 seconds to update its stats. It should be short enough to
provide smooth traffic graphs and to make the output of "show sess"
easier to debug without inflicting too much load even for very large
numbers of concurrent connections.
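A fragment-level sketch of the idea, using HAProxy's tick helpers (the
exact condition in the patch may differ):

    /* once no analyser remains and "option contstats" is set, re-arm
     * the otherwise unused analyse_exp so the stream wakes up at least
     * every 5 seconds to refresh its counters
     */
    if (!req->analysers && !tick_isset(req->analyse_exp))
        req->analyse_exp = tick_add(now_ms, 5000);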
This patch is simple enough and safe enough to be backportable to 1.6
if there is some demand.
In the last release a lot of the structures have become opaque for an
end user. This means the code using these needs to be changed to use the
proper functions to interact with these structures instead of trying to
manipulate them directly.
This does not yet address the deprecations introduced in 1.1.0; it only
ensures that the code can be compiled against that version while staying
compatible with older ones.
[wt: openssl-0.9.8 doesn't build with it, there are conflicts on certain
function prototypes which we declare as inline here and which are
defined differently there. But openssl-0.9.8 is not supported anymore
so probably it's OK to go without it for now and we'll see later if
some users still need it. Emeric has reviewed this change and didn't
spot anything obvious which requires special care. Let's try it for
real now]
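As an example of the kind of change involved, direct field accesses have
to be replaced with accessors that also exist in older supported
releases, e.g. for an EVP_PKEY:

    #include <openssl/evp.h>

    int get_key_type(EVP_PKEY *pkey)
    {
        /* before: return pkey->type;  -- no longer compiles with
         * 1.1.0 since the structure is opaque */
        return EVP_PKEY_base_id(pkey);
    }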
The only reason wurfl/wurfl.h was needed outside of wurfl.c was to expose
wurfl_handle which is a pointer to a structure, referenced by global.h.
By just storing a void* there instead, we can confine all wurfl code to
wurfl.c, which is really nice.
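A sketch of the resulting layout (field name illustrative):

    /* global.h: only an opaque pointer remains */
    struct global_wurfl_sketch {
        void *handle;   /* real type known only to wurfl.c */
    };

    /* wurfl.c casts it back to the library's handle type when needed,
     * e.g.: wurfl_handle wh = (wurfl_handle)global.wurfl.handle; */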
Both DeviceAtlas and 51Degrees used to put their building instructions
in the README, representing more than 1/3 of it. It's better to let the
README focus on generic stuff and the building procedure, and move the
device detection docs to their own files.
WURFL is a high-performance and low-memory footprint mobile device
detection software component that can quickly and accurately detect
over 500 capabilities of visiting devices. It can differentiate between
portable mobile devices, desktop devices, SmartTVs and any other types
of devices on which a web browser can be installed.
In order to add WURFL device detection support, you would need to
download the Scientiamobile InFuze C API and install it on your system.
Refer to www.scientiamobile.com to obtain a valid InFuze license.
Any useful information on how to configure HAProxy to work with WURFL
may be found in:
doc/WURFL-device-detection.txt
doc/configuration.txt
examples/wurfl-example.cfg
Please find more information about the WURFL device detection API
at https://docs.scientiamobile.com/documentation/infuze/infuze-c-api-user-guide
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the server's order, some
tracking servers may not be marked down. For example this configuration
does not work as expected :
server s1 1.1.1.1:8000 track s2
server s2 1.1.1.1:8000 track s3
server s3 1.1.1.1:8000 track s4
server s4 wtap:8000 check inter 1s disabled
It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.
The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).
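A minimal sketch of this propagation (structure and flag values are
illustrative, not HAProxy's actual ones):

    #define FLG_IMAINT 0x01 /* illustrative values */
    #define FLG_IDRAIN 0x02

    struct trk_sketch {
        struct trk_sketch *track;     /* tracked server, NULL for a root */
        struct trk_sketch *trackers;  /* first server tracking this one */
        struct trk_sketch *tracknext; /* next server tracking the same one */
        unsigned flags;
    };

    /* depth-first walk applying the inherited flags to the whole chain */
    static void propagate_sketch(struct trk_sketch *srv)
    {
        struct trk_sketch *t;

        for (t = srv->trackers; t; t = t->tracknext) {
            t->flags |= srv->flags & (FLG_IMAINT | FLG_IDRAIN);
            propagate_sketch(t);
        }
    }

    /* startup then calls propagate_sketch() on every server with
     * track == NULL ("root" servers)
     */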
This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down,
which is no longer the case with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.
This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.
This commit relies on the previous patch to silence warnings on startup.
We'll have to use srv_set_admin_flag() to propagate some server flags
during the startup, and we don't want the resulting actions to cause
warnings, logs nor e-mail alerts to be generated since we're just applying
the config or a state file. So let's condition these notifications to the
fact that we're starting.
CMAINT indicates that the server was *initially* disabled in the
configuration via the "disabled" keyword. FDRAIN indicates that the
server was switched to the DRAIN state from the CLI or the agent.
Thus it's perfectly valid to have both of them in the state file,
so the parser must not reject this combination.
This fix must be backported to 1.6.
There were several reports about the DRAIN state not being properly
restored upon reload.
It happens that the condition in the code does exactly the opposite
of what the comment says, and the comment is right so the code is
wrong.
It's worth noting that the conditions are complex here due to the 2
available methods to set the drain state (CLI/agent, and config's
weight). To paraphrase the updated comment in the code, there are
two possible reasons for FDRAIN to have been present :
- previous config weight was zero
- "set server b/s drain" was sent to the CLI
In the first case, we simply want to drop this drain state if the new
weight is not zero anymore, meaning the administrator has intentionally
turned the weight back to a positive value to enable the server again
after an operation. In the second case, the drain state was forced on
the CLI regardless of the config's weight so we don't want a change to
the config weight to lose this status. What this means is :
- if previous weight was 0 and new one is >0, drop the DRAIN state.
- if the previous weight was >0, keep it.
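Expressed as a sketch (flag name and value are illustrative):

    #define FLG_FDRAIN 0x08 /* illustrative */

    static unsigned restore_drain_sketch(unsigned saved_admin,
                                         int prev_weight, int new_weight)
    {
        unsigned admin = 0;

        if (saved_admin & FLG_FDRAIN) {
            if (prev_weight == 0 && new_weight > 0)
                ; /* zero config weight raised again: drop DRAIN */
            else
                admin |= FLG_FDRAIN; /* CLI-forced, or weight still
                                      * zero: keep it */
        }
        return admin;
    }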
This fix must be backported to 1.6.
http_find_header2() relies on find_hdr_value_end() to find the comma
delimiting a header field value, which also properly handles double
quotes and backslashes within quotes. In fact double quotes are very
rare, and commas happen once every multiple characters, especially
with cookies where a full block can be found at once. So it makes
sense to optimize this function to speed up the lookup of the first
block before the quote.
This change increases the performance from 212k to 217k req/s when
requests contain a 1kB cookie (+2.5%). We don't care about going
back into the fast parser after the first quote, as it may
needlessly make the parser more complex for very marginal gains.
Searching the trailing space in long URIs takes some time. This can
happen especially on static files and some blogs. By skipping valid
character ranges by 32-bit blocks, it's possible to increase the
HTTP performance from 212k to 216k req/s on requests featuring a
100-character URI, which is an increase of 2%. This is done for
architectures supporting unaligned accesses (x86_64, x86, armv7a).
There's only a 32-bit version because URIs are rarely long and very
often short, so it's more efficient to limit the systematic overhead
than to try to optimize for the rarest requests.
A performance test with 1kB cookies was capping at 194k req/s. After
implementing multi-byte skipping, the performance increased to 212k req/s,
or 9.2% faster. This patch implements this for architectures supporting
unaligned accesses (x86_64, x86, armv7a). Maybe other architectures can
benefit from this but they were not tested yet.
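Both this patch and the URI one above rely on the same well-known bit
trick; here is a self-contained sketch of the 32-bit test, shown for
detecting bytes below 0x21 as in the URI scan (the cookie parser targets
different characters, but the principle is identical; the trick is valid
because 0x21 <= 0x80):

    /* returns non-zero if any of the 4 bytes in <x> is strictly below
     * 0x21, i.e. a control character or a space */
    static inline unsigned int has_byte_below_0x21(unsigned int x)
    {
        return (x - 0x21212121U) & ~x & 0x80808080U;
    }

The scanning loop then skips four valid bytes per iteration and falls
back to byte-wise inspection once the mask is non-zero, which is why
unaligned 32-bit loads are required.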
We used to have 7 different character classes, each was 256 bytes long,
resulting in almost 2kB being used in the L1 cache. It's as cheap to
test a bit as to check that a byte is not null, so let's store a 7-bit
composite value and check for the respective bits there instead.
The executable is now 4 kB smaller and the performance on small
objects increased by about 1% to 222k requests/second with a config
involving 4 http-request rules including 1 header lookup, one header
replacement, and 2 variable assignments.
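A header-style sketch of the composite table approach (the bit values
below are illustrative):

    #define HTTP_FLG_CTL  0x01
    #define HTTP_FLG_SEP  0x02
    #define HTTP_FLG_LWS  0x04
    #define HTTP_FLG_SPHT 0x08
    #define HTTP_FLG_CRLF 0x10
    #define HTTP_FLG_TOK  0x20
    #define HTTP_FLG_VER  0x40

    /* one 256-byte table instead of seven */
    extern const unsigned char http_char_classes[256];

    /* testing a class becomes a single bit test */
    #define HTTP_IS_TOKEN(x) \
        (http_char_classes[(unsigned char)(x)] & HTTP_FLG_TOK)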
ipcpy() is used to replace an IP address with another one, but it
doesn't preserve the original port so all callers have to do it
manually while it's trivial to do there. Better do it inside the
function.
Often we need to call str2ip2() on an address which already contains a
port without replacing it, so let's ensure we preserve it even if the
family changes.
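A self-contained sketch of the port-preserving copy (names are
illustrative, not HAProxy's exact implementation):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void ipcpy_sketch(const struct sockaddr_storage *src,
                             struct sockaddr_storage *dst)
    {
        in_port_t port = 0;

        /* save the destination's current port */
        if (dst->ss_family == AF_INET)
            port = ((struct sockaddr_in *)dst)->sin_port;
        else if (dst->ss_family == AF_INET6)
            port = ((struct sockaddr_in6 *)dst)->sin6_port;

        /* copy the new address, possibly changing the family */
        memcpy(dst, src, sizeof(*dst));

        /* restore the original port on the new address */
        if (dst->ss_family == AF_INET)
            ((struct sockaddr_in *)dst)->sin_port = port;
        else if (dst->ss_family == AF_INET6)
            ((struct sockaddr_in6 *)dst)->sin6_port = port;
    }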
Gabriele Cerami reported that the exit codes of the systemd-wrapper are
wrong. In short, it directly returns the output of the wait syscall's
status, which is a composite value made of the error code and the signal
number. In general it contains the signal number on the lower bits and
the error code on the higher bits, but exit() truncates it to the lowest
8 bits, causing config validations to incorrectly report a success. Example :
$ ./haproxy-systemd-wrapper -c -f /dev/null
<7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
Configuration file has no error but will not start (no listener) => exit(2).
<5>haproxy-systemd-wrapper: exit, haproxy RC=512
$ echo $?
0
If the process is killed however, the signal number is directly reported
in the exit code.
Let's fix all this to ensure that the exit code matches what the shell does,
which means that codes 0..127 are for exit codes, codes 128..254 for signals,
and code 255 for unknown exit code. Now the return code is correct :
$ ./haproxy-systemd-wrapper -c -f /dev/null
<7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
Configuration file has no error but will not start (no listener) => exit(2).
<5>haproxy-systemd-wrapper: exit, haproxy RC=2
$ echo $?
2
$ ./haproxy-systemd-wrapper -f /tmp/cfg.conf
<7>haproxy-systemd-wrapper: executing /tmp/haproxy -f /dev/null -Ds
^C
<5>haproxy-systemd-wrapper: exit, haproxy RC=130
$ echo $?
130
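The translation itself boils down to the standard wait-status macros:

    #include <sys/wait.h>

    /* convert a wait() status to a shell-compatible exit code */
    static int status_to_exit_code(int status)
    {
        if (WIFEXITED(status))
            return WEXITSTATUS(status);    /* regular exit code */
        if (WIFSIGNALED(status))
            return 128 + WTERMSIG(status); /* e.g. SIGINT -> 130 */
        return 255;                        /* unknown status */
    }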
This fix must be backported to 1.6 and 1.5.
There's no reason to use the stream anymore, only the appctx should be
used by a peer. This was a leftover from the migration to appctx and it
caused some confusion, so let's totally drop it now. Note that half of
the patch is just comment updates.
It was inherited from initial code but we must only manipulate the appctx
and never the stream, otherwise we always risk shooting ourselves in the
foot.
In case of resource allocation error, peer_session_create() frees
everything allocated and returns a pointer to the stream/session that
was put back into the free pool. This stream/session is then assigned
to ps->{stream,session} with no error control. This means that it is
perfectly possible to have a new stream or session being both used for
a regular communication and for a peer at the same time.
In fact it is the only way (for now) to explain a CLOSE_WAIT on peers
connections that was caught in this dump with the stream interface in
SI_ST_CON state while the error field proves the state ought to have
been SI_ST_DIS, very likely indicating two concurrent accesses on the
same area :
0x7dbd50: [31/Oct/2016:17:53:41.267510] id=0 proto=tcpv4
flags=0x23006, conn_retries=0, srv_conn=(nil), pend_pos=(nil)
frontend=myhost2 (id=4294967295 mode=tcp), listener=? (id=0)
backend=<NONE> (id=-1 mode=-) addr=127.0.0.1:41432
server=<NONE> (id=-1) addr=127.0.0.1:8521
task=0x7dbcd8 (state=0x08 nice=0 calls=2 exp=<NEVER> age=1m5s)
si[0]=0x7dbf48 (state=CLO flags=0x4040 endp0=APPCTX:0x7d99c8 exp=<NEVER>, et=0x000)
si[1]=0x7dbf68 (state=CON flags=0x50 endp1=CONN:0x7dc0b8 exp=<NEVER>, et=0x020)
app0=0x7d99c8 st0=11 st1=0 st2=0 applet=<PEER>
co1=0x7dc0b8 ctrl=tcpv4 xprt=RAW data=STRM target=PROXY:0x7fe62028a010
flags=0x0020b310 fd=7 fd.state=22 fd.cache=0 updt=0
req=0x7dbd60 (f=0x80a020 an=0x0 pipe=0 tofwd=0 total=0)
an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
buf=0x78a3c0 data=0x78a3d4 o=0 p=0 req.next=0 i=0 size=0
res=0x7dbda0 (f=0x80402020 an=0x0 pipe=0 tofwd=0 total=0)
an_exp=<NEVER> rex=<NEVER> wex=<NEVER>
buf=0x78a3c0 data=0x78a3d4 o=0 p=0 rsp.next=0 i=0 size=0
Special thanks to Arnaud Gavara who provided lots of valuable input and
ran some validation testing on this patch.
This fix must be backported to 1.6 and 1.5. Note that in 1.5 the
session is not assigned from within the function so some extra checks
may be needed in the callers.
This part was missed when peers were ported to the new applet
infrastructure in 1.6, the main stream is woken up instead of the
appctx. This creates a race condition by which it is possible to
wake the stream at the wrong moment and miss an event. This bug
might be at least partially responsible for some of the CLOSE_WAIT
that were reported on peers sessions upon reload in version 1.6.
This fix must be backported to 1.6.
Released version 1.7-dev5 with the following main changes :
- MINOR: cfgparse: few memory leaks fixes.
- MEDIUM: log: Decompose %Tq in %Th %Ti %TR
- CLEANUP: logs: remove unused log format field definitions
- BUILD/MAJOR: updated 51d Trie implementation to incorporate latest update to 51Degrees.c
- BUG/MAJOR: stream: properly mark the server address as unset on connect retry
- CLEANUP: proto_http: Removing useless variable assignation
- CLEANUP: dumpstats: Removing useless variables allocation
- CLEANUP: dns: Removing useless variable & assignation
- BUG/MINOR: payload: fix SSLv2 version parser
- MINOR: cli: allow the semi-colon to be escaped on the CLI
- MINOR: cli: change a server health check port through the stats socket
- BUG/MINOR: Fix OSX compilation errors
- MAJOR: check: find out which port to use for health check at run time
- MINOR: server: introduction of 3 new server flags
- MINOR: new update_server_addr_port() function to change both server's ADDR and service PORT
- MINOR: cli: ability to change a server's port
- CLEANUP/MINOR dns: comment do not follow up code update
- MINOR: chunk: new strncat function
- MINOR: dns: wrong DNS_MAX_UDP_MESSAGE value
- MINOR: dns: new MAX values
- MINOR: dns: new macro to compute DNS header size
- MINOR: dns: new DNS structures to store received packets
- MEDIUM: dns: new DNS response parser
- MINOR: dns: query type change when last record is a CNAME
- MINOR: dns: proper domain name validation when receiving DNS response
- MINOR: dns: comments in types/dns.h about structures endianness
- BUG/MINOR: displayed PCRE version is running release
- MINOR: show Built with PCRE version
- MINOR: show Running on zlib version
- MEDIUM: make SO_REUSEPORT configurable
- MINOR: enable IP_BIND_ADDRESS_NO_PORT on backend connections
- BUG/MEDIUM: http/compression: Fix how chunked data are copied during the HTTP body parsing
- BUG/MINOR: stats: report the correct conn_time in backend's html output
- BUG/MEDIUM: dns: don't randomly crash on out-of-memory
- MINOR: Add fe_req_rate sample fetch
- MEDIUM: peers: Fix a peer stick-tables synchronization issue.
- MEDIUM: cli: register CLI keywords with cli_register_kw()
- BUILD: Make use of accept4() on OpenBSD.
- MINOR: tcp: make set-src/set-src-port and set-dst/set-dst-port commutative
- DOC: fix missed entry for "set-{src,dst}{,-port}"
- BUG/MINOR: vars: use sess and not s->sess in action_store()
- BUG/MINOR: vars: make smp_fetch_var() more robust against misuses
- BUG/MINOR: vars: smp_fetch_var() doesn't depend on HTTP but on the session
- MINOR: stats: output dcon
- CLEANUP: tcp rules: mention everywhere that tcp-conn rules are L4
- MINOR: counters: add new fields for denied_sess
- MEDIUM: tcp: add registration and processing of TCP L5 rules
- MINOR: stats: emit dses
- DOC: document tcp-request session
- MINOR: ssl: add debug traces
- BUILD/CLEANUP: ssl: Check BIO_reset() return code
- BUG/MINOR: ssl: Check malloc return code
- BUG/MINOR: ssl: prevent multiple entries for the same certificate
- BUG/MINOR: systemd: make the wrapper return a non-null status code on error
- BUG/MINOR: systemd: always restore signals before execve()
- BUG/MINOR: systemd: check return value of calloc()
- MINOR: systemd: report it when execve() fails
- BUG/MEDIUM: systemd: let the wrapper know that haproxy has completed or failed
- MINOR: proxy: add 'served' field to proxy, equal to total of all servers'
- MINOR: backend: add hash-balance-factor option for hash-type consistent
- MINOR: server: compute a "cumulative weight" to allow chash balancing to hit its target
- MEDIUM: server: Implement bounded-load hash algorithm
- SCRIPTS: make git-show-backports also dump a "git show" command
- MINOR: build: Allow linking to device-atlas library file
- MINOR: stats: Escape equals sign on socket dump
Greetings,
Was recently working with a stick table storing URLs and one had an
equals sign in it (e.g. 127.0.0.1/f=ab) which made it difficult to
easily split the key and value without a regex.
This patch will change it so that the key looks like
"key=127.0.0.1/f\=ab" instead of "key=127.0.0.1/f=ab".
Not very important given that there are ways to work around it.
Thanks,
- Chad
This is very convenient for backport reviews as in a single
command you get all the patches one at a time with their
changelog and backport instructions.
The consistent hash lookup is done as normal, then if balancing is
enabled, we progress through the hash ring until we find a server that
doesn't have "too much" load. In the case of equal weights for all
servers, the allowed number of requests for a server is either the
floor or the ceil of (num_requests * hash-balance-factor / num_servers);
with unequal weights things are somewhat more complicated, but the
spirit is the same -- a server should not be able to go too far above
(its relative weight times) the average load. Using the hash ring to
make the second/third/etc. choice maintains as much locality as
possible given the load limit.
Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
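A sketch of the eligibility test described above for the equal-weights
case (helper and parameter names are illustrative):

    #include <math.h>

    /* a server may take one more request only if that keeps it at or
     * below ceil(total * factor / (100 * nbsrv)); factor=150 means at
     * most 1.5 times the average load */
    static int srv_has_room_sketch(int srv_served, int total_served,
                                   int nbsrv, int factor)
    {
        double cap = ceil((double)(total_served + 1) * factor
                          / (100.0 * nbsrv));
        return srv_served + 1 <= cap;
    }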
For active servers, this is the sum of the eweights of all active
servers before this one in the backend, and
[srv->cumulative_weight .. srv->cumulative_weight + srv->eweight) is a
space occupied by this server in the range [0 .. lbprm.tot_wact), and
likewise for backup servers with tot_wbck. This allows choosing a
server or a range of servers proportional to their weight, by simple
integer comparison.
Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
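A sketch of the lookup these cumulative weights enable (minimal model,
not the actual tree-based implementation):

    struct wsrv_sketch {
        unsigned cumulative_weight; /* sum of preceding eweights */
        unsigned eweight;           /* this server's effective weight */
    };

    /* pick the server whose range contains <point>, drawn uniformly
     * from [0 .. tot_weight); unsigned wrap-around handles both bounds
     * in a single comparison */
    static int pick_by_weight_sketch(const struct wsrv_sketch *srv,
                                     int nbsrv, unsigned point)
    {
        int i;

        for (i = 0; i < nbsrv; i++)
            if (point - srv[i].cumulative_weight < srv[i].eweight)
                return i;
        return -1; /* point out of range */
    }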
0 will mean no balancing occurs; otherwise it represents the ratio
between the highest-loaded server and the average load, times 100 (i.e.
a value of 150 means a 1.5x ratio), assuming equal weights.
Signed-off-by: Andrew Rodland <andrewr@vimeo.com>
Pierre Cheynier found that there's a persistent issue with the systemd
wrapper. Too fast reloads can lead to certain old processes not being
signaled at all and continuing to run. The problem was tracked down as
a race between the startup and the signal processing : nothing prevents
the wrapper from starting new processes while others are still starting,
and the resulting pid file will only contain the latest pids in this
case. This can happen with large configs and/or when a lot of SSL
certificates are involved.
In order to solve this we want the wrapper to wait for the new processes
to complete their startup. But we also want to ensure it doesn't wait
in vain in case of error.
The solution found here is to create a pipe between the wrapper and the
sub-processes. The wrapper waits on the pipe and the sub-processes are
expected to close this pipe once they completed their startup. That way
we don't queue up new processes until the previous ones have registered
their pids to the pid file. And if anything goes wrong, the wrapper is
immediately released. The only thing is that we need the sub-processes
to know the pipe's file descriptor. We pass it in an environment variable
called HAPROXY_WRAPPER_FD.
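A sketch of the mechanism on the wrapper side (error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void wait_for_children_startup(void)
    {
        int fds[2];
        char fdstr[16];
        char buf;

        pipe(fds);

        /* expose the write side to the sub-processes */
        snprintf(fdstr, sizeof(fdstr), "%d", fds[1]);
        setenv("HAPROXY_WRAPPER_FD", fdstr, 1);

        /* ... fork()/execve() the haproxy processes here ... */

        close(fds[1]);          /* keep only the read side */
        read(fds[0], &buf, 1);  /* returns 0 (EOF) once all children
                                 * have closed HAPROXY_WRAPPER_FD */
        close(fds[0]);
    }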
It was confirmed both by Pierre and myself that this completely solves
the "zombie" process issue so that only the new processes continue to
listen on the sockets.
It seems that in the future this stuff could be moved to the haproxy
master process, also getting rid of an environment variable.
This fix needs to be backported to 1.6 and 1.5.
It's important to know that a signal sent to the wrapper had no effect
because something failed during execve(). Ideally more info (strerror)
should be reported. It would be nice to backport this to 1.6 and 1.5.
The wrapper is not the most reliable thing in the universe, so start
by adding at least the minimum expected controls :-/
To be backported to 1.5 and 1.6.
Since signals are inherited, we must restore them before calling execve()
and intercept them again after a failed execve(). In order to cleanly deal
with the SIGUSR2/SIGHUP loops where we re-exec the wrapper, we ignore these
two signals during a re-exec, and restore them to defaults when spawning
haproxy.
This should be backported to 1.6 and 1.5.
When execv() fails to execute the haproxy executable, it's important to
return an error instead of pretending everything is cool. This fix should
be backported to 1.6 and 1.5 in order to improve the overall reliability
under systemd.
Today, the certificates are indexed in the SNI tree using their CN and
the list of their AltNames, so some certificates have the same name in
the CN and in one of the AltNames entries.
Typically, Let's Encrypt duplicates the DNS name in the CN and the
AltName.
This patch prevents the creation of identical entries in the trees: an
entry is skipped when both the DNS name and the SSL context are the same.
If the same certificate is registered two times it will still be
duplicated.
This patch should be backported to the 1.6 and 1.5 versions.