The introduction of a new PROXY protocol for proxied connections requires
an early analyser to decode the incoming connection and set the session
flags accordingly.
Some more work is still needed, including setting a flag on the session
to indicate it's proxied, and copying the original parameters for later
comparison with new ACLs (eg: real_src, ...).
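For reference, what has to be decoded is a single human-readable line,
terminated by CRLF, sent ahead of the client's data, for instance
(addresses and ports are purely illustrative):

    PROXY TCP4 198.51.100.1 203.0.113.7 56324 443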
inetaddr_host_lim_ret() used to take const char** for some args, but
that made it impossible to use char** due to the way type checks are
performed by gcc. So let's change that.
The timer parser was looking for the next slash after the last timer,
which can be very far away. Those 4 occurrences have been fixed to match
the way it's done in URL sorting, which is faster. The average speed
gain is 5-6% on -srv and -pct.
(cherry picked from commit 3555671c93695f48c02ef05c8bb228523f17ca20)
Using -u{,c,e,t,a,to,ao} it is possible to get per-URL statistics,
sorted respectively by URL, request count, error count, total time,
avg time, total time on OK requests, and avg time on OK requests.
Since it has to parse URLs and store a number of fields, it's noticeably
slower than the other methods, but still fine for production usage
(typically 800000 lines or 270 MB per second on a 2 GHz system).
Results are sorted in reverse order so that it's easy to catch them by piping
the output to the "head" command.
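For example, assuming a standard haproxy log as input, the top URLs by
total time on OK requests could be obtained with something like:

    halog -uto < haproxy.log | head -20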
(cherry picked from commit 15ce7f56d15f839ce824279b84ffe14c58e41fda)
(update to ebtree 6.0.4)
Recent fix fd301cc1370cd4977fe175dfa4544c7dc0e7ce6b was not OK because it
was returning one excess byte, causing some duplicates not to be detected.
The reason is that we added 8 bits to count the trailing zero, but
those bits were already implied by the pre-incrementation of the
pointer.
Fixing this was still not enough, as the problem appeared when
string_equal_bits() was applied on two identical strings, and it returned
a number of bits covering the trailing zero. Subsequent calls were applied
to the first byte after this trailing zero. That byte was often zero
when inserting from raw files, which explains why the issue was not
discovered earlier. But when the data comes from a reused area,
duplicate strings were not correctly detected when inserting into the
tree.
Several solutions were tested, and the only efficient one consists in making
string_equal_bits() notify the caller that the end of the string was reached.
It now returns zero and the callers just have to ensure that when they get a
zero, they stop using that bit until a dup tree or a leaf is encountered.
This fix brought the unexpected bonus of simplifying the insertion code a bit
and making it slightly faster to process duplicates.
The impact for haproxy was that if many similar string patterns were loaded
from a file, there was a potential risk that their insertion or matching could
have been slower. The bigger impact was with the URL sorting feature of halog,
which is not yet merged and is how this bug was discovered.
(cherry picked from commit 518d59ec9ba43705f930f9ece3749c450fd005df)
This patch adds a new haproxy_socket.xml template and updates the
haproxy_backend.xml and haproxy_frontend.xml templates.
(cherry picked from commit 67cd1d55b5513e4186f021a7014e9442fd7a710f)
This patch adds support for Sockets and several
new variables available in the 1.4 branch.
(cherry picked from commit d049c84fdc9e35472a3db87e45069afd92bee01d)
This option makes haproxy preserve any persistence cookie emitted by
the server, which allows the server to change it or to unset it, for
instance, after a logout request.
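A minimal configuration sketch, assuming the option is the "preserve"
keyword on the "cookie" line (server addresses are illustrative):

    backend app
        cookie SRVID insert indirect preserve
        server s1 192.168.0.11:80 cookie s1
        server s2 192.168.0.12:80 cookie s2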
(cherry picked from commit 52e6d75374c7900c1fe691c5633b4ae029cae8d5)
When a backend defined a new cookie, it forgot to unset any params
that could have been set in a defaults section, resulting in configs
that would sometimes refuse to load or not work as expected.
(cherry picked from commit f80bf174ed905a29a3ed8ee91fcd528da6df174f)
This match returns true when the request being evaluated is the first
one of its connection.
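A possible use, assuming the criterion is exposed as an ACL named
"http_first_req" (the name here is an assumption, not confirmed by this
message):

    acl first_req http_first_req
    use_backend bk_first if first_req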
(cherry picked from commit 922ca979c50653c415852531f36fe409190ad76b)
Pattern fetches relying on destination address must first fetch
the address if it has not been done yet.
(cherry picked from commit 21abf441feb318b2ccd7df590fd89e9e824627f6)
Hi,
I've attached the templates I've built for monitoring backends and
frontends of haproxy.
To install these, you will need to copy the XML files from the contrib/
directory of the haproxy distribution into a directory that Cacti can
reach, and edit the Data Queries "HaProxy Backends" and "HAProxy
Frontends" accordingly (the "XML Path" field). It also depends on
having a version of net-snmp that supports embedded Perl, and on
including the "perl do 'path_to_haproxy.pl';" directive in your
snmpd.conf file.
As for what is created:
- For the devices, there are two new data queries to choose from. They
can be added from the Devices page for each device: pick one at the very
end in the drop-down box, then click "Add". The data queries are called
"HaProxy Backends" and "HAProxy Frontends".
- From "HaProxy Backends": in the new graphs page, you can choose which
backend to graph, and create one of two graphs:
- Haproxy backend traffic: ingress and egress bytes.
- Haproxy backend sessions: total sessions with _response_ errors.
- From "HAProxy Frontends": in the new graphs page again, you can choose
which frontend to graph, which will include aggregated data for the
backends behind it, obviously. You can create one of two graphs:
- Haproxy frontend traffic: ingress and egress bytes.
- Haproxy frontend sessions: total sessions with _request_ errors.
In the graphs and data sources, limits are set to reasonably high values
to support up to nearly 10G traffic, and up to 10000 concurrent
connections.
/ Matt
(cherry picked from commit f63090f2e85cdb7448071cdceb2eb5fabd2b9320)
The MySQL check has been revamped to be able to send real MySQL data,
and to avoid Aborted connects on MySQL side.
It remains backward compatible with older versions, but it is highly
recommended to use the new mode by adding "user <username>" on the
"mysql-check" line.
The new check consists of sending two MySQL packets: one Client
Authentication packet with the "haproxy" username (by default), and one
QUIT packet to correctly close the MySQL session. We then parse the
MySQL Handshake Initialisation packet and/or Error packet. It is a basic
but useful test which does not produce errors nor aborted connects on
the server.
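A configuration sketch; the check user ("haproxy" here) has to exist on
the MySQL server, typically created without a password and only allowed
to connect from the load balancer's address:

    backend mysql
        option mysql-check user haproxy
        server db1 192.168.0.21:3306 check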
(cherry picked from commit a1e4dcfe5718311b7653d7dabfad65c005d0439b)
Health checks were all pure ASCII, but we're going to have to support some
binary checks (eg: SQL). When they're inherited from the default section,
they will be truncated to the first \0 due to strdup(). Let's fix that with
a simple malloc.
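As a rough sketch of the idea (the helper name below is illustrative,
not the exact code):

    #include <stdlib.h>
    #include <string.h>

    /* copy a possibly-binary check request of known length; strdup()
     * would stop at the first \0 and lose everything after it */
    static char *dup_check_req(const char *src, int len)
    {
        char *dst = malloc(len);

        if (dst)
            memcpy(dst, src, len);
        return dst;
    }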
(cherry picked from commit 98fc04a766bcff80f57db2b1cd865c91761b131b)
Keywords were changed just before the commit but not in the help message.
Spotted by Hank A. Paulson.
(cherry picked from commit fdd46a0766dccec704aa1bd5acb0ac99a801c549)
When we're enabling a server again (unix CLI or stats interface), we must not mark
it completely up because it can take a while before a failure is detected. So we
mark it one step above failure, which means it's up but will be marked down upon
first failure.
(cherry picked from commit 83c3e06452457ed5660fc814cbda5bf878bf19a2)
The stats web interface must be read-only by default to prevent security
holes. As it is now allowed to enable/disable servers, a new keyword
"stats admin" is introduced to activate this admin level, conditioned by ACLs.
(cherry picked from commit 5334bab92ca7debe36df69983c19c21b6dc63f78)
Based on a patch provided by Judd Montgomery, it is now possible to
enable/disable servers from the stats web interface. This allows
selecting several servers in a backend and applying the action to them
at the same time.
Currently, there are 2 known limitations:
- The POST data are limited to one packet
(don't alter too many servers at a time).
- Expect: 100-continue is not supported.
(cherry picked from commit 7693948766cb5647ac03b48e782cfee2b1f14491)
In a down backend, when a zero-weight server is lost, a new
"backend down" message was emitted and the backend's down transition
counter was wrongly incremented. This change ensures that we don't
count that transition again.
This patch should be backported to 1.3.
(cherry picked from commit 60efc5f745b5fa70d811f977727592e47e32a281)
If a maxidle or maxlife parameter is set on the persistence cookie in
insert mode and the client did not provide a recent enough cookie,
then we emit a new cookie with a new last_seen date and the same
first_seen (if maxlife is set). Recent enough here designates a
cookie that would be rounded to the same date. That way, we can
refresh a cookie when required without doing it in all responses.
If the request did not contain such parameters, they are set anyway.
This means that a monitoring request that is forced to a server will
get an expiration date anyway, but this should not be a problem given
that the client is able to set its cookie in this case. This also
makes it possible to force an expiration date on visitors who
previously did not have one.
If a request comes with a dated cookie while no date check is performed,
then a new cookie is emitted with no date, so that we don't risk dropping
the user too fast due to a very old date when we re-enable the date check.
All requests that were targeting the correct server and which had their
expiration date added/updated/removed in the response cookie are logged
with the 'U' ("updated") flag instead of the 'I' ("inserted"). So very
often we'll see "VU" instead of "VN".
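A configuration sketch combining maxidle and maxlife (the values and
server address are only examples):

    backend app
        cookie SRVID insert indirect nocache maxidle 30m maxlife 8h
        server s1 192.168.0.11:80 cookie s1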
(cherry picked from commit 8b3c6ecab6d37be5f3655bc3a2d2c0f9f37325eb)
If a cookie comes in with a first or last date, and they are configured on
the backend, they're checked. If a date is expired or too far in the future,
then the cookie is ignored and the specific reason appears in the cookie
field of the logs.
(cherry picked from commit faa3019107eabe6b3ab76ffec9754f2f31aa24c6)
These functions only require 5 chars to encode 30 bits, and don't expect
any padding. They will be used to encode dates in cookies.
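As an illustration of the principle (5 chars of 6 bits each = 30 bits),
here is a hedged sketch; the function name and conventions below are
illustrative, not the real implementation:

    /* pack a 30-bit value into 5 base64 characters, most significant
     * bits first, without any padding; <out> must hold 6 bytes */
    static const char b64chars[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    char *u30tob64(unsigned int v, char *out)
    {
        int i;

        for (i = 4; i >= 0; i--) {
            out[i] = b64chars[v & 0x3F];   /* 6 bits per character */
            v >>= 6;
        }
        out[5] = '\0';
        return out;
    }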
(cherry picked from commit a7e2b5fc4612994c7b13bcb103a4a2c3ecd6438a)
The set-cookie status flags were not very handy and limited. Reorder
them to save some room for additional values and add the "U" flag
(for Updated expiration date) that will be used with expirable cookies
in insert mode.
(cherry picked from commit 5bab52f821bb0fa99fc48ad1b400769e66196ece)
We'll need one more bit to store and report the request cookie's status.
Doing this required moving a few bits around. However, now in 1.4 all
bits are used and there's no room left.
Cookie flags will need
(cherry picked from commit 09ebca0413c43620ddc375b5b4ab31a25d47b3f4)
In all cookie persistence modes but prefix, we now support cookies whose
value is suffixed with some contents after a vertical bar ('|'). This will
be used to pass an optional expiration date. So as of now we only
consider the part of the cookie value which appears before the vertical
bar.
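For example (the suffix shown is purely illustrative), a request cookie
such as:

    Cookie: SRVID=s1|VJ5mAA

is matched against the server cookies using only the "s1" part;
everything after the '|' is left aside for the upcoming date handling.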
(cherry picked from commit a4486bf4e5b03b5a980d03fef799f6407b2c992d)
Add two new arguments to the "cookie" keyword, to be able to set a
max idle and a max life on them. Right now only the parameter parsing
is implemented.
(cherry picked from commit 9ad5dec4c3bb8f29129f292cb22d3fc495fcc98a)
- fix some erroneous keywords for http-request
- stats http-request was not documented
--
Cyril Bonté
(cherry picked from commit 17e07dad5074571c38cbeb92d54a20f83658449d)
HTTP content-based health checks will be involved in searching text in pages.
Some pages may not fit in the default buffer (16kB) and sometimes it might be
desired to have larger buffers in order to find patterns. Running checks on
smaller URIs is always preferred of course.
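Assuming the related tunable is the global "tune.chksize" parameter
(an assumption here, not stated in this message), a sketch raising the
check buffer to 64kB would look like:

    global
        tune.chksize 65536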
(cherry picked from commit 043f44aeb835f3d0b57626c4276581a73600b6b1)
This patch adds the "http-check expect [r]{string,status}" statements
which enable health checks based on whether the response status or body
to an HTTP request contains a string or matches a regex.
This probably is one of the oldest patches that remained unmerged. Over
time, several people have contributed to it, among which FinalBSD
(first and second implementations), Nick Chalk (port to 1.4), Anze
Skerlavaj (tests and fixes), Cyril Bonté (general fixes), and of course
myself for the final fixes and doc during integration.
Some people already use an old version of this patch which has several
issues, among which the inability to search for a plain string that is
not at the beginning of the data, and the inability to look for
response contents that arrive in second and subsequent recv() calls.
But since some configs are already deployed, it was quite important to
ensure 100% compatible behaviour on the working cases.
Thus, that patch fixes the issues while maintaining config compatibility
with already deployed versions.
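A couple of configuration sketches using these statements (URIs and
patterns are only examples):

    backend app
        option httpchk GET /health
        http-check expect status 200
        server s1 192.168.0.11:80 check

    backend search
        option httpchk GET /index.html
        http-check expect rstring <title>[^<]*</title>
        server s2 192.168.0.12:80 check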
(cherry picked from commit b507c43a3ce9a8e8e4b770e52e4edc20cba4c37f)
This patch provides a new "option ldap-check" statement to enable
server health checks based on LDAPv3 bind requests.
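Configuration sketch (the server address is illustrative):

    backend ldap
        option ldap-check
        server ldap1 192.168.0.31:389 check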
(cherry picked from commit b76b44c6fed8a7ba6f0f565dd72a9cb77aaeca7c)