                        ------------------------
                        HAProxy Management Guide
                        ------------------------
                              version 2.8

This document describes how to start, stop, manage, and troubleshoot HAProxy,
as well as some known limitations and traps to avoid. It does not describe how
to configure it (for this please read configuration.txt).

Note to documentation contributors :
    This document is formatted with 80 columns per line, with an even number
    of spaces for indentation and without tabs. Please follow these rules
    strictly so that it remains easily printable everywhere. If you add
    sections, please update the summary below for easier searching.

Summary
-------

1. Prerequisites
2. Quick reminder about HAProxy's architecture
3. Starting HAProxy
4. Stopping and restarting HAProxy
5. File-descriptor limitations
6. Memory management
7. CPU usage
8. Logging
9. Statistics and monitoring
9.1. CSV format
9.2. Typed output format
9.3. Unix Socket commands
9.4. Master CLI
9.4.1. Master CLI commands
10. Tricks for easier configuration management
11. Well-known traps to avoid
12. Debugging and performance issues
13. Security considerations

1. Prerequisites
----------------

In this document it is assumed that the reader has sufficient administration
skills on a UNIX-like operating system, uses the shell on a daily basis and is
familiar with troubleshooting utilities such as strace and tcpdump.

2. Quick reminder about HAProxy's architecture
----------------------------------------------

HAProxy is a multi-threaded, event-driven, non-blocking daemon. This means it
uses event multiplexing to schedule all of its activities instead of relying
on the system to schedule between multiple activities. Most of the time it
runs as a single process, so the output of "ps aux" on a system will report
only one "haproxy" process, unless a soft reload is in progress and an older
process is finishing its job in parallel to the new one. It is thus always
easy to trace its activity using the strace utility. In order to scale with
the number of available processors, by default haproxy will start one worker
thread per processor it is allowed to run on. Unless explicitly configured
differently, the incoming traffic is spread over all these threads, all
running the same event loop. Great care is taken to limit inter-thread
dependencies to the strict minimum, so as to achieve near-linear scalability.
One consequence is that a given connection is always served by a single
thread. Thus, in order to use all available processing capacity, it is
necessary to have at least as many connections as there are threads, which is
almost always granted.

HAProxy is designed to isolate itself into a chroot jail during startup, where
it cannot perform any file-system access at all. This is also true for the
libraries it depends on (eg: libc, libssl, etc). The immediate effect is that
a running process will not be able to reload a configuration file to apply
changes; instead, a new process will be started using the updated
configuration file. Some other less obvious effects are that some timezone
files or resolver files the libc might attempt to access at run time will not
be found, though this should generally not happen as they're not needed after
startup. A nice consequence of this principle is that the HAProxy process is
totally stateless: no cleanup is needed after it's killed, so any killing
method that works will do the right thing.

HAProxy doesn't write log files, but it relies on the standard syslog protocol
to send logs to a remote server (which is often located on the same system).

HAProxy uses an internal clock to enforce timeouts; it is derived from the
system's time, but with unexpected drift corrected. This is done by limiting
the time spent waiting in poll() for an event, and measuring the time it
really took. In practice it never waits more than one second. This explains
why, when running strace over a completely idle process, periodic calls to
poll() (or any of its variants) surrounded by two gettimeofday() calls are
noticed. They are normal, completely harmless and so cheap that the load they
imply is totally undetectable at the system scale, so there's nothing abnormal
there. Example :

  16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0
  16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0
  16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0
  16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0
  16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0
  16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0

HAProxy is a TCP proxy, not a router. It deals with established connections
that have been validated by the kernel, and not with packets of any form nor
with sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their
existence may prevent it from binding a port. It relies on the system to
accept incoming connections and to initiate outgoing connections. An immediate
effect of this is that there is no relation between packets observed on the
two sides of a forwarded connection, which can be of different sizes, numbers
and even address families. Since a connection may only be accepted from a
socket in LISTEN state, all the sockets it is listening to are necessarily
visible using the "netstat" utility to show listening sockets. Example :

  # netstat -ltnp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
  tcp        0      0 0.0.0.0:22     0.0.0.0:*        LISTEN  1629/sshd
  tcp        0      0 0.0.0.0:80     0.0.0.0:*        LISTEN  2847/haproxy
  tcp        0      0 0.0.0.0:443    0.0.0.0:*        LISTEN  2847/haproxy

3. Starting HAProxy
-------------------

HAProxy is started by invoking the "haproxy" program with a number of
arguments passed on the command line. The actual syntax is :

  $ haproxy [<options>]*

where [<options>]* is any number of options. An option always starts with '-'
followed by one or more letters, and possibly followed by one or multiple
extra arguments. Without any option, HAProxy displays the help page with a
reminder about supported options. Available options may vary slightly based on
the operating system. A fair number of these options overlap with an
equivalent one in the "global" section. In this case, the command line always
has precedence over the configuration file, so that the command line can be
used to quickly enforce some settings without touching the configuration
files. The current list of options is :

  -- <cfgfile>* : all the arguments following "--" are paths to configuration
    files/directories to be loaded and processed in the declaration order. It
    is mostly useful when relying on the shell to load many files that are
    numerically ordered. See also "-f". The difference between "--" and "-f"
    is that one "-f" must be placed before each file name, while a single "--"
    is needed before all file names. Both options can be used together, the
    command line ordering still applies. When more than one file is specified,
    each file must start on a section boundary, so the first keyword of each
    file must be one of "global", "defaults", "peers", "listen", "frontend",
    "backend", and so on. A file cannot contain just a server list for
    example.

  -f <cfgfile|cfgdir> : adds <cfgfile> to the list of configuration files to
    be loaded. If <cfgdir> is a directory, all the files (and only files) it
    contains are added in lexical order (using LC_COLLATE=C) to the list of
    configuration files to be loaded ; only files with a ".cfg" extension are
    added, and only non-hidden files (not prefixed with ".") are added.
    Configuration files are loaded and processed in their declaration order.
    This option may be specified multiple times to load multiple files. See
    also "--". The difference between "--" and "-f" is that one "-f" must be
    placed before each file name, while a single "--" is needed before all
    file names. Both options can be used together, the command line ordering
    still applies. When more than one file is specified, each file must start
    on a section boundary, so the first keyword of each file must be one of
    "global", "defaults", "peers", "listen", "frontend", "backend", and so
    on. A file cannot contain just a server list for example.
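    The directory selection rules above can be reproduced in the shell to
    predict which files haproxy would load. This is an illustrative sketch
    (the helper name is made up), not something haproxy itself provides:

```shell
# Sketch: list the files haproxy would load from a config directory,
# mimicking the documented rules: "*.cfg" only, non-hidden files only,
# lexical order under LC_COLLATE=C.
list_cfg_files() {
    dir="$1"
    # a glob does not match names starting with ".", matching the rule
    ( cd "$dir" && for f in *.cfg; do
          [ -f "$f" ] && printf '%s\n' "$f"
      done ) | LC_COLLATE=C sort
}
```

    For instance, "list_cfg_files /etc/haproxy/conf.d" prints the order that
    "-f /etc/haproxy/conf.d" would use.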

  -C <dir> : changes to directory <dir> before loading configuration files.
    This is useful when using relative paths. Beware when using wildcards
    after "--", as they are in fact expanded by the shell before haproxy
    starts, hence relative to the shell's current directory, not to <dir>.
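    The wildcard caveat can be observed without haproxy at all, since the
    expansion happens entirely in the shell (the paths below are illustrative):

```shell
# Sketch: "*.cfg" is expanded by the shell against its OWN current directory
# before haproxy ever sees the arguments, regardless of -C <dir>.
d=$(mktemp -d)            # stand-in for the shell's working directory
cd "$d"
touch a.cfg b.cfg
# echo shows the argument vector haproxy would actually receive:
echo haproxy -C /etc/haproxy -- *.cfg
# -> haproxy -C /etc/haproxy -- a.cfg b.cfg
```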

  -D : start as a daemon. The process detaches from the current terminal
    after forking, and errors are no longer reported to the terminal. It is
    equivalent to the "daemon" keyword in the "global" section of the
    configuration. It is recommended to always force it in any init script so
    that a faulty configuration doesn't prevent the system from booting.

  -L <name> : change the local peer name to <name>, which defaults to the
    local hostname. This is used only with peers replication. The variable
    $HAPROXY_LOCALPEER may be used in the configuration file to reference the
    peer name.

  -N <limit> : sets the default per-proxy maxconn to <limit> instead of the
    builtin default value (usually 2000). Only useful for debugging.

  -V : enable verbose mode (disables quiet mode). Reverts the effect of "-q"
    or "quiet".

  -W : master-worker mode. It is equivalent to the "master-worker" keyword in
    the "global" section of the configuration. This mode will launch a
    "master" which will monitor the "workers". Using this mode, you can
    reload HAProxy directly by sending a SIGUSR2 signal to the master. The
    master-worker mode is compatible with both the foreground and daemon
    modes. It is recommended to use this mode with multiprocess and systemd.

  -Ws : master-worker mode with support of the `notify` type of systemd
    service. This option is only available when HAProxy was built with the
    `USE_SYSTEMD` build option enabled.
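    As an illustration, a minimal systemd unit using -Ws could look like the
    sketch below. The paths and directives shown are assumptions for the
    example; distributions ship their own tuned unit file:

```ini
# /etc/systemd/system/haproxy.service (illustrative sketch)
[Unit]
Description=HAProxy Load Balancer
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
```

    "Type=notify" matches the readiness notification that "-Ws" enables, and
    "ExecReload" relies on the SIGUSR2 reload described for "-W".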

  -c : only performs a check of the configuration files and exits before
    trying to bind. The exit status is zero if everything is OK, or non-zero
    if an error is encountered. The presence of warnings, if any, will be
    reported.

  -cc : evaluates a condition as used within a conditional block of the
    configuration. The exit status is zero if the condition is true, 1 if the
    condition is false or 2 if an error is encountered.

  -d : enable debug mode. This disables daemon mode, forces the process to
    stay in the foreground and to show incoming and outgoing events. It must
    never be used in an init script.

  -dC[key] : dump the configuration file. It is performed after the lines are
    tokenized, so comments are stripped and indenting is forced. If a
    non-zero key is specified, lines are truncated before
    sensitive/confidential fields, and identifiers and addresses are emitted
    hashed with this key using the same algorithm as the one used by the
    anonymized mode on the CLI. This means that the output may safely be
    shared with a developer who needs it to figure out what's happening in a
    dump that was anonymized using the same key. Please also see the CLI's
    "set anon" command.

  -dD : enable diagnostic mode. This mode will output extra warnings about
    suspicious configuration statements. It will never prevent startup, even
    in "zero-warning" mode, nor change the exit status code.

  -dF : disable data fast-forwarding, a mechanism that optimizes data
    forwarding by passing data directly from one side to the other without
    waking the stream up. Note that this also disables any kernel TCP
    splicing. This command is not meant for regular use; it will generally
    only be suggested by developers during complex debugging sessions.

  -dG : disable use of getaddrinfo() to resolve host names into addresses. It
    can be used when suspecting that getaddrinfo() doesn't work as expected.
    This option was made available because many bogus implementations of
    getaddrinfo() exist on various systems and cause anomalies that are
    difficult to troubleshoot.

  -dK<class[,class]*> : dumps the list of registered keywords in each class.
    The list of classes is available with "-dKhelp". All classes may be
    dumped using "-dKall", otherwise a selection of those shown in the help
    can be specified as a comma-delimited list. The output format will vary
    depending on what class of keywords is being dumped (e.g. "cfg" will show
    the known configuration keywords in a format resembling the config file
    format while "smp" will show sample fetch functions prefixed with a
    compatibility matrix with each rule set). These may rarely be used as-is
    by humans but can be of great help for external tools that try to detect
    the appearance of new keywords at certain places to automatically update
    some documentation, syntax highlighting files, configuration parsers,
    API etc. The output format may evolve a bit over time so it is really
    recommended to use this output mostly to detect differences with previous
    archives. Note that not all keywords are listed because many keywords
    have existed long before the different keyword registration subsystems
    were created, and they do not appear there. However since new keywords
    are only added via the modern mechanisms, it's reasonably safe to assume
    that this output may be used to detect language additions with a good
    accuracy. The keywords are only dumped after the configuration is fully
    parsed, so that even dynamically created keywords can be dumped. A good
    way to dump and exit is to run a silent config check on an existing
    configuration:

      ./haproxy -dKall -q -c -f foo.cfg

    If no configuration file is available, using "-f /dev/null" will work as
    well to dump all default keywords, but then the return status will not be
    zero since there will be no listener, and will have to be ignored.

  -dL : dumps the list of dynamic shared libraries that are loaded at the end
    of the config processing. This will generally also include deep
    dependencies such as anything loaded from Lua code for example, as well
    as the executable itself. The list is printed in a format that ought to
    be easy enough to sanitize to directly produce a tarball of all
    dependencies. Since it doesn't stop the program's startup, it is
    recommended to only use it in combination with "-c" and "-q" where only
    the list of loaded objects will be displayed (or nothing in case of
    error). In addition, keep in mind that when providing such a package to
    help with a core file analysis, most libraries are in fact symbolic links
    that need to be dereferenced when creating the archive:

      ./haproxy -W -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz

    When started in verbose mode (-V) the shared libraries' address ranges
    are also enumerated, unless the quiet mode is in use (-q).

  -dM[<byte>[,]][help|options,...] : forces memory poisoning, and/or changes
    other memory debugging options. Memory poisoning means that each and
    every memory region allocated with malloc() or pool_alloc() will be
    filled with <byte> before being passed to the caller. When <byte> is not
    specified, it defaults to 0x50 ('P'). While this slightly slows down
    operations, it is useful to reliably trigger issues resulting from
    missing initializations in the code that cause random crashes. Note that
    -dM0 has the effect of turning any malloc() into a calloc(). In any case
    if a bug appears or disappears when using this option it means there is a
    bug in haproxy, so please report it. A number of other options are
    available either alone or after a comma following the byte. The special
    option "help" will list the currently supported options and their current
    value. Each debugging option may be forced on or off. The most optimal
    options are usually chosen at build time based on the operating system
    and do not need to be adjusted, unless suggested by a developer.
    Supported debugging options include (set/clear):

      - fail / no-fail:
        This enables randomly failing memory allocations, in conjunction with
        the global "tune.fail-alloc" setting. This is used to detect missing
        error checks in the code. Setting the option presets the ratio to a
        1% failure rate.

      - no-merge / merge:
        By default, pools of very similar sizes are merged, resulting in more
        efficiency, but this complicates the analysis of certain memory
        dumps. This option disables this mechanism, and may slightly increase
        the memory usage.

      - cold-first / hot-first:
        In order to optimize the CPU cache hit ratio, by default the most
        recently released objects ("hot") are recycled for new allocations.
        But doing so also complicates analysis of memory dumps and may hide
        use-after-free bugs. This option instead picks the coldest objects
        first, which may result in a slight increase of CPU usage.

      - integrity / no-integrity:
        When this option is enabled, memory integrity checks are enabled on
        the allocated area to verify that it hasn't been modified since it
        was last released. This works best with "no-merge", "cold-first" and
        "tag". Enabling this option will slightly increase the CPU usage.

      - no-global / global:
        Depending on the operating system, a process-wide global memory cache
        may be enabled if it is estimated that the standard allocator is too
        slow or inefficient with threads. This option allows it to be
        forcefully disabled or enabled. Disabling it may result in a CPU
        usage increase with inefficient allocators. Enabling it may result in
        a higher memory usage with efficient allocators.

      - no-cache / cache:
        Each thread uses a very fast local object cache for allocations,
        which is always enabled by default. This option disables it. Since
        the global cache also passes via the local caches, this will
        effectively result in disabling all caches and allocating directly
        from the default allocator. This may result in a significant increase
        of CPU usage, but may also result in small memory savings on tiny
        systems.

      - caller / no-caller:
        Enabling this option reserves some extra space in each allocated
        object to store the address of the last caller that allocated or
        released it. This helps developers go back in time when analysing
        memory dumps and to guess how something unexpected happened.

      - tag / no-tag:
        Enabling this option reserves some extra space in each allocated
        object to store a tag that allows detection of bugs such as
        double-free, freeing an invalid object, and buffer overflows. It
        offers much stronger reliability guarantees at the expense of 4 or 8
        extra bytes per allocation. It usually is the first step to detect
        memory corruption.

      - poison / no-poison:
        Enabling this option will fill allocated objects with a fixed pattern
        that will make sure that some accidental values such as 0 will not be
        present if a newly added field was mistakenly forgotten in an
        initialization routine. Such bugs tend to rarely reproduce,
        especially when pools are not merged. This is normally enabled by
        directly passing the byte's value to -dM, but using this option
        allows one to disable/enable use of a previously set value.

  -dS : disable use of the splice() system call. It is equivalent to the
    "global" section's "nosplice" keyword. This may be used when splice() is
    suspected to behave improperly or to cause performance issues, or when
    using strace to see the forwarded data (which do not appear when using
    splice()).

  -dV : disable SSL verify on the server side. It is equivalent to having
    "ssl-server-verify none" in the "global" section. This is useful when
    trying to reproduce production issues out of the production environment.
    Never use this in an init script as it degrades SSL security to the
    servers.

  -dW : if set, haproxy will refuse to start if any warning was emitted while
    processing the configuration. This helps detect subtle mistakes and keep
    the configuration clean and portable across versions. It is recommended
    to set this option in service scripts when configurations are managed by
    humans, but it is recommended not to use it with generated
    configurations, which tend to emit more warnings. It may be combined with
    "-c" to cause warnings in checked configurations to fail. This is
    equivalent to the global option "zero-warning".

  -db : disable background mode and multi-process mode. The process remains
    in the foreground. It is mainly used during development or during small
    tests, as Ctrl-C is enough to stop the process. Never use it in an init
    script.

  -de : disable the use of the "epoll" poller. It is equivalent to the
    "global" section's keyword "noepoll". It is mostly useful when suspecting
    a bug related to this poller. On systems supporting epoll, the fallback
    will generally be the "poll" poller.

  -dk : disable the use of the "kqueue" poller. It is equivalent to the
    "global" section's keyword "nokqueue". It is mostly useful when
    suspecting a bug related to this poller. On systems supporting kqueue,
    the fallback will generally be the "poll" poller.

  -dp : disable the use of the "poll" poller. It is equivalent to the
    "global" section's keyword "nopoll". It is mostly useful when suspecting
    a bug related to this poller. On systems supporting poll, the fallback
    will generally be the "select" poller, which cannot be disabled and is
    limited to 1024 file descriptors.

  -dr : ignore server address resolution failures. It is very common when
    validating a configuration out of production not to have access to the
    same resolvers and to fail on server address resolution, making it
    difficult to test a configuration. This option simply appends the "none"
    method to the list of address resolution methods for all servers,
    ensuring that even if the libc fails to resolve an address, the startup
    sequence is not interrupted.

  -m <limit> : limit the total allocatable memory to <limit> megabytes across
    all processes. This may cause some connection refusals or some slowdowns
    depending on the amount of memory needed for normal operations. This is
    mostly used to force the processes to work in a constrained resource
    usage scenario. It is important to note that the memory is not shared
    between processes, so in a multi-process scenario, this value is first
    divided by global.nbproc before forking.

  -n <limit> : limits the per-process connection limit to <limit>. This is
    equivalent to the global section's keyword "maxconn", and has precedence
    over it. This may be used to quickly force lower limits to avoid a
    service outage on systems where resource limits are too low.

  -p <file> : write all processes' pids into <file> during startup. This is
    equivalent to the "global" section's keyword "pidfile". The file is
    opened before entering the chroot jail, and after doing the chdir()
    implied by "-C". Each pid appears on its own line.

  -q : set "quiet" mode. This disables some messages during the configuration
    parsing and during startup. It can be used in combination with "-c" to
    just check if a configuration file is valid or not.

  -S <bind>[,bind_options...] : in master-worker mode, bind a master CLI,
    which gives access to every process, both running and leaving ones. For
    security reasons, it is recommended to bind the master CLI to a local
    UNIX socket. The bind options are the same as the keyword "bind" in the
    configuration file, with words separated by commas instead of spaces.

    Note that this socket can't be used to retrieve the listening sockets
    from an old process during a seamless reload.

  -sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after
    boot completion to ask them to finish what they are doing and to leave.
    <pid> is a list of pids to signal (one per argument). The list ends on
    any option starting with a "-". It is not a problem if the list of pids
    is empty, so that it can be built on the fly based on the result of a
    command like "pidof" or "pgrep".
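    Since an empty pid list is accepted, a wrapper can assemble the "-sf"
    arguments from a pid file without special-casing the first start. A
    sketch (the helper name is made up):

```shell
# Sketch: build "-sf <pid>*" from a pid file written with "-p".
# Prints nothing when the file is missing or empty (first start).
sf_args() {
    pidfile="$1"
    if [ -s "$pidfile" ]; then
        # one pid per line in the file -> "-sf pid1 pid2 ..."
        printf -- '-sf'
        while read -r pid; do
            printf ' %s' "$pid"
        done < "$pidfile"
        printf '\n'
    fi
}
```

    which could then be used as, for example:
    haproxy -f /etc/haproxy.cfg -D -p /var/run/haproxy.pid \
            $(sf_args /var/run/haproxy.pid)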

  -st <pid>* : send the "terminate" signal (SIGTERM) to older processes after
    boot completion to terminate them immediately without finishing what they
    were doing. <pid> is a list of pids to signal (one per argument). The
    list ends on any option starting with a "-". It is not a problem if the
    list of pids is empty, so that it can be built on the fly based on the
    result of a command like "pidof" or "pgrep".
|
|
|
|
-v : report the version and build date.
|
|
|
|
|
|
|
|
-vv : display the version, build options, libraries versions and usable
|
|
|
|
pollers. This output is systematically requested when filing a bug report.
|
|
|
|
|
2017-04-05 20:50:59 +00:00
|
|
|
-x <unix_socket> : connect to the specified socket and try to retrieve any
|
|
|
|
listening sockets from the old process, and use them instead of trying to
|
|
|
|
bind new ones. This is useful to avoid missing any new connection when
|
2017-05-26 15:42:10 +00:00
|
|
|
reloading the configuration on Linux. The capability must be enable on the
|
|
|
|
stats socket using "expose-fd listeners" in your configuration.
|
2021-11-24 17:45:37 +00:00
|
|
|
In master-worker mode, the master will use this option upon a reload with
|
|
|
|
the "sockpair@" syntax, which allows the master to connect directly to a
|
|
|
|
worker without using stats socket declared in the configuration.
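  Putting it together, a minimal seamless reload might look like this (the
  socket and pid file paths are illustrative, and the "stats socket" line is
  an assumption about your "global" section) :

```shell
# The global section must expose the listeners on the stats socket, eg:
#   stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners
# The new process then fetches the listening FDs from the old one:
haproxy -f /etc/haproxy.cfg -D -p /var/run/haproxy.pid \
        -x /var/run/haproxy.sock -sf $(cat /var/run/haproxy.pid)
```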

A safe way to start HAProxy from an init file consists in forcing the daemon
mode, storing existing pids to a pid file and using this pid file to notify
older processes to finish before leaving :

   haproxy -f /etc/haproxy.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

When the configuration is split into a few specific files (eg: tcp vs http),
it is recommended to use the "-f" option :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

When an unknown number of files is expected, such as customer-specific files,
it is recommended to assign them a name starting with a fixed-size sequence
number and to use "--" to load them, possibly after loading some defaults :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \
           -f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/*

Sometimes a failure to start may happen for whatever reason. Then it is
important to verify if the version of HAProxy you are invoking is the expected
version and if it supports the features you are expecting (eg: SSL, PCRE,
compression, Lua, etc). This can be verified using "haproxy -vv". Some
important information such as certain build options, the target system and
the versions of the libraries being used are reported there. It is also what
you will systematically be asked for when posting a bug report :

  $ haproxy -vv
  HAProxy version 1.6-dev7-a088d3-4 2015/10/08
  Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>

  Build options :
    TARGET  = linux2628
    CPU     = generic
    CC      = gcc
    CFLAGS  = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \
              -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19
    OPTIONS = USE_ZLIB=1 USE_DLMALLOC=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

  Default settings :
    maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200

  Encrypted password support via crypt(3): yes
  Built with zlib version : 1.2.6
  Compression algorithms supported : identity("identity"), deflate("deflate"), \
      raw-deflate("deflate"), gzip("gzip")
  Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  Built with PCRE version : 8.12 2011-01-15
  PCRE library supports JIT : no (USE_PCRE_JIT not set)
  Built with Lua version : Lua 5.3.1
  Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND

  Available polling systems :
        epoll : pref=300,  test result OK
         poll : pref=200,  test result OK
       select : pref=150,  test result OK
  Total: 3 (3 usable), will use epoll.

The relevant information that many non-developer users can verify here is :
  - the version : 1.6-dev7-a088d3-4 above means the code is currently at commit
    ID "a088d3" which is the 4th one after official version "1.6-dev7".
    Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in
    fact "1.6-dev7". This is the 7th development version of what will become
    version 1.6 in the future. A development version is not suitable for use in
    production (unless you know exactly what you are doing). A stable version
    will show as a 3-numbers version, such as "1.5.14-16f863", indicating the
    14th level of fix on top of version 1.5. This is a production-ready version.

  - the release date : 2015/10/08. It is represented in the universal
    year/month/day format. Here this means October 8th, 2015. Given that stable
    releases are issued every few months (1-2 months at the beginning, sometimes
    6 months once the product becomes very stable), if you're seeing an old date
    here, it means you're probably affected by a number of bugs or security
    issues that have since been fixed and that it might be worth checking on the
    official site.

  - build options : they are relevant to people who build their packages
    themselves, they can explain why things are not behaving as expected. For
    example the development version above was built for Linux 2.6.28 or later,
    targeting a generic CPU (no CPU-specific optimizations), and lacks any
    code optimization (-O0) so it will perform poorly in terms of performance.

  - libraries versions : zlib version is reported as found in the library
    itself. In general zlib is considered a very stable product and upgrades
    are almost never needed. OpenSSL reports two versions, the version used at
    build time and the one being used, as found on the system. These ones may
    differ by the last letter but never by the numbers. The build date is also
    reported because most OpenSSL bugs are security issues and need to be taken
    seriously, so this library absolutely needs to be kept up to date. Seeing a
    4-months old version here is highly suspicious and indeed an update was
    missed. PCRE provides very fast regular expressions and is highly
    recommended. Some of its extensions such as JIT are not present in all
    versions and still young so some people prefer not to build with them,
    which is why the build status is reported as well. Regarding the Lua
    scripting language, HAProxy expects version 5.3 which is very young since
    it was released shortly before HAProxy 1.6. It is important to check
    on the Lua web site if some fixes are proposed for this branch.

  - Available polling systems will affect the process's scalability when
    dealing with more than about one thousand concurrent connections. These
    ones are only available when the correct system was indicated in the TARGET
    variable during the build. The "epoll" mechanism is highly recommended on
    Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them
    will result in poll() or even select() being used, causing a high CPU usage
    when dealing with a lot of connections.


4. Stopping and restarting HAProxy
----------------------------------

HAProxy supports a graceful and a hard stop. The hard stop is simple : when the
SIGTERM signal is sent to the haproxy process, it immediately quits and all
established connections are closed. The graceful stop is triggered when the
SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding
from listening ports, but continuing to process existing connections until
they close. Once the last connection is closed, the process leaves.

The hard stop method is used for the "stop" or "restart" actions of the service
management script. The graceful stop is used for the "reload" action which
tries to seamlessly reload a new configuration in a new process.

Both of these signals may be sent by the new haproxy process itself during a
reload or restart, so that they are sent at the latest possible moment and only
if absolutely required. This is what is performed by the "-st" (hard) and "-sf"
(graceful) options respectively.

In master-worker mode, it is not needed to start a new haproxy process in
order to reload the configuration. The master process reacts to the SIGUSR2
signal by reexecuting itself with the -sf parameter followed by the PIDs of
the workers. The master will then parse the configuration file and fork new
workers.
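In that mode, a reload from the shell simply consists in signaling the master
(the pid file path is illustrative) :

```shell
# The master re-executes itself and replaces its workers.
kill -USR2 "$(cat /var/run/haproxy.pid)"
```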

To understand better how these signals are used, it is important to understand
the whole restart mechanism.

First, an existing haproxy process is running. The administrator uses a system
specific command such as "/etc/init.d/haproxy reload" to indicate they want to
take the new configuration file into effect. What happens then is the following.
First, the service script (/etc/init.d/haproxy or equivalent) will verify that
the configuration file parses correctly using "haproxy -c". After that it will
try to start haproxy with this configuration file, using "-st" or "-sf".

Then HAProxy tries to bind to all listening ports. If some fatal errors happen
(eg: address not present on the system, permission denied), the process quits
with an error. If a socket binding fails because a port is already in use, then
the process will first send a SIGTTOU signal to all the pids specified in the
"-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs
all existing haproxy processes to temporarily stop listening to their ports so
that the new process can try to bind again. During this time, the old process
continues to process existing connections. If the binding still fails (because
for example a port is shared with another daemon), then the new process sends a
SIGTTIN signal to the old processes to instruct them to resume operations just
as if nothing happened. The old processes will then restart listening to the
ports and continue to accept connections. Note that this mechanism is system
dependent and some operating systems may not support it in multi-process mode.

If the new process manages to bind correctly to all ports, then it sends either
the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case
of "-sf") to all processes to notify them that it is now in charge of operations
and that the old processes will have to leave, either immediately or once they
have finished their job.

It is important to note that during this timeframe, there are two small windows
of a few milliseconds each where it is possible that a few connection failures
will be noticed during high loads. Typically observed failure rates are around
1 failure during a reload operation every 10000 new connections per second,
which means that a heavily loaded site running at 30000 new connections per
second may see about 3 failed connections upon every reload. The two situations
where this happens are :

  - if the new process fails to bind due to the presence of the old process,
    it will first have to go through the SIGTTOU+SIGTTIN sequence, which
    typically lasts about one millisecond for a few tens of frontends, and
    during which some ports will not be bound to the old process and not yet
    bound to the new one. HAProxy works around this on systems that support the
    SO_REUSEPORT socket option, as it allows the new process to bind without
    first asking the old one to unbind. Most BSD systems have been supporting
    this almost forever. Linux supported this in version 2.0 and dropped it
    around 2.2, but some patches were floating around by then. It was
    reintroduced in kernel 3.9, so if you are observing a connection failure
    rate above the one mentioned above, please ensure that your kernel is 3.9
    or newer, or that relevant patches were backported to your kernel (less
    likely).

  - when the old processes close the listening ports, the kernel may not always
    redistribute any pending connection that was remaining in the socket's
    backlog. Under high loads, a SYN packet may happen just before the socket
    is closed, and will lead to an RST packet being sent to the client. In some
    critical environments where even one drop is not acceptable, these ones are
    sometimes dealt with using firewall rules to block SYN packets during the
    reload, forcing the client to retransmit. This is totally system-dependent,
    as some systems might be able to visit other listening queues and avoid
    this RST. A second case concerns the ACK from the client on a local socket
    that was in SYN_RECV state just before the close. This ACK will lead to an
    RST packet while the haproxy process is still not aware of it. This one is
    harder to get rid of, though the firewall filtering rules mentioned above
    will work well if applied one second or so before restarting the process.
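    As a sketch only, such a firewall-based protection could look like this on
    Linux (the port number and the reload command are illustrative
    assumptions about your setup) :

```shell
# Drop new SYNs so clients retransmit instead of receiving an RST,
# reload haproxy, then remove the rule again.
iptables -I INPUT -p tcp --dport 80 --syn -j DROP
sleep 1
kill -USR2 "$(cat /var/run/haproxy.pid)"
iptables -D INPUT -p tcp --dport 80 --syn -j DROP
```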

For the vast majority of users, such drops will never ever happen since they
don't have enough load to trigger the race conditions. And for most high traffic
users, the failure rate is still well within the noise margin provided that at
least SO_REUSEPORT is properly supported on their systems.


5. File-descriptor limitations
------------------------------

In order to ensure that all incoming connections will successfully be served,
HAProxy computes at load time the total number of file descriptors that will be
needed during the process's life. A regular Unix process is generally granted
1024 file descriptors by default, and a privileged process can raise this limit
itself. This is one reason for starting HAProxy as root and letting it adjust
the limit. The default limit of 1024 file descriptors roughly allows about 500
concurrent connections to be processed. The computation is based on the global
maxconn parameter which limits the total number of connections per process, the
number of listeners, the number of servers which have a health check enabled,
the agent checks, the peers, the loggers and possibly a few other technical
requirements. A simple rough estimate of this number consists in simply
doubling the maxconn value and adding a few tens to get the approximate number
of file descriptors needed.
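This rough estimate can be sketched as follows (the margin of 50 is an
arbitrary assumption covering listeners, checks, peers and loggers) :

```shell
# About two file descriptors per connection (client side + server side),
# plus a small fixed margin for listeners, checks and loggers.
maxconn=2000
fd_estimate=$((2 * maxconn + 50))
echo "$fd_estimate"    # prints 4050
```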

Originally HAProxy did not know how to compute this value, and it was necessary
to pass the value using the "ulimit-n" setting in the global section. This
explains why even today a lot of configurations are seen with this setting
present. Unfortunately it was often miscalculated resulting in connection
failures when approaching maxconn instead of throttling incoming connections
while waiting for the needed resources. For this reason it is important to
remove any vestigial "ulimit-n" setting that can remain from very old versions.

Raising the number of file descriptors to accept even moderate loads is
mandatory but comes with some OS-specific adjustments. First, the select()
polling system is limited to 1024 file descriptors. In fact on Linux it used
to be capable of handling more but since certain OSes ship with excessively
restrictive SELinux policies forbidding the use of select() with more than
1024 file descriptors, HAProxy now refuses to start in this case in order to
avoid any issue at run time. On all supported operating systems, poll() is
available and will not suffer from this limitation. It is automatically picked
so there is nothing to do to get a working configuration. But poll() becomes
very slow when the number of file descriptors increases. While HAProxy does its
best to limit this performance impact (eg: via the use of the internal file
descriptor cache and batched processing), a good rule of thumb is that using
poll() with more than a thousand concurrent connections will use a lot of CPU.

For Linux systems based on kernels 2.6 and above, the epoll() system call will
be used. It's a much more scalable mechanism relying on callbacks in the kernel
that guarantee a constant wake up time regardless of the number of registered
monitored file descriptors. It is automatically used where detected, provided
that HAProxy had been built for one of the Linux flavors. Its presence and
support can be verified using "haproxy -vv".

For BSD systems which support it, kqueue() is available as an alternative. It
is much faster than poll() and even slightly faster than epoll() thanks to its
batched handling of changes. At least FreeBSD and OpenBSD support it. Just like
with Linux's epoll(), its support and availability are reported in the output
of "haproxy -vv".

Having a good poller is one thing, but it is mandatory that the process can
reach the limits. When HAProxy starts, it immediately sets the new process's
file descriptor limits and verifies if it succeeds. In case of failure, it
reports it before forking so that the administrator can see the problem. As
long as the process is started as root, there should be no reason for this
setting to fail. However, it can fail if the process is started by an
unprivileged user. If there is a compelling reason for *not* starting haproxy
as root (eg: started by end users, or by a per-application account), then the
file descriptor limit can be raised by the system administrator for this
specific user. The effectiveness of the setting can be verified by issuing
"ulimit -n" from the user's command line. It should reflect the new limit.

Warning: when an unprivileged user's limits are changed in this user's account,
it is fairly common that these values are only considered when the user logs in
and not at all in some scripts run at system boot time nor in crontabs. This is
totally dependent on the operating system, so keep in mind to check "ulimit -n"
before starting haproxy when running this way. The general advice is never to
start haproxy as an unprivileged user for production purposes. Another good
reason is that it prevents haproxy from enabling some security protections.

Once it is certain that the system will allow the haproxy process to use the
requested number of file descriptors, two new system-specific limits may be
encountered. The first one is the system-wide file descriptor limit, which is
the total number of file descriptors opened on the system, covering all
processes. When this limit is reached, accept() or socket() will typically
return ENFILE. The second one is the per-process hard limit on the number of
file descriptors, it prevents setrlimit() from being set higher. Both are very
dependent on the operating system. On Linux, the system limit is set at boot
based on the amount of memory. It can be changed with the "fs.file-max" sysctl.
And the per-process hard limit is set to 1048576 by default, but it can be
changed using the "fs.nr_open" sysctl.
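These limits can be inspected and raised like this (the values shown are
illustrative) :

```shell
# Inspect the current system-wide and per-process hard limits.
sysctl fs.file-max fs.nr_open
# Raise them on the running system; persist in /etc/sysctl.conf if needed.
sysctl -w fs.file-max=2097152
sysctl -w fs.nr_open=2097152
```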

File descriptor limitations may be observed on a running process when they are
set too low. The strace utility will report that accept() and socket() return
"-1 EMFILE" when the process's limits have been reached. In this case, simply
raising the "ulimit-n" value (or removing it) will solve the problem. If these
system calls return "-1 ENFILE" then it means that the kernel's limits have
been reached and that something must be done on a system-wide parameter. These
troubles must absolutely be addressed, as they result in high CPU usage (when
accept() fails) and failed connections that are generally visible to the user.
One solution also consists in lowering the global maxconn value to enforce
serialization, and possibly to disable HTTP keep-alive to force connections
to be released and reused faster.
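The failing system calls can be observed on a live process like this (the pid
lookup is illustrative) :

```shell
# "-1 EMFILE" means the process limit was hit, "-1 ENFILE" the system-wide one.
strace -tt -e trace=socket,accept4 -p "$(pidof haproxy)"
```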


6. Memory management
--------------------

HAProxy uses a simple and fast pool-based memory management. Since it relies on
a small number of different object types, it's much more efficient to pick new
objects from a pool which already contains objects of the appropriate size than
to call malloc() for each different size. The pools are organized as a stack or
LIFO, so that newly allocated objects are taken from recently released objects
still hot in the CPU caches. Pools of similar sizes are merged together, in
order to limit memory fragmentation.

By default, since the focus is set on performance, each released object is put
back into the pool it came from, and allocated objects are never freed since
they are expected to be reused very soon.

On the CLI, it is possible to check how memory is being used in pools thanks to
the "show pools" command :

  > show pools
  Dumping pools usage. Use SIGQUIT to flush them.
    - Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccc40=03 [SHARED]
    - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users, @0x9ccac0=00 [SHARED]
    - Pool comp_state (48 bytes) : 3 allocated (144 bytes), 3 used, 0 failures, 5 users, @0x9cccc0=04 [SHARED]
    - Pool filter (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 3 users, @0x9ccbc0=02 [SHARED]
    - Pool vars (80 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccb40=01 [SHARED]
    - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9cd240=15 [SHARED]
    - Pool task (144 bytes) : 55 allocated (7920 bytes), 55 used, 0 failures, 1 users, @0x9cd040=11 [SHARED]
    - Pool session (160 bytes) : 1 allocated (160 bytes), 1 used, 0 failures, 1 users, @0x9cd140=13 [SHARED]
    - Pool h2s (208 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccec0=08 [SHARED]
    - Pool h2c (288 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cce40=07 [SHARED]
    - Pool spoe_ctx (304 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccf40=09 [SHARED]
    - Pool connection (400 bytes) : 2 allocated (800 bytes), 2 used, 0 failures, 1 users, @0x9cd1c0=14 [SHARED]
    - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd340=17 [SHARED]
    - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccdc0=06 [SHARED]
    - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccd40=05 [SHARED]
    - Pool stream (960 bytes) : 1 allocated (960 bytes), 1 used, 0 failures, 1 users, @0x9cd0c0=12 [SHARED]
    - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd2c0=16 [SHARED]
    - Pool buffer (8030 bytes) : 3 allocated (24090 bytes), 2 used, 0 failures, 1 users, @0x9cd3c0=18 [SHARED]
    - Pool trash (8062 bytes) : 1 allocated (8062 bytes), 1 used, 0 failures, 1 users, @0x9cd440=19
  Total: 19 pools, 42296 bytes allocated, 34266 used.

The pool name is only indicative, it's the name of the first object type using
this pool. The size in parentheses is the object size for objects in this pool.
Object sizes are always rounded up to the closest multiple of 16 bytes. The
number of objects currently allocated and the equivalent number of bytes is
reported so that it is easy to know which pool is responsible for the highest
memory usage. The number of objects currently in use is reported as well in the
"used" field. The difference between "allocated" and "used" corresponds to the
objects that have been freed and are available for immediate use. The address
at the end of the line is the pool's address, and the following number is the
pool index when it exists, or is reported as -1 if no index was assigned.

It is possible to limit the amount of memory allocated per process using the
"-m" command line option, followed by a number of megabytes. It covers all of
the process's addressable space, so that includes memory used by some libraries
as well as the stack, but it is a reliable limit when building a resource
constrained system. It works the same way as "ulimit -v" on systems which have
it, or "ulimit -d" for the other ones.

If a memory allocation fails due to the memory limit being reached or because
the system doesn't have enough memory, then haproxy will first start to
free all available objects from all pools before attempting to allocate memory
again. This mechanism of releasing unused memory can be triggered by sending
the signal SIGQUIT to the haproxy process. When doing so, the pools state prior
to the flush will also be reported to stderr when the process runs in the
foreground.
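For example (the pid file path is illustrative) :

```shell
# Ask haproxy to flush its pools; in foreground mode the pools state
# prior to the flush is dumped to stderr.
kill -QUIT "$(cat /var/run/haproxy.pid)"
```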

During a reload operation, the process switched to the graceful stop state also
automatically performs some flushes after releasing any connection so that all
possible memory is released to save it for the new process.


7. CPU usage
------------

HAProxy normally spends most of its time in the system and a smaller part in
userland. A finely tuned 3.5 GHz CPU can sustain a rate of about 80000
end-to-end connection setups and closes per second at 100% CPU on a single
core. When one core is saturated, typical figures are :
  - 95% system, 5% user for long TCP connections or large HTTP objects
  - 85% system and 15% user for short TCP connections or small HTTP objects in
    close mode
  - 70% system and 30% user for small HTTP objects in keep-alive mode

The amount of rules processing and regular expressions will increase the user
land part. The presence of firewall rules, connection tracking, complex routing
tables in the system will instead increase the system part.

On most systems, the CPU time observed during network transfers can be cut in 4
parts :
  - the interrupt part, which concerns all the processing performed upon I/O
    receipt, before the target process is even known. Typically Rx packets are
    accounted for in interrupt. On some systems such as Linux where interrupt
    processing may be deferred to a dedicated thread, it can appear as softirq,
    and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of
    this load is generally defined by the hardware settings, though in the case
    of softirq it is often possible to remap the processing to another CPU.
    This interrupt part will often be perceived as parasitic since it's not
    associated with any process, but it actually is some processing being done
    to prepare the work for the process.

  - the system part, which concerns all the processing done using kernel code
    called from userland. System calls are accounted as system for example. All
    synchronously delivered Tx packets will be accounted for as system time. If
    some packets have to be deferred due to queues filling up, they may then be
    processed in interrupt context later (eg: upon receipt of an ACK opening a
    TCP window).

  - the user part, which exclusively runs application code in userland. HAProxy
    runs exclusively in this part, though it makes heavy use of system calls.
    Rules processing, regular expressions, compression, encryption all add to
    the user portion of CPU consumption.

  - the idle part, which is what the CPU does when there is nothing to do. For
    example HAProxy waits for an incoming connection, or waits for some data to
    leave, meaning the system is waiting for an ACK from the client to push
    these data.
|
|
|
|
|
|
|
|
In practice regarding HAProxy's activity, it is in general reasonably accurate
|
|
|
|
(but totally inexact) to consider that interrupt/softirq are caused by Rx
|
|
|
|
processing in kernel drivers, that user-land is caused by layer 7 processing
|
|
|
|
in HAProxy, and that system time is caused by network processing on the Tx
|
|
|
|
path.
|
|
|
|
|
|
|
|
Since HAProxy runs around an event loop, it waits for new events using poll()
(or any alternative) and processes all these events as fast as possible before
going back to poll() to wait for new events. It measures the time spent
waiting in poll() compared to the time spent processing events. The ratio of
polling time vs total time is called the "idle" time, it's the amount of time
spent waiting for something to happen. This ratio is reported on the stats
page on the "idle" line, or as "Idle_pct" on the CLI. When it's close to 100%,
it means the load is extremely low. When it's close to 0%, it means that there
is constantly some activity. While it cannot be very accurate on an overloaded
system due to other processes possibly preempting the CPU from the haproxy
process, it still provides a good estimate about how HAProxy considers it is
working : if the load is low and the idle ratio is low as well, it may indicate
that HAProxy has a lot of work to do, possibly due to very expensive rules that
have to be processed. Conversely, if HAProxy indicates the idle is close to
100% while things are slow, it means that it cannot do anything to speed things
up because it is already waiting for incoming data to process. In the example
below, haproxy is completely idle :

  $ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle
  Idle_pct: 100

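This value is easy to consume from scripts. The sketch below (not part of
HAProxy) extracts Idle_pct from a captured "show info" dump with awk and
derives a "busy" percentage; against a live process the sample variable would
simply be replaced by the socat call shown above :

```shell
# Hypothetical sample of "show info" output; in practice replace this
# variable with: echo "show info" | socat - /var/run/haproxy.sock
show_info='Name: HAProxy
Version: 2.8.0
Idle_pct: 37'

# Extract the idle percentage and derive the busy percentage from it.
idle=$(printf '%s\n' "$show_info" | awk -F': ' '/^Idle_pct:/ {print $2}')
echo "idle=${idle}% busy=$((100 - idle))%"
```

A value of busy close to 100% would call for the tuning steps described below.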
When the idle ratio starts to become very low, it is important to tune the
system and place processes and interrupts correctly to save the most possible
CPU resources for all tasks. If a firewall is present, it may be worth trying
to disable it or to tune it to ensure it is not responsible for a large part
of the performance limitation. It's worth noting that unloading a stateful
firewall generally reduces both the amount of interrupt/softirq and of system
usage since such firewalls act both on the Rx and the Tx paths. On Linux,
unloading the nf_conntrack and ip_conntrack modules will show whether there is
anything to gain. If so, then the module runs with default settings and you'll
have to figure out how to tune it for better performance. In general this
consists in considerably increasing the hash table size. On FreeBSD,
"pfctl -d" will disable the "pf" firewall and its stateful engine at the same
time.

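If the conntrack module must stay loaded, the usual tuning is to enlarge its
hash table and entry limit. A hypothetical sysctl fragment is shown below; the
exact knob names and their writability vary with the kernel version (older
kernels only expose the bucket count through the nf_conntrack "hashsize"
module parameter), and the values must be sized to the available memory :

```
  # /etc/sysctl.d/ fragment -- illustrative values only; size them to the
  # available memory and the expected number of concurrent flows.
  net.netfilter.nf_conntrack_max = 1048576
  net.netfilter.nf_conntrack_buckets = 262144
```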
If it is observed that a lot of time is spent in interrupt/softirq, it is
important to ensure that they don't run on the same CPU. Most systems tend to
pin the tasks on the CPU where they receive the network traffic because for
certain workloads it improves things. But with heavily network-bound workloads
it is the opposite as the haproxy process will have to fight against its kernel
counterpart. Pinning haproxy to one CPU core and the interrupts to another one,
all sharing the same L3 cache, tends to noticeably increase network performance
because in practice the amount of work for haproxy and the network stack are
quite close, so they can almost fill an entire CPU each. On Linux this is done
using taskset (for haproxy) or using cpu-map (from the haproxy config), and the
interrupts are assigned under /proc/irq. Many network interfaces support
multiple queues and multiple interrupts. In general it helps to spread them
across a small number of CPU cores provided they all share the same L3 cache.
Please always stop irqbalance, which always does the worst possible thing on
such workloads.

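As a sketch of this pinning (the core numbers are purely illustrative and must
be adapted to the machine's topology so that haproxy and the NIC interrupts
share the same L3 cache) :

```
  global
      # pin process 1 to CPU core 0; the NIC interrupts would then be pinned
      # to a neighbouring core sharing the same L3 cache, for example with:
      #   echo 2 > /proc/irq/<nic-irq>/smp_affinity    (mask for CPU 1)
      cpu-map 1 0
```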
For CPU-bound workloads consisting in a lot of SSL traffic or a lot of
compression, it may be worth using multiple processes dedicated to certain
tasks, though there is no universal rule here and experimentation will have to
be performed.

In order to increase the CPU capacity, it is possible to make HAProxy run as
several processes, using the "nbproc" directive in the global section. There
are some limitations though :
  - health checks are run per process, so the target servers will get as many
    checks as there are running processes ;
  - maxconn values and queues are per-process so the correct value must be set
    to avoid overloading the servers ;
  - outgoing connections should avoid using port ranges to avoid conflicts ;
  - stick-tables are per process and are not shared between processes ;
  - each peers section may only run on a single process at a time ;
  - the CLI operations will only act on a single process at a time.

With this in mind, it appears that the easiest setup often consists in having
a first layer running on multiple processes and in charge of the heavy
processing, passing the traffic to a second layer running in a single process.
This mechanism is suited to SSL and compression which are the two CPU-heavy
features. Instances can easily be chained over UNIX sockets (which are cheaper
than TCP sockets and which do not waste ports), and the proxy protocol which is
useful to pass client information to the next stage. When doing so, it is
generally a good idea to bind all the single-process tasks to process number 1
and extra tasks to next processes, as this will make it easier to generate
similar configurations for different machines.

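Such a chained setup could look like the hypothetical sketch below, where an
SSL layer spread over processes 2 to 4 passes decrypted traffic over a UNIX
socket to an HTTP layer bound to process 1, with the proxy protocol preserving
client addresses. All names and paths are illustrative only :

```
  global
      nbproc 4

  frontend ssl-layer
      mode tcp
      bind :443 ssl crt /etc/haproxy/site.pem process 2-4
      default_backend pass-to-http

  backend pass-to-http
      mode tcp
      server http1 unix@/var/run/haproxy-http.sock send-proxy

  frontend http-layer
      mode http
      bind unix@/var/run/haproxy-http.sock accept-proxy process 1
      default_backend app
```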
On Linux versions 3.9 and above, running HAProxy in multi-process mode is much
more efficient when each process uses a distinct listening socket on the same
IP:port ; this will make the kernel evenly distribute the load across all
processes instead of waking them all up. Please check the "process" option of
the "bind" keyword lines in the configuration manual for more information.


8. Logging
----------

For logging, HAProxy always relies on a syslog server since it does not perform
any file-system access. The standard way of using it is to send logs over UDP
to the log server (by default on port 514). Very commonly this is configured to
127.0.0.1 where the local syslog daemon is running, but it's also used over the
network to log to a central server. The central server provides additional
benefits especially in active-active scenarios where it is desirable to keep
the logs merged in arrival order. HAProxy may also make use of a UNIX socket to
send its logs to the local syslog daemon, but it is not recommended at all,
because if the syslog server is restarted while haproxy runs, the socket will
be replaced and new logs will be lost. Since HAProxy will be isolated inside a
chroot jail, it will not have the ability to reconnect to the new socket. It
has also been observed in the field that the log buffers in use on UNIX sockets
are very small and lead to lost messages even at very light loads. This can be
fine for testing, however.

It is recommended to add the following directive to the "global" section to
make HAProxy log to the local daemon using facility "local0" :

      log 127.0.0.1:514 local0

and then to add the following one to each "defaults" section or to each
frontend and backend section :

      log global

This way, all logs will be centralized through the global definition of where
the log server is.

Some syslog daemons do not listen to UDP traffic by default, so depending on
the daemon being used, the syntax to enable this will vary :

  - on sysklogd, you need to pass argument "-r" on the daemon's command line
    so that it listens to a UDP socket for "remote" logs ; note that there is
    no way to limit it to address 127.0.0.1 so it will also receive logs from
    remote systems ;

  - on rsyslogd, the following lines must be added to the configuration file :

      $ModLoad imudp
      $UDPServerAddress *
      $UDPServerRun 514

  - on syslog-ng, a new source can be created the following way, it then needs
    to be added as a valid source in one of the "log" directives :

      source s_udp {
        udp(ip(127.0.0.1) port(514));
      };

Please consult your syslog daemon's manual for more information. If no logs are
seen in the system's log files, please consider the following tests :

  - restart haproxy. Each frontend and backend logs one line indicating it's
    starting. If these logs are received, it means logs are working.

  - run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some
    activity that you expect to be logged. You should see the log messages
    being sent using sendmsg() there. If they don't appear, restart using
    strace on top of haproxy. If you still see no logs, it definitely means
    that something is wrong in your configuration.

  - run tcpdump to watch for port 514, for example on the loopback interface
    if the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If
    the packets are seen there, it proves they are sent, and the syslog daemon
    then needs to be investigated.

While traffic logs are sent from the frontends (where the incoming connections
are accepted), backends also need to be able to send logs in order to report a
server state change consecutive to a health check. Please consult HAProxy's
configuration manual for more information regarding all possible log settings.

It is convenient to choose a facility that is not used by other daemons.
HAProxy examples often suggest "local0" for traffic logs and "local1" for
admin logs because they're rarely used in the field. A single facility would
be enough as well. Having separate logs is convenient for log analysis, but
it's also important to remember that logs may sometimes convey confidential
information, and as such they must not be mixed with other logs that may
accidentally be handed out to unauthorized people.

For in-field troubleshooting without impacting the server's capacity too much,
it is recommended to make use of the "halog" utility provided with HAProxy.
This is sort of a grep-like utility designed to process HAProxy log files at
a very fast data rate. Typical figures range between 1 and 2 GB of logs per
second. It is capable of extracting only certain logs (eg: search for some
classes of HTTP status codes, connection termination status, search by
response time ranges, look for errors only), counting lines, limiting the
output to a number of lines, and performing some more advanced statistics such
as sorting servers by response time or error counts, sorting URLs by time or
count, sorting client addresses by access count, and so on. It is pretty
convenient to quickly spot anomalies such as a bot looping on the site, and
block them.


9. Statistics and monitoring
----------------------------

It is possible to query HAProxy about its status. The most commonly used
mechanism is the HTTP statistics page. This page also exposes an alternative
CSV output format for monitoring tools. The same format is provided on the
Unix socket.

Statistics are grouped in categories labelled as domains, corresponding to the
multiple components of HAProxy. There are two domains available: proxy and dns.
If not specified, the proxy domain is selected. Note that only the proxy
statistics are printed on the HTTP page.


9.1. CSV format
---------------

The statistics may be consulted either from the unix socket or from the HTTP
page. Both means provide a CSV format whose fields follow. The first line
begins with a sharp ('#') and has one word per comma-delimited field which
represents the title of the column. All other lines starting at the second one
use a classical CSV format using a comma as the delimiter, and the double quote
('"') as an optional text delimiter, but only if the enclosed text is ambiguous
(if it contains a quote or a comma). The double-quote character ('"') in the
text is doubled ('""'), which is the format that most tools recognize. Please
do not insert any column before these ones in order not to break tools which
use hard-coded column positions.

For proxy statistics, after each field name, the types which may have a value
for that field are specified in brackets. The types are L (Listeners), F
(Frontends), B (Backends), and S (Servers). There is a fixed set of static
fields that are always available in the same order. A column containing the
character '-' delimits the end of the static fields, after which the presence
or order of the fields is not guaranteed.

Here is the list of static fields using the proxy statistics domain:
  0. pxname [LFBS]: proxy name
  1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend,
     any name for server/listener)
  2. qcur [..BS]: current queued requests. For the backend this reports the
     number queued without a server assigned.
  3. qmax [..BS]: max value of qcur
  4. scur [LFBS]: current sessions
  5. smax [LFBS]: max sessions
  6. slim [LFBS]: configured session limit
  7. stot [LFBS]: cumulative number of sessions
  8. bin [LFBS]: bytes in
  9. bout [LFBS]: bytes out
 10. dreq [LFB.]: requests denied because of security concerns.
     - For tcp this is because of a matched tcp-request content rule.
     - For http this is because of a matched http-request or tarpit rule.
 11. dresp [LFBS]: responses denied because of security concerns.
     - For http this is because of a matched http-request rule, or
       "option checkcache".
 12. ereq [LF..]: request errors. Some of the possible causes are:
     - early termination from the client, before the request has been sent.
     - read error from the client
     - client timeout
     - client closed connection
     - various bad requests from the client.
     - request was tarpitted.
 13. econ [..BS]: number of requests that encountered an error trying to
     connect to a backend server. The backend stat is the sum of the stat
     for all servers of that backend, plus any connection errors not
     associated with a particular server (such as the backend having no
     active servers).
 14. eresp [..BS]: response errors. srv_abrt will be counted here also.
     Some other errors are:
     - write error on the client socket (won't be counted for the server stat)
     - failure applying filters to the response.
 15. wretr [..BS]: number of times a connection to a server was retried.
 16. wredis [..BS]: number of times a request was redispatched to another
     server. The server value counts the number of times that server was
     switched away from.
 17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)/MAINT(resolution)...)
 18. weight [..BS]: total effective weight (backend), effective weight (server)
 19. act [..BS]: number of active servers (backend), server is active (server)
 20. bck [..BS]: number of backup servers (backend), server is backup (server)
 21. chkfail [...S]: number of failed checks. (Only counts checks failed when
     the server is up.)
 22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts
     transitions to the whole backend being down, rather than the sum of the
     counters for each server.
 23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition
 24. downtime [..BS]: total downtime (in seconds). The value for the backend
     is the downtime for the whole backend, not the sum of the server downtime.
 25. qlimit [...S]: configured maxqueue for the server, or nothing if the
     value is 0 (default, meaning no limit)
 26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
 27. iid [LFBS]: unique proxy id
 28. sid [L..S]: server id (unique inside a proxy)
 29. throttle [...S]: current throttle percentage for the server, when
     slowstart is active, or no value if not in slowstart.
 30. lbtot [..BS]: total number of times a server was selected, either for new
     sessions, or when re-dispatching. The server counter is the number
     of times that server was selected.
 31. tracked [...S]: id of proxy/server if tracking is enabled.
 32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener)
 33. rate [.FBS]: number of sessions per second over the last elapsed second
 34. rate_lim [.F..]: configured limit on new sessions per second
 35. rate_max [.FBS]: max number of new sessions per second
 36. check_status [...S]: status of last health check, one of:
       UNK     -> unknown
       INI     -> initializing
       SOCKERR -> socket error
       L4OK    -> check passed on layer 4, no upper layers testing enabled
       L4TOUT  -> layer 1-4 timeout
       L4CON   -> layer 1-4 connection problem, for example
                  "Connection refused" (tcp rst) or "No route to host" (icmp)
       L6OK    -> check passed on layer 6
       L6TOUT  -> layer 6 (SSL) timeout
       L6RSP   -> layer 6 invalid response - protocol error
       L7OK    -> check passed on layer 7
       L7OKC   -> check conditionally passed on layer 7, for example 404 with
                  disable-on-404
       L7TOUT  -> layer 7 (HTTP/SMTP) timeout
       L7RSP   -> layer 7 invalid response - protocol error
       L7STS   -> layer 7 response error, for example HTTP 5xx
     Notice: If a check is currently running, the last known status will be
     reported, prefixed with "* ". e.g. "* L7OK".
 37. check_code [...S]: layer5-7 code, if available
 38. check_duration [...S]: time in ms taken to finish last health check
 39. hrsp_1xx [.FBS]: http responses with 1xx code
 40. hrsp_2xx [.FBS]: http responses with 2xx code
 41. hrsp_3xx [.FBS]: http responses with 3xx code
 42. hrsp_4xx [.FBS]: http responses with 4xx code
 43. hrsp_5xx [.FBS]: http responses with 5xx code
 44. hrsp_other [.FBS]: http responses with other codes (protocol error)
 45. hanafail [...S]: failed health checks details
 46. req_rate [.F..]: HTTP requests per second over the last elapsed second
 47. req_rate_max [.F..]: max number of HTTP requests per second observed
 48. req_tot [.FB.]: total number of HTTP requests received
 49. cli_abrt [..BS]: number of data transfers aborted by the client
 50. srv_abrt [..BS]: number of data transfers aborted by the server
     (inc. in eresp)
 51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor
 52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor
 53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor
     (CPU/BW limit)
 54. comp_rsp [.FB.]: number of HTTP responses that were compressed
 55. lastsess [..BS]: number of seconds since last session assigned to
     server/backend
 56. last_chk [...S]: last health check contents or textual error
 57. last_agt [...S]: last agent check contents or textual error
 58. qtime [..BS]: the average queue time in ms over the 1024 last requests
 59. ctime [..BS]: the average connect time in ms over the 1024 last requests
 60. rtime [..BS]: the average response time in ms over the 1024 last requests
     (0 for TCP)
 61. ttime [..BS]: the average total session time in ms over the 1024 last
     requests
 62. agent_status [...S]: status of last agent check, one of:
       UNK     -> unknown
       INI     -> initializing
       SOCKERR -> socket error
       L4OK    -> check passed on layer 4, no upper layers testing enabled
       L4TOUT  -> layer 1-4 timeout
       L4CON   -> layer 1-4 connection problem, for example
                  "Connection refused" (tcp rst) or "No route to host" (icmp)
       L7OK    -> agent reported "up"
       L7STS   -> agent reported "fail", "stop", or "down"
 63. agent_code [...S]: numeric code reported by agent if any (unused for now)
 64. agent_duration [...S]: time in ms taken to finish last check
 65. check_desc [...S]: short human-readable description of check_status
 66. agent_desc [...S]: short human-readable description of agent_status
 67. check_rise [...S]: server's "rise" parameter used by checks
 68. check_fall [...S]: server's "fall" parameter used by checks
 69. check_health [...S]: server's health check value between 0 and rise+fall-1
 70. agent_rise [...S]: agent's "rise" parameter, normally 1
 71. agent_fall [...S]: agent's "fall" parameter, normally 1
 72. agent_health [...S]: agent's health parameter, between 0 and rise+fall-1
 73. addr [L..S]: address:port or "unix". IPv6 has brackets around the address.
 74. cookie [..BS]: server's cookie value or backend's cookie name
 75. mode [LFBS]: proxy mode (tcp, http, health, unknown)
 76. algo [..B.]: load balancing algorithm
 77. conn_rate [.F..]: number of connections over the last elapsed second
 78. conn_rate_max [.F..]: highest known conn_rate
 79. conn_tot [.F..]: cumulative number of connections
 80. intercepted [.FB.]: cum. number of intercepted requests (monitor, stats)
 81. dcon [LF..]: requests denied by "tcp-request connection" rules
 82. dses [LF..]: requests denied by "tcp-request session" rules
 83. wrew [LFBS]: cumulative number of failed header rewriting warnings
 84. connect [..BS]: cumulative number of connection establishment attempts
 85. reuse [..BS]: cumulative number of connection reuses
 86. cache_lookups [.FB.]: cumulative number of cache lookups
 87. cache_hits [.FB.]: cumulative number of cache hits
 88. srv_icur [...S]: current number of idle connections available for reuse
 89. srv_ilim [...S]: limit on the number of available idle connections
 90. qtime_max [..BS]: the maximum observed queue time in ms
 91. ctime_max [..BS]: the maximum observed connect time in ms
 92. rtime_max [..BS]: the maximum observed response time in ms (0 for TCP)
 93. ttime_max [..BS]: the maximum observed total session time in ms
 94. eint [LFBS]: cumulative number of internal errors
 95. idle_conn_cur [...S]: current number of unsafe idle connections
 96. safe_conn_cur [...S]: current number of safe idle connections
 97. used_conn_cur [...S]: current number of connections in use
 98. need_conn_est [...S]: estimated needed number of connections
 99. uweight [..BS]: total user weight (backend), server user weight (server)

For all other statistics domains, the presence or the order of the fields are
not guaranteed. In this case, the header line should always be used to parse
the CSV data.
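Following that advice, here is a small sketch (not part of HAProxy) that
locates a column by its header name rather than by position. The inline sample
stands in for a real dump obtained with
`echo "show stat" | socat - /var/run/haproxy.sock` :

```shell
# Minimal CSV sample; a real dump has many more columns and rows.
stat_csv='# pxname,svname,qcur,qmax,scur,
www,FRONTEND,,,42,
www,BACKEND,0,0,40,'

# Find the "scur" column from the header line, then print it per row.
printf '%s\n' "$stat_csv" | awk -F, '
    NR == 1 { sub(/^# /, ""); for (i = 1; i <= NF; i++) if ($i == "scur") c = i; next }
    { print $1 "/" $2 "=" $c }'
```

This prints one "proxy/service=value" line per row and keeps working even if
new columns are inserted after the static ones.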


9.2. Typed output format
------------------------

Both "show info" and "show stat" support a mode where each output value comes
with its type and sufficient information to know how the value is supposed to
be aggregated between processes and how it evolves.

In all cases, the output consists of a single value per line with all the
information split into fields delimited by colons (':').

The first column designates the object or metric being dumped. Its format is
specific to the command producing this output and will not be described in this
section. Usually it will consist in a series of identifiers and field names.

The second column contains 3 characters respectively indicating the origin, the
nature and the scope of the value being reported. The first character (the
origin) indicates where the value was extracted from. Possible characters are :

  M  The value is a metric. It is valid at one instant and may change depending
     on its nature.

  S  The value is a status. It represents a discrete value which by definition
     cannot be aggregated. It may be the status of a server ("UP" or "DOWN"),
     the PID of the process, etc.

  K  The value is a sorting key. It represents an identifier which may be used
     to group some values together because it is unique among its class. All
     internal identifiers are keys. Some names can be listed as keys if they
     are unique (eg: a frontend name is unique). In general keys come from the
     configuration, even though some of them may automatically be assigned. For
     most purposes keys may be considered as equivalent to configuration.

  C  The value comes from the configuration. Certain configuration values make
     sense on the output, for example a concurrent connection limit or a cookie
     name. By definition these values are the same in all processes started
     from the same configuration file.

  P  The value comes from the product itself. There are very few such values,
     most common use is to report the product name, version and release date.
     These elements are also the same between all processes.

The second character (the nature) indicates the nature of the information
carried by the field in order to let an aggregator decide on what operation to
use to aggregate multiple values. Possible characters are :

  A  The value represents an age since a last event. This is a bit different
     from the duration in that an age is automatically computed based on the
     current date. A typical example is how long ago the last session happened
     on a server. Ages are generally aggregated by taking the minimum value
     and do not need to be stored.

  a  The value represents an already averaged value. The average response
     times and server weights are of this nature. Averages can typically be
     averaged between processes.

  C  The value represents a cumulative counter. Such measures perpetually
     increase until they wrap around. Some monitoring protocols need to tell
     the difference between a counter and a gauge to report a different type.
     In general counters may simply be summed since they represent events or
     volumes. Examples of metrics of this nature are connection counts or byte
     counts.

  D  The value represents a duration for a status. There are a few usages of
     this, most of them include the time taken by the last health check and
     the time a server has spent down. Durations are generally not summed,
     most of the time the maximum will be retained to compute an SLA.

  G  The value represents a gauge. It's a measure at one instant. The memory
     usage or the current number of active connections are of this nature.
     Metrics of this type are typically summed during aggregation.

  L  The value represents a limit (generally a configured one). By nature,
     limits are harder to aggregate since they are specific to the point where
     they were retrieved. In certain situations they may be summed or be kept
     separate.

  M  The value represents a maximum. In general it will apply to a gauge and
     keep the highest known value. An example of such a metric could be the
     maximum amount of concurrent connections that was encountered in the
     product's life time. To correctly aggregate maxima, you are supposed to
     output a range going from the maximum of all maxima to the sum of all of
     them. There is indeed no way to know if they were encountered
     simultaneously or not.

  m  The value represents a minimum. In general it will apply to a gauge and
     keep the lowest known value. An example of such a metric could be the
     minimum amount of free memory pools that was encountered in the product's
     life time. To correctly aggregate minima, you are supposed to output a
     range going from the minimum of all minima to the sum of all of them.
     There is indeed no way to know if they were encountered simultaneously
     or not.

  N  The value represents a name, so it is a string. It is used to report
     proxy names, server names and cookie names. Names have configuration or
     keys as their origin and are supposed to be the same among all processes.

  O  The value represents a free text output. Outputs from various commands,
     returns from health checks, node descriptions are of such nature.

  R  The value represents an event rate. It's a measure at one instant. It is
     quite similar to a gauge except that the recipient knows that this measure
     moves slowly and may decide not to keep all values. An example of such a
     metric is the measured amount of connections per second. Metrics of this
     type are typically summed during aggregation.

  T  The value represents a date or time. A field emitting the current date
     would be of this type. The method to aggregate such information is left
     as an implementation choice. For now no field uses this type.

The third character (the scope) indicates what extent the value reflects. Some
elements may be per process while others may be per configuration or per
system. The distinction is important to know whether or not a single value
should be kept during aggregation or if values have to be aggregated. The
following characters are currently supported :

  C  The value is valid for a whole cluster of nodes, which is the set of
     nodes communicating over the peers protocol. An example could be the
     amount of entries present in a stick table that is replicated with other
     peers. At the moment no metric uses this scope.

  P  The value is valid only for the process reporting it. Most metrics use
     this scope.

  S  The value is valid for the whole service, which is the set of processes
     started together from the same configuration file. All metrics originating
     from the configuration use this scope. Some other metrics may use it as
     well for some shared resources (eg: shared SSL cache statistics).

  s  The value is valid for the whole system, such as the system's hostname,
     current date or resource usage. At the moment this scope is not used by
     any metric.

Consumers of this information will generally have enough with these 3
characters to determine how to accurately report aggregated information across
multiple processes.

After this column, the third column indicates the type of the field, among
"s32" (signed 32-bit integer), "s64" (signed 64-bit integer), "u32" (unsigned
32-bit integer), "u64" (unsigned 64-bit integer), "str" (string). It is
important to know the type before parsing the value in order to properly read
it. For example, a string containing only digits is still a string and not an
integer (eg: an error code extracted by a check).

Then the fourth column is the value itself, encoded according to its type.
Strings are dumped as-is immediately after the colon without any leading space.
If a string contains a colon, it will appear normally. This means that the
output should not be exclusively split around colons or some check outputs
or server addresses might be truncated.

9.3. Unix Socket commands
-------------------------

The stats socket is not enabled by default. In order to enable it, it is
necessary to add one line in the global section of the haproxy configuration.
A second line is recommended to set a larger timeout, always appreciated when
issuing commands by hand :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats timeout 2m

It is also possible to add multiple instances of the stats socket by repeating
the line, and make them listen to a TCP port instead of a UNIX socket. This is
never done by default because this is dangerous, but can be handy in some
situations :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats socket ipv4@192.168.0.1:9999 level admin
        stats timeout 2m

To access the socket, an external utility such as "socat" is required. Socat is
a swiss-army knife to connect anything to anything. We use it to connect
terminals to the socket, or a couple of stdin/stdout pipes to it for scripts.
The two main syntaxes we'll use are the following :

    # socat /var/run/haproxy.sock stdio
    # socat /var/run/haproxy.sock readline

The first one is used with scripts. It is possible to send the output of a
script to haproxy, and pass haproxy's output to another script. That's useful
for retrieving counters or attack traces for example.

The second one is only useful for issuing commands by hand. It has the benefit
that the terminal is handled by the readline library which supports line
editing and history, which is very convenient when issuing repeated commands
(eg: watch a counter).

The socket supports two operation modes :
  - interactive
  - non-interactive

The non-interactive mode is the default when socat connects to the socket. In
this mode, a single line may be sent. It is processed as a whole, responses are
sent back, and the connection closes after the end of the response. This is the
mode that scripts and monitoring tools use. It is possible to send multiple
commands in this mode, they need to be delimited by a semi-colon (';'). For
example :

    # echo "show info;show stat;show table" | socat /var/run/haproxy.sock stdio

If a command needs to use a semi-colon or a backslash (eg: in a value), it
must be preceded by a backslash ('\').

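For instance, a literal semi-colon inside a value can be escaped as follows (a
hypothetical map entry; the double quotes make the shell pass the backslashes
through to haproxy unchanged):

```shell
# The escaped semi-colons are part of the value, not command delimiters
echo "add map #-1 mykey part1\;part2" | socat /var/run/haproxy.sock stdio
```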
The interactive mode displays a prompt ('>') and waits for commands to be
entered on the line, then processes them, and displays the prompt again to wait
for a new command. This mode is entered via the "prompt" command which must be
sent on the first line in non-interactive mode. The mode is a flip switch, if
"prompt" is sent in interactive mode, it is disabled and the connection closes
after processing the last command of the same line.

For this reason, when debugging by hand, it's quite common to start with the
"prompt" command :

    # socat /var/run/haproxy.sock readline
    prompt
    > show info
    ...
    >

Since multiple commands may be issued at once, haproxy uses the empty line as a
delimiter to mark an end of output for each command, and takes care of ensuring
that no command can emit an empty line on output. A script can thus easily
parse the output even when multiple commands were pipelined on a single line.

Some commands may take an optional payload. To add one to a command, the first
line needs to end with the "<<\n" pattern. The next lines will be treated as
the payload and can contain as many lines as needed. To validate a command with
a payload, it needs to end with an empty line.

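This pattern can be exercised from a script; a minimal sketch, assuming a
stats socket at /var/run/haproxy.sock and a map file /etc/haproxy/hosts.map
(both names are examples):

```shell
# The first line ends with "<<" to announce a payload; the empty line
# produced by the final "\n" terminates it.
echo -e "add map /etc/haproxy/hosts.map <<\nstatic.example.com be_static\nwww.example.com be_www\n" | \
    socat /var/run/haproxy.sock stdio
```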
Limitations do exist: the length of the whole buffer passed to the CLI must
not be greater than tune.bufsize and the pattern "<<" must not be glued to the
last word of the line.

When entering a payload while in interactive mode, the prompt will change from
"> " to "+ ".

It is important to understand that when multiple haproxy processes are started
on the same sockets, any process may pick up the request and will output its
own stats.

The list of commands currently supported on the stats socket is provided below.
If an unknown command is sent, haproxy displays the usage message which lists
all supported commands. Some commands support a more complex syntax, generally
it will explain what part of the command is invalid when this happens.

Some commands require a higher level of privilege to work. If you do not have
enough privilege, you will get an error "Permission denied". Please check
the "level" option of the "bind" keyword lines in the configuration manual
for more information.

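As an illustration of the "level" option, a configuration may expose an admin
socket alongside a restricted one for monitoring scripts (paths and modes here
are examples, not recommendations):

```
    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats socket /var/run/haproxy-ro.sock mode 666 level user
```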
abort ssl ca-file <cafile>
  Abort and destroy a temporary CA file update transaction.

  See also "set ssl ca-file" and "commit ssl ca-file".

abort ssl cert <filename>
  Abort and destroy a temporary SSL certificate update transaction.

  See also "set ssl cert" and "commit ssl cert".

abort ssl crl-file <crlfile>
  Abort and destroy a temporary CRL file update transaction.

  See also "set ssl crl-file" and "commit ssl crl-file".

add acl [@<ver>] <acl> <pattern>
  Add an entry into the acl <acl>. <acl> is the #<id> or the <file> returned by
  "show acl". This command does not verify if the entry already exists. Entries
  are added to the current version of the ACL, unless a specific version is
  specified with "@<ver>". This version number must have previously been
  allocated by "prepare acl", and it will be comprised between the versions
  reported in "curr_ver" and "next_ver" on the output of "show acl". Entries
  added with a specific version number will not match until a "commit acl"
  operation is performed on them. They may however be consulted using the
  "show acl @<ver>" command, and cleared using a "clear acl @<ver>" command.
  This command cannot be used if the reference <acl> is a file also used with
  a map. In this case, the "add map" command must be used instead.

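  The versioned workflow described above can be sketched as follows (the ACL
  file name, socket path and returned version number are assumptions):

```shell
# Allocate a new version, fill it while the old one keeps matching,
# then switch atomically with "commit acl".
echo "prepare acl /etc/haproxy/blocklist.acl" | socat /var/run/haproxy.sock stdio
# suppose the command above printed "New version created: 3"
echo "add acl @3 /etc/haproxy/blocklist.acl 10.0.0.1" | socat /var/run/haproxy.sock stdio
echo "commit acl @3 /etc/haproxy/blocklist.acl" | socat /var/run/haproxy.sock stdio
```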
add map [@<ver>] <map> <key> <value>
add map [@<ver>] <map> <payload>
  Add an entry into the map <map> to associate the value <value> to the key
  <key>. This command does not verify if the entry already exists. It is
  mainly used to fill a map after a "clear" or "prepare" operation. Entries
  are added to the current version of the map, unless a specific version is
  specified with "@<ver>". This version number must have previously been
  allocated by "prepare map", and it will be comprised between the versions
  reported in "curr_ver" and "next_ver" on the output of "show map". Entries
  added with a specific version number will not match until a "commit map"
  operation is performed on them. They may however be consulted using the
  "show map @<ver>" command, and cleared using a "clear map @<ver>" command.
  If the designated map is also used as an ACL, the ACL will only match the
  <key> part and will ignore the <value> part. Using the payload syntax it is
  possible to add multiple key/value pairs by entering them on separate lines.
  On each new line, the first word is the key and the rest of the line is
  considered to be the value which can even contain spaces.

  Example:

    # socat /tmp/sock1 -
    prompt

    > add map #-1 <<
    + key1 value1
    + key2 value2 with spaces
    + key3 value3 also with spaces
    + key4 value4

    >

add server <backend>/<server> [args]*
  Instantiate a new server attached to the backend <backend>.

  The <server> name must not be already used in the backend. A special
  restriction is put on the backend which must use a dynamic load-balancing
  algorithm. A subset of keywords from the server config file statement can be
  used to configure the server behavior. Also note that no settings will be
  reused from a hypothetical 'default-server' statement in the same backend.

  Currently a dynamic server is statically initialized with the "none"
  init-addr method. This means that no resolution will be undertaken if a FQDN
  is specified as an address, even if the server creation will be validated.

  To support the reload operations, it is expected that the server created via
  the CLI is also manually inserted in the relevant haproxy configuration file.
  A dynamic server not present in the configuration won't be restored after a
  reload operation.

  A dynamic server may use the "track" keyword to follow the check status of
  another server from the configuration. However, it is not possible to track
  another dynamic server. This is to ensure that the tracking chain is kept
  consistent even in the case of dynamic servers deletion.

  Use the "check" keyword to enable health-check support. Note that the
  health-check is disabled by default and must be enabled independently from
  the server using the "enable health" command. For agent checks, use the
  "agent-check" keyword and the "enable agent" command. Note that in this case
  the server may be activated via the agent depending on the status reported,
  without an explicit "enable server" command. This also means that extra care
  is required when removing a dynamic server with agent check. The agent should
  be first deactivated via "disable agent" to be able to put the server in the
  required maintenance mode before removal.

  It may be possible to reach the fd limit when using a large number of dynamic
  servers. Please refer to the "ulimit-n" global keyword documentation in this
  case.

  Here is the list of the currently supported keywords :

    - agent-addr
    - agent-check
    - agent-inter
    - agent-port
    - agent-send
    - allow-0rtt
    - alpn
    - addr
    - backup
    - ca-file
    - check
    - check-alpn
    - check-proto
    - check-send-proxy
    - check-sni
    - check-ssl
    - check-via-socks4
    - ciphers
    - ciphersuites
    - crl-file
    - crt
    - disabled
    - downinter
    - enabled
    - error-limit
    - fall
    - fastinter
    - force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
    - id
    - inter
    - maxconn
    - maxqueue
    - minconn
    - no-ssl-reuse
    - no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
    - no-tls-tickets
    - npn
    - observe
    - on-error
    - on-marked-down
    - on-marked-up
    - pool-low-conn
    - pool-max-conn
    - pool-purge-delay
    - port
    - proto
    - proxy-v2-options
    - rise
    - send-proxy
    - send-proxy-v2
    - send-proxy-v2-ssl
    - send-proxy-v2-ssl-cn
    - slowstart
    - sni
    - source
    - ssl
    - ssl-max-ver
    - ssl-min-ver
    - tfo
    - tls-tickets
    - track
    - usesrc
    - verify
    - verifyhost
    - weight
    - ws

  Their syntax is similar to the server line from the configuration file,
  please refer to their individual documentation for details.

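  A hedged end-to-end example (the backend name "be_app", the address and the
  socket path are made up; the backend is assumed to use a dynamic algorithm
  such as roundrobin):

```shell
# Dynamic servers start in maintenance mode: create the server, then
# enable it and, since "check" was given, its health check.
echo "add server be_app/srv3 192.0.2.3:8080 check maxconn 30" | \
    socat /var/run/haproxy.sock stdio
echo "enable server be_app/srv3" | socat /var/run/haproxy.sock stdio
echo "enable health be_app/srv3" | socat /var/run/haproxy.sock stdio
```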
add ssl ca-file <cafile> <payload>
  Add a new certificate to a ca-file. This command is useful when you have
  reached the buffer size limit on the CLI and want to add multiple
  certificates. Instead of doing a "set" with all the certificates you are
  able to add each certificate individually. A "set ssl ca-file" will reset
  the ca-file.

  Example:
    echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \
        socat /var/run/haproxy.stat -
    echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate1.crt)\n" | \
        socat /var/run/haproxy.stat -
    echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate2.crt)\n" | \
        socat /var/run/haproxy.stat -
    echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat -

add ssl crt-list <crtlist> <certificate>
add ssl crt-list <crtlist> <payload>
  Add a certificate to a crt-list. It can also be used for directories since
  directories are now loaded the same way as the crt-lists. This command
  allows you to pass a certificate name as parameter; to use SSL options or
  filters, a crt-list line must be sent as a payload instead. Only one
  crt-list line is supported in the payload. This command will load the
  certificate for every bind line using the crt-list. To push a new
  certificate to HAProxy the commands "new ssl cert" and "set ssl cert" must
  be used.

  Example:
    $ echo "new ssl cert foobar.pem" | socat /tmp/sock1 -
    $ echo -e "set ssl cert foobar.pem <<\n$(cat foobar.pem)\n" | socat
    /tmp/sock1 -
    $ echo "commit ssl cert foobar.pem" | socat /tmp/sock1 -
    $ echo "add ssl crt-list certlist1 foobar.pem" | socat /tmp/sock1 -

    $ echo -e 'add ssl crt-list certlist1 <<\nfoobar.pem [allow-0rtt] foo.bar.com
    !test1.com\n' | socat /tmp/sock1 -

clear counters
  Clear the max values of the statistics counters in each proxy (frontend &
  backend) and in each server. The accumulated counters are not affected. The
  internal activity counters reported by "show activity" are also reset. This
  can be used to get clean counters after an incident, without having to
  restart nor to clear traffic counters. This command is restricted and can
  only be issued on sockets configured for levels "operator" or "admin".

clear counters all
  Clear all statistics counters in each proxy (frontend & backend) and in each
  server. This has the same effect as restarting. This command is restricted
  and can only be issued on sockets configured for level "admin".

clear acl [@<ver>] <acl>
  Remove all entries from the acl <acl>. <acl> is the #<id> or the <file>
  returned by "show acl". Note that if the reference <acl> is a file and is
  shared with a map, this map will also be cleared. By default only the
  current version of the ACL is cleared (the one being matched against).
  However it is possible to specify another version using '@' followed by
  this version.

clear map [@<ver>] <map>
  Remove all entries from the map <map>. <map> is the #<id> or the <file>
  returned by "show map". Note that if the reference <map> is a file and is
  shared with an ACL, this ACL will also be cleared. By default only the
  current version of the map is cleared (the one being matched against).
  However it is possible to specify another version using '@' followed by
  this version.

clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ]
  Remove entries from the stick-table <table>.

  This is typically used to unblock some users complaining they have been
  abusively denied access to a service, but this can also be used to clear some
  stickiness entries matching a server that is going to be replaced (see "show
  table" below for details). Note that sometimes, removal of an entry will be
  refused because it is currently tracked by a session. Retrying a few seconds
  later after the session ends is usually enough.

  In the case where no option arguments are given all entries will be removed.

  When the "data." form is used entries matching a filter applied using the
  stored data (see "stick-table" in section 4.2) are removed. A stored data
  type must be specified in <type>, and this data type must be stored in the
  table otherwise an error is reported. The data is compared according to
  <operator> with the 64-bit integer <value>. Operators are the same as with
  the ACLs :

    - eq : match entries whose data is equal to this value
    - ne : match entries whose data is not equal to this value
    - le : match entries whose data is less than or equal to this value
    - ge : match entries whose data is greater than or equal to this value
    - lt : match entries whose data is less than this value
    - gt : match entries whose data is greater than this value

  When the key form is used the entry <key> is removed. The key must be of the
  same type as the table, which currently is limited to IPv4, IPv6, integer and
  string.

  Example :
    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1  \
          bytes_out_rate(60000)=187
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "clear table http_proxy key 127.0.0.1" | socat stdio /tmp/sock1

    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191
    $ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1
    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1

commit acl @<ver> <acl>
  Commit all changes made to version <ver> of ACL <acl>, and delete all past
  versions. <acl> is the #<id> or the <file> returned by "show acl". The
  version number must be between "curr_ver"+1 and "next_ver" as reported in
  "show acl". The contents to be committed to the ACL can be consulted with
  "show acl @<ver> <acl>" if desired. The specified version number has normally
  been created with the "prepare acl" command. The replacement is atomic. It
  consists in atomically updating the current version to the specified version,
  which will instantly cause all entries in other versions to become invisible,
  and all entries in the new version to become visible. It is also possible to
  use this command to perform an atomic removal of all visible entries of an
  ACL by calling "prepare acl" first then committing without adding any
  entries. This command cannot be used if the reference <acl> is a file also
  used as a map. In this case, the "commit map" command must be used instead.

commit map @<ver> <map>
  Commit all changes made to version <ver> of map <map>, and delete all past
  versions. <map> is the #<id> or the <file> returned by "show map". The
  version number must be between "curr_ver"+1 and "next_ver" as reported in
  "show map". The contents to be committed to the map can be consulted with
  "show map @<ver> <map>" if desired. The specified version number has normally
  been created with the "prepare map" command. The replacement is atomic. It
  consists in atomically updating the current version to the specified version,
  which will instantly cause all entries in other versions to become invisible,
  and all entries in the new version to become visible. It is also possible to
  use this command to perform an atomic removal of all visible entries of a
  map by calling "prepare map" first then committing without adding any
  entries.

commit ssl ca-file <cafile>
  Commit a temporary SSL CA file update transaction.

  In the case of an existing CA file (in a "Used" state in "show ssl ca-file"),
  the new CA file tree entry is inserted in the CA file tree and every instance
  that used the CA file entry is rebuilt, along with the SSL contexts it needs.
  All the contexts previously used by the rebuilt instances are removed.
  Upon success, the previous CA file entry is removed from the tree.
  Upon failure, nothing is removed or deleted, and all the original SSL
  contexts are kept and used.
  Once the temporary transaction is committed, it is destroyed.

  In the case of a new CA file (after a "new ssl ca-file" and in a "Unused"
  state in "show ssl ca-file"), the CA file will be inserted in the CA file
  tree but it won't be used anywhere in HAProxy. To use it and generate SSL
  contexts that use it, you will need to add it to a crt-list with "add ssl
  crt-list".

  See also "new ssl ca-file", "set ssl ca-file", "add ssl ca-file",
  "abort ssl ca-file" and "add ssl crt-list".

commit ssl cert <filename>
  Commit a temporary SSL certificate update transaction.

  In the case of an existing certificate (in a "Used" state in "show ssl
  cert"), generate every SSL context and SNI it needs, insert them, and
  remove the previous ones. Replace in memory the previous SSL certificates
  everywhere the <filename> was used in the configuration. Upon failure it
  doesn't remove or insert anything. Once the temporary transaction is
  committed, it is destroyed.

  In the case of a new certificate (after a "new ssl cert" and in a "Unused"
  state in "show ssl cert"), the certificate will be committed in a certificate
  storage, but it won't be used anywhere in haproxy. To use it and generate
  its SNIs you will need to add it to a crt-list or a directory with "add ssl
  crt-list".

  See also "new ssl cert", "set ssl cert", "abort ssl cert" and
  "add ssl crt-list".

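  The update of an already-used certificate can thus be sketched as follows
  (file names and the socket path are assumptions):

```shell
# Stage the new PEM contents in a transaction, then commit it atomically.
echo -e "set ssl cert /etc/haproxy/site.pem <<\n$(cat /tmp/new-site.pem)\n" | \
    socat /var/run/haproxy.sock stdio
echo "commit ssl cert /etc/haproxy/site.pem" | socat /var/run/haproxy.sock stdio
```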
commit ssl crl-file <crlfile>
  Commit a temporary SSL CRL file update transaction.

  In the case of an existing CRL file (in a "Used" state in "show ssl
  crl-file"), the new CRL file entry is inserted in the CA file tree (which
  holds both the CA files and the CRL files) and every instance that used the
  CRL file entry is rebuilt, along with the SSL contexts it needs.
  All the contexts previously used by the rebuilt instances are removed.
  Upon success, the previous CRL file entry is removed from the tree.
  Upon failure, nothing is removed or deleted, and all the original SSL
  contexts are kept and used.
  Once the temporary transaction is committed, it is destroyed.

  In the case of a new CRL file (after a "new ssl crl-file" and in a "Unused"
  state in "show ssl crl-file"), the CRL file will be inserted in the CRL file
  tree but it won't be used anywhere in HAProxy. To use it and generate SSL
  contexts that use it, you will need to add it to a crt-list with "add ssl
  crt-list".

  See also "new ssl crl-file", "set ssl crl-file", "abort ssl crl-file" and
  "add ssl crt-list".

debug dev <command> [args]*
  Call a developer-specific command. Only supported on a CLI connection running
  in expert mode (see "expert-mode on"). Such commands are extremely dangerous
  and not forgiving, any misuse may result in a crash of the process. They are
  intended for experts only, and must really not be used unless told to do so.
  Some of them are only available when haproxy is built with DEBUG_DEV defined
  because they may have security implications. All of these commands require
  admin privileges, and are purposely not documented to avoid encouraging their
  use by people who are not at ease with the source code.

del acl <acl> [<key>|#<ref>]
  Delete all the acl entries from the acl <acl> corresponding to the key <key>.
  <acl> is the #<id> or the <file> returned by "show acl". If the <ref> is
  used, this command deletes only the listed reference. The reference can be
  found by listing the content of the acl. Note that if the reference <acl> is
  a file and is shared with a map, the entry will also be deleted in the map.

del map <map> [<key>|#<ref>]
  Delete all the map entries from the map <map> corresponding to the key <key>.
  <map> is the #<id> or the <file> returned by "show map". If the <ref> is
  used, this command deletes only the listed reference. The reference can be
  found by listing the content of the map. Note that if the reference <map> is
  a file and is shared with an ACL, the entry will also be deleted in the ACL.

del ssl ca-file <cafile>
  Delete a CA file tree entry from HAProxy. The CA file must be unused and
  removed from any crt-list. "show ssl ca-file" displays the status of the CA
  files. The deletion doesn't work with a certificate referenced directly with
  the "ca-file" or "ca-verify-file" directives in the configuration.

del ssl cert <certfile>
  Delete a certificate store from HAProxy. The certificate must be unused and
  removed from any crt-list or directory. "show ssl cert" displays the status
  of the certificate. The deletion doesn't work with a certificate referenced
  directly with the "crt" directive in the configuration.

del ssl crl-file <crlfile>
  Delete a CRL file tree entry from HAProxy. The CRL file must be unused and
  removed from any crt-list. "show ssl crl-file" displays the status of the CRL
  files. The deletion doesn't work with a certificate referenced directly with
  the "crl-file" directive in the configuration.

del ssl crt-list <filename> <certfile[:line]>
  Delete an entry in a crt-list. This will delete every SNI used for this
  entry in the frontends. If a certificate is used several times in a
  crt-list, you will need to specify which line you want to delete. To
  display the line numbers, use "show ssl crt-list -n <crtlist>".

del server <backend>/<server>
  Remove a server attached to the backend <backend>. All servers are eligible,
  except servers which are referenced by other configuration elements. The
  server must be put in maintenance mode prior to its deletion. The operation
  is cancelled if the server still has active or idle connections or its
  connection queue is not empty.

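  A removal sequence satisfying these constraints might look like this (the
  backend/server names and socket path are made up):

```shell
# Put the server in maintenance mode, close its remaining sessions,
# then delete it once no connection or queued request remains.
echo "disable server be_app/srv3" | socat /var/run/haproxy.sock stdio
echo "shutdown sessions server be_app/srv3" | socat /var/run/haproxy.sock stdio
echo "del server be_app/srv3" | socat /var/run/haproxy.sock stdio
```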
disable agent <backend>/<server>
  Mark the auxiliary agent check as temporarily stopped.

  In the case where an agent check is being run as an auxiliary check, due
  to the agent-check parameter of a server directive, new checks are only
  initialized when the agent is in the enabled state. Thus, disable agent will
  prevent any new agent checks from being initiated until the agent is
  re-enabled using enable agent.

  When an agent is disabled the processing of an auxiliary agent check that
  was initiated while the agent was set as enabled is as follows: All
  results that would alter the weight, specifically "drain" or a weight
  returned by the agent, are ignored. The processing of agent check is
  otherwise unchanged.

  The motivation for this feature is to allow the weight changing effects
  of the agent checks to be paused to allow the weight of a server to be
  configured using set weight without being overridden by the agent.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

2017-03-14 19:08:46 +00:00
|
|
|
disable dynamic-cookie backend <backend>
  Disable the generation of dynamic cookies for the backend <backend>.

disable frontend <frontend>
  Mark the frontend as temporarily stopped. This corresponds to the mode
  which is used during a soft restart : the frontend releases the port but
  can be enabled again if needed. This should be used with care as some
  non-Linux OSes are unable to enable it back. This is intended to be used in
  environments where stopping a proxy is not even imaginable but a
  misconfigured proxy must be fixed. That way it's possible to release the
  port and bind it into another process to restore operations. The frontend
  will appear with status "STOP" on the stats page.

  The frontend may be specified either by its name or by its numeric ID,
  prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

disable health <backend>/<server>
  Mark the primary health check as temporarily stopped. This will disable
  sending of health checks, and the last health check result will be ignored.
  The server will be in unchecked state and considered UP unless an auxiliary
  agent check forces it down.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

disable server <backend>/<server>
  Mark the server DOWN for maintenance. In this mode, no more checks will be
  performed on the server until it leaves maintenance.
  If the server is tracked by other servers, those servers will be set to
  DOWN during the maintenance.

  In the statistics page, a server DOWN for maintenance will appear with a
  "MAINT" status, its tracking servers with the "MAINT(via)" one.

  Both the backend and the server may be specified either by their name or by
  their numeric ID, prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

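For example, taking a server out for maintenance and putting it back, with
hypothetical backend/server names:

```shell
# Put app/srv1 into maintenance (shows as "MAINT" on the stats page)
echo "disable server app/srv1" | socat /var/run/haproxy.stat -

# ... perform the maintenance, then re-enable checks and traffic
echo "enable server app/srv1" | socat /var/run/haproxy.stat -
```
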
enable agent <backend>/<server>
  Resume an auxiliary agent check that was temporarily stopped.

  See "disable agent" for details of the effect of temporarily starting
  and stopping an auxiliary agent.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable dynamic-cookie backend <backend>
  Enable the generation of dynamic cookies for the backend <backend>.
  A secret key must also be provided.

enable frontend <frontend>
  Resume a frontend which was temporarily stopped. It is possible that some
  of the listening ports won't be able to bind anymore (e.g. if another
  process took them since the 'disable frontend' operation). If this happens,
  an error is displayed. Some operating systems might not be able to resume a
  frontend which was disabled.

  The frontend may be specified either by its name or by its numeric ID,
  prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable health <backend>/<server>
  Resume a primary health check that was temporarily stopped. This will
  enable sending of health checks again. Please see "disable health" for
  details.

  This command is restricted and can only be issued on sockets configured for
  level "admin".

enable server <backend>/<server>
  If the server was previously marked as DOWN for maintenance, this marks the
  server UP and checks are re-enabled.

  Both the backend and the server may be specified either by their name or by
  their numeric ID, prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured for
  level "admin".

experimental-mode [on|off]
  Without options, this indicates whether the experimental mode is enabled or
  disabled on the current connection. When passed "on", it turns the
  experimental mode on for the current CLI connection only. With "off" it
  turns it off.

  The experimental mode is used to access extra features still in
  development. These features are currently not stable and should be used
  with care. They may be subject to breaking changes across versions.

  When used from the master CLI, this command shouldn't be prefixed, as it
  will set the mode for any worker when connecting to its CLI.

  Example:
    echo "@1; experimental-mode on; <experimental_cmd>..." | socat /var/run/haproxy.master -
    echo "experimental-mode on; @1 <experimental_cmd>..." | socat /var/run/haproxy.master -

expert-mode [on|off]
  This command is similar to experimental-mode but is used to toggle the
  expert mode.

  The expert mode enables displaying of expert commands that can be extremely
  dangerous for the process and which may occasionally help developers
  collect important information about complex bugs. Any misuse of these
  features will likely lead to a process crash. Do not use this option
  without being invited to do so. Note that this command is purposely not
  listed in the help message. This command is only accessible at the "admin"
  level. Changing to another level automatically resets the expert mode.

  When used from the master CLI, this command shouldn't be prefixed, as it
  will set the mode for any worker when connecting to its CLI.

  Example:
    echo "@1; expert-mode on; debug dev exit 1" | socat /var/run/haproxy.master -
    echo "expert-mode on; @1 debug dev exit 1" | socat /var/run/haproxy.master -

get map <map> <value>
get acl <acl> <value>
  Lookup the value <value> in the map <map> or in the ACL <acl>. <map> or
  <acl> are the #<id> or the <file> returned by "show map" or "show acl".
  This command returns all the matching patterns associated with this map.
  This is useful for debugging maps and ACLs. The output format is composed
  of one line per matching type. Each line is composed of a space-delimited
  series of words.

  The first two words are:

    <match method>:  The match method applied. It can be "found", "bool",
                     "int", "ip", "bin", "len", "str", "beg", "sub", "dir",
                     "dom", "end" or "reg".

    <match result>:  The result. Can be "match" or "no-match".

  The following words are returned only if the pattern matches an entry.

    <index type>:    "tree" or "list". The internal lookup algorithm.

    <case>:          "case-insensitive" or "case-sensitive". The
                     interpretation of the case.

    <entry matched>: match="<entry>". Return the matched pattern. It is
                     useful with regular expressions.

  The last two words are used to show the returned value and its type. With
  the "acl" case, the pattern doesn't exist.

    return=nothing:        No return because there is no "map".
    return="<value>":      The value returned in the string format.
    return=cannot-display: The value cannot be converted as string.

    type="<type>":         The type of the returned sample.

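For instance, checking how a request path would match against a hypothetical
map file (the file name is an example only):

```shell
# Look up "/static/img.png" in the map; the first word of the reply shows
# the match method ("beg", "str", ...) and the second "match"/"no-match"
echo "get map /etc/haproxy/paths.map /static/img.png" | \
    socat /var/run/haproxy.stat -
```
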
get var <name>
  Show the existence, type and contents of the process-wide variable 'name'.
  Only process-wide variables are readable, so the name must begin with
  'proc.' otherwise no variable will be found. This command requires levels
  "operator" or "admin".

get weight <backend>/<server>
  Report the current weight and the initial weight of server <server> in
  backend <backend> or an error if either doesn't exist. The initial weight
  is the one that appears in the configuration file. Both are normally equal
  unless the current weight has been changed. Both the backend and the server
  may be specified either by their name or by their numeric ID, prefixed with
  a sharp ('#').

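For example, with hypothetical names, either addressing form can be used:

```shell
# Query by name, or by numeric IDs prefixed with '#'
echo "get weight app/srv1" | socat /var/run/haproxy.stat -
echo "get weight #3/#1" | socat /var/run/haproxy.stat -
```
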
help [<command>]
  Print the list of known keywords and their basic usage, or commands
  matching the requested one. The same help screen is also displayed for
  unknown commands.

httpclient <method> <URI>
  Launch an HTTP client request and print the response on the CLI. Only
  supported on a CLI connection running in expert mode (see "expert-mode
  on"). It's only meant for debugging. The httpclient is able to resolve a
  server name in the URL using the "default" resolvers section, which is
  populated with the DNS servers of your /etc/resolv.conf by default. However
  it won't be able to resolve a host from /etc/hosts unless you use a local
  DNS daemon which can resolve those.

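A minimal session sketch: since expert mode is per-connection, both commands
must be sent on the same CLI connection, hence the semicolon-separated form
(the URL is an example only):

```shell
# Enable expert mode and issue the request in one CLI session
echo "expert-mode on; httpclient GET https://www.example.com/" | \
    socat /var/run/haproxy.stat -
```
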
new ssl ca-file <cafile>
  Create a new empty CA file tree entry to be filled with a set of CA
  certificates and added to a crt-list. This command should be used in
  combination with "set ssl ca-file", "add ssl ca-file" and
  "add ssl crt-list".

new ssl cert <filename>
  Create a new empty SSL certificate store to be filled with a certificate
  and added to a directory or a crt-list. This command should be used in
  combination with "set ssl cert" and "add ssl crt-list".

new ssl crl-file <crlfile>
  Create a new empty CRL file tree entry to be filled with a set of CRLs
  and added to a crt-list. This command should be used in combination with
  "set ssl crl-file" and "add ssl crt-list".

prepare acl <acl>
  Allocate a new version number in ACL <acl> for atomic replacement. <acl> is
  the #<id> or the <file> returned by "show acl". The new version number is
  shown in response after "New version created:". This number will then be
  usable to prepare additions of new entries into the ACL which will then
  atomically replace the current ones once committed. It is reported as
  "next_ver" in "show acl". There is no impact of allocating new versions, as
  unused versions will automatically be removed once a more recent version is
  committed. Version numbers are unsigned 32-bit values which wrap at the
  end, so care must be taken when comparing them in an external program. This
  command cannot be used if the reference <acl> is a file also used as a map.
  In this case, the "prepare map" command must be used instead.

prepare map <map>
  Allocate a new version number in map <map> for atomic replacement. <map> is
  the #<id> or the <file> returned by "show map". The new version number is
  shown in response after "New version created:". This number will then be
  usable to prepare additions of new entries into the map which will then
  atomically replace the current ones once committed. It is reported as
  "next_ver" in "show map". There is no impact of allocating new versions, as
  unused versions will automatically be removed once a more recent version is
  committed. Version numbers are unsigned 32-bit values which wrap at the
  end, so care must be taken when comparing them in an external program.

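The typical atomic replacement workflow combines "prepare map", "add map"
with an explicit version, and "commit map". A sketch with a hypothetical map
file, assuming the reply to "prepare map" announced version 2:

```shell
# Allocate a new version; the reply contains "New version created: <ver>"
echo "prepare map /etc/haproxy/hosts.map" | socat /var/run/haproxy.stat -

# Load the replacement contents under that version using the payload syntax
echo "add map @2 /etc/haproxy/hosts.map <<
example.com be_app
static.example.com be_static
" | socat /var/run/haproxy.stat -

# Atomically switch lookups to version 2
echo "commit map @2 /etc/haproxy/hosts.map" | socat /var/run/haproxy.stat -
```
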
prompt
  Toggle the prompt at the beginning of the line and enter or leave
  interactive mode. In interactive mode, the connection is not closed after a
  command completes. Instead, the prompt will appear again, indicating to the
  user that the interpreter is waiting for a new command. The prompt consists
  of a right angle bracket followed by a space "> ". This mode is
  particularly convenient when one wants to periodically check information
  such as stats or errors. It is also a good idea to enter interactive mode
  before issuing a "help" command.

quit
  Close the connection when in interactive mode.

set anon [on|off] [<key>]
  This command enables or disables the "anonymized mode" for the current CLI
  session, which replaces certain fields considered sensitive or confidential
  in command outputs with hashes. The hashes preserve sufficient consistency
  between elements to help developers identify relations between elements
  when trying to spot bugs, but have a low enough bit count (24) to make them
  non-reversible due to the high number of possible matches. When turned on,
  if no key is specified, the global key will be used (either specified in
  the configuration file by "anonkey" or set via the CLI command "set anon
  global-key"). If no such key was set, a random one will be generated.
  Otherwise it's possible to specify the 32-bit key to be used for the
  current session, for example, to reuse the key that was used in a previous
  dump to help compare outputs. Developers will never need this key and it's
  recommended never to share it as it could allow one to confirm or refute
  guesses about what certain hashes could be hiding.

set dynamic-cookie-key backend <backend> <value>
  Modify the secret key used to generate the dynamic persistent cookies.
  This will break the existing sessions.

set anon global-key <key>
  This sets the global anonymizing key to <key>, which must be a 32-bit
  integer between 0 and 4294967295 (0 disables the global key). This command
  requires admin privilege.

set map <map> [<key>|#<ref>] <value>
  Modify the value corresponding to each key <key> in a map <map>. <map> is
  the #<id> or <file> returned by "show map". If the <ref> is used in place
  of <key>, only the entry pointed to by <ref> is changed. The new value is
  <value>.

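For example, updating one entry and verifying the change, with a
hypothetical map file and backend name:

```shell
# Update the value associated with a key, then verify it
echo "set map /etc/haproxy/hosts.map example.com be_app2" | \
    socat /var/run/haproxy.stat -
echo "get map /etc/haproxy/hosts.map example.com" | \
    socat /var/run/haproxy.stat -
```
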
set maxconn frontend <frontend> <value>
  Dynamically change the specified frontend's maxconn setting. Any positive
  value is allowed including zero, but setting values larger than the global
  maxconn does not make much sense. If the limit is increased and connections
  were pending, they will immediately be accepted. If it is lowered to a
  value below the current number of connections, acceptance of new
  connections will be delayed until the threshold is reached. The frontend
  might be specified by either its name or its numeric ID prefixed with a
  sharp ('#').

set maxconn server <backend/server> <value>
  Dynamically change the specified server's maxconn setting. Any positive
  value is allowed including zero, but setting values larger than the global
  maxconn does not make much sense.

set maxconn global <maxconn>
  Dynamically change the global maxconn setting within the range defined by
  the initial global maxconn setting. If it is increased and connections were
  pending, they will immediately be accepted. If it is lowered to a value
  below the current number of connections, acceptance of new connections will
  be delayed until the threshold is reached. A value of zero restores the
  initial setting.

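For instance, temporarily capping a hypothetical frontend "fe_main", then
restoring the global limit to its configured value:

```shell
# Lower one frontend's limit to shed load during an incident
echo "set maxconn frontend fe_main 1000" | socat /var/run/haproxy.stat -

# A value of zero restores the initial global maxconn
echo "set maxconn global 0" | socat /var/run/haproxy.stat -
```
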
set profiling { tasks | memory } { auto | on | off }
  Enables or disables CPU or memory profiling for the indicated subsystem.
  This is equivalent to setting or clearing the "profiling" settings in the
  "global" section of the configuration file. Please also see "show
  profiling". Note that manually setting the tasks profiling to "on"
  automatically resets the scheduler statistics, which makes it possible to
  check activity over a given interval. The memory profiling is limited to
  certain operating systems (known to work on the linux-glibc target), and
  requires USE_MEMORY_PROFILING to be set at compile time.

set rate-limit connections global <value>
  Change the process-wide connection rate limit, which is set by the global
  'maxconnrate' setting. A value of zero disables the limitation. This limit
  applies to all frontends and the change has an immediate effect. The value
  is passed in number of connections per second.

set rate-limit http-compression global <value>
  Change the maximum input compression rate, which is set by the global
  'maxcomprate' setting. A value of zero disables the limitation. The value
  is passed in number of kilobytes per second. The value is reported in the
  "show info" output on the "CompressBpsRateLim" line, in bytes.

set rate-limit sessions global <value>
  Change the process-wide session rate limit, which is set by the global
  'maxsessrate' setting. A value of zero disables the limitation. This limit
  applies to all frontends and the change has an immediate effect. The value
  is passed in number of sessions per second.

set rate-limit ssl-sessions global <value>
  Change the process-wide SSL session rate limit, which is set by the global
  'maxsslrate' setting. A value of zero disables the limitation. This limit
  applies to all frontends and the change has an immediate effect. The value
  is passed in number of sessions per second sent to the SSL stack. It
  applies before the handshake in order to protect the stack against
  handshake abuses.

set server <backend>/<server> addr <ip4 or ip6 address> [port <port>]
  Replace the current IP address of a server by the one provided.
  Optionally, the port can be changed using the 'port' parameter.
  Note that changing the port also supports switching from/to port mapping
  (notation with +X or -Y), but only if a port is configured for the health
  check.

set server <backend>/<server> agent [ up | down ]
  Force a server's agent to a new state. This can be useful to immediately
  switch a server's state regardless of some slow agent checks for example.
  Note that the change is propagated to tracking servers if any.

set server <backend>/<server> agent-addr <addr> [port <port>]
  Change the address used for the server's agent checks. This allows
  migrating agent checks to another address at runtime. Both an IP address
  and a hostname may be specified; a hostname will be resolved.
  Optionally, the agent port can be changed as well.

set server <backend>/<server> agent-port <port>
  Change the port used for agent checks.

set server <backend>/<server> agent-send <value>
  Change the agent string sent to the agent check target. This allows
  updating the string while changing the server address, so that the two
  remain consistent.

set server <backend>/<server> health [ up | stopping | down ]
  Force a server's health to a new state. This can be useful to immediately
  switch a server's state regardless of some slow health checks for example.
  Note that the change is propagated to tracking servers if any.

set server <backend>/<server> check-addr <ip4 | ip6> [port <port>]
  Change the IP address used for server health checks.
  Optionally, change the port used for server health checks.

set server <backend>/<server> check-port <port>
  Change the port used for health checking to <port>.

set server <backend>/<server> state [ ready | drain | maint ]
  Force a server's administrative state to a new state. This can be useful to
  disable load balancing and/or any traffic to a server. Setting the state to
  "ready" puts the server in normal mode, and the command is the equivalent
  of the "enable server" command. Setting the state to "maint" disables any
  traffic to the server as well as any health checks. This is the equivalent
  of the "disable server" command. Setting the mode to "drain" only removes
  the server from load balancing but still allows it to be checked and to
  accept new persistent connections. Changes are propagated to tracking
  servers if any.

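A common drain pattern, with hypothetical names: remove the server from load
balancing while letting existing and persistent connections complete, then
return it to normal operation.

```shell
# Drain: no new load-balanced traffic, checks and persistence still work
echo "set server app/srv1 state drain" | socat /var/run/haproxy.stat -

# Restore normal operation
echo "set server app/srv1 state ready" | socat /var/run/haproxy.stat -
```
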
set server <backend>/<server> weight <weight>[%]
  Change a server's weight to the value passed in argument. This is the exact
  equivalent of the "set weight" command below.

set server <backend>/<server> fqdn <FQDN>
  Change a server's FQDN to the value passed in argument. This requires the
  internal run-time DNS resolver to be configured and enabled for this
  server.

set server <backend>/<server> ssl [ on | off ] (deprecated)
  This option configures SSL ciphering on outgoing connections to the server.
  When switched off, all traffic becomes plain text; the health check path is
  not changed.

  This command is deprecated: instead, create a new server dynamically with
  or without SSL, using the "add server" command.

set severity-output [ none | number | string ]
  Change the severity output format of the stats socket connected to for the
  duration of the current session.

set ssl ca-file <cafile> <payload>
  This command is part of a transaction system, the "commit ssl ca-file" and
  "abort ssl ca-file" commands could be required.
  If there is no on-going transaction, it will create a CA file tree entry
  into which the certificates contained in the payload will be stored. The CA
  file entry will not be stored in the CA file tree and will only be kept in
  a temporary transaction. If a transaction with the same filename already
  exists, the previous CA file entry will be deleted and replaced by the new
  one. Once the modifications are done, you have to commit the transaction
  through a "commit ssl ca-file" call. If you want to add multiple
  certificates separately, you can use the "add ssl ca-file" command.

  Example:
    echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \
      socat /var/run/haproxy.stat -
    echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat -

set ssl cert <filename> <payload>
  This command is part of a transaction system, the "commit ssl cert" and
  "abort ssl cert" commands could be required.
  This whole transaction system works on any certificate displayed by the
  "show ssl cert" command, so on any frontend or backend certificate.
  If there is no on-going transaction, it will duplicate the certificate
  <filename> in memory to a temporary transaction, then update this
  transaction with the PEM file in the payload. If a transaction exists with
  the same filename, it will update this transaction. It's also possible to
  update the files linked to a certificate (.issuer, .sctl, .ocsp etc.)
  Once the modifications are done, you have to "commit ssl cert" the
  transaction.

  Injection of files over the CLI must be done with caution since an empty
  line is used to notify the end of the payload. It is recommended to inject
  a PEM file which has been sanitized. A simple method would be to remove
  every empty line and only leave what is in the PEM sections. It could be
  achieved with a sed command.

  Example:

    # With some simple sanitizing
    echo -e "set ssl cert localhost.pem <<\n$(sed -n '/^$/d;/-BEGIN/,/-END/p' 127.0.0.1.pem)\n" | \
      socat /var/run/haproxy.stat -

    # Complete example with commit
    echo -e "set ssl cert localhost.pem <<\n$(cat 127.0.0.1.pem)\n" | \
      socat /var/run/haproxy.stat -
    echo -e \
      "set ssl cert localhost.pem.issuer <<\n$(cat 127.0.0.1.pem.issuer)\n" | \
      socat /var/run/haproxy.stat -
    echo -e \
      "set ssl cert localhost.pem.ocsp <<\n$(base64 -w 1000 127.0.0.1.pem.ocsp)\n" | \
      socat /var/run/haproxy.stat -
    echo "commit ssl cert localhost.pem" | socat /var/run/haproxy.stat -

set ssl crl-file <crlfile> <payload>
  This command is part of a transaction system, the "commit ssl crl-file" and
  "abort ssl crl-file" commands could be required.
  If there is no on-going transaction, it will create a CRL file tree entry
  into which the Revocation Lists contained in the payload will be stored.
  The CRL file entry will not be stored in the CRL file tree and will only be
  kept in a temporary transaction. If a transaction with the same filename
  already exists, the previous CRL file entry will be deleted and replaced by
  the new one. Once the modifications are done, you have to commit the
  transaction through a "commit ssl crl-file" call.

  Example:
    echo -e "set ssl crl-file crlfile.pem <<\n$(cat rootCRL.pem)\n" | \
      socat /var/run/haproxy.stat -
    echo "commit ssl crl-file crlfile.pem" | socat /var/run/haproxy.stat -

set ssl ocsp-response <response | payload>
  This command is used to update an OCSP Response for a certificate (see
  "crt" on "bind" lines). The same controls are performed as during the
  initial loading of the response. The <response> must be passed as a base64
  encoded string of the DER encoded response from the OCSP server. This
  command is not supported with BoringSSL.

  Example:
    openssl ocsp -issuer issuer.pem -cert server.pem \
      -host ocsp.issuer.com:80 -respout resp.der
    echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
      socat stdio /var/run/haproxy.stat

  Using the payload syntax:
    echo -e "set ssl ocsp-response <<\n$(base64 resp.der)\n" | \
      socat stdio /var/run/haproxy.stat

set ssl tls-key <id> <tlskey>
  Set the next TLS key for the <id> listener to <tlskey>. This key becomes
  the ultimate key, while the penultimate one is used for encryption (others
  just decrypt). The oldest TLS key present is overwritten. <id> is either a
  numeric #<id> or <file> returned by "show tls-keys". <tlskey> is a base64
  encoded 48 or 80 byte TLS ticket key (ex. openssl rand 80 | openssl base64
  -A).

set table <table> key <key> [data.<data_type> <value>]*
  Create or update a stick-table entry in the table. If the key is not
  present, an entry is inserted. See stick-table in section 4.2 to find all
  possible values for <data_type>. The most likely use consists in
  dynamically entering entries for source IP addresses, with a flag in gpc0
  to dynamically block an IP address or affect its quality of service. It is
  possible to pass multiple data_types in a single call.

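For instance, assuming a hypothetical stick-table declared in a backend named
"st_src_ip" and configuration rules that key on gpc0:

```shell
# Insert or update an entry for a source IP, setting gpc0 to 1 so that
# rules testing gpc0 can block or degrade this client
echo "set table st_src_ip key 192.0.2.10 data.gpc0 1" | \
    socat /var/run/haproxy.stat -
```
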
set timeout cli <delay>
  Change the CLI interface timeout for the current connection. This can be
  useful during long debugging sessions where the user needs to constantly
  inspect some indicators without being disconnected. The delay is passed in
  seconds.

set var <name> <expression>
|
2021-09-03 07:47:37 +00:00
|
|
|
set var <name> expr <expression>
|
|
|
|
set var <name> fmt <format>
|
2021-04-30 12:45:53 +00:00
|
|
|
Allows to set or overwrite the process-wide variable 'name' with the result
|
2021-09-03 07:47:37 +00:00
|
|
|
of expression <expression> or format string <format>. Only process-wide
|
|
|
|
variables may be used, so the name must begin with 'proc.' otherwise no
|
|
|
|
variable will be set. The <expression> and <format> may only involve
|
|
|
|
"internal" sample fetch keywords and converters even though the most likely
|
|
|
|
useful ones will be str('something'), int(), simple strings or references to
|
|
|
|
other variables. Note that the command line parser doesn't know about quotes,
|
|
|
|
so any space in the expression must be preceded by a backslash. This command
|
|
|
|
requires levels "operator" or "admin". This command is only supported on a
|
|
|
|
CLI connection running in experimental mode (see "experimental-mode on").
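
  Since the CLI parser knows nothing about quotes, a space inside the
  expression has to be written as "\ ". A minimal sketch building such a
  command string ("proc.motd" is a hypothetical variable name used only for
  illustration):

```shell
# Build a "set var" command whose expression contains a space; the
# space is escaped with a backslash because the CLI parser has no
# notion of quoting. "proc.motd" is a hypothetical variable name.
cmd='set var proc.motd str(hello\ world)'
printf '%s\n' "$cmd"
# It would then be sent after enabling experimental mode, e.g.:
#   echo "experimental-mode on; $cmd" | socat stdio /tmp/sock1
```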

set weight <backend>/<server> <weight>[%]
  Change a server's weight to the value passed in argument. If the value ends
  with the '%' sign, then the new weight will be relative to the initially
  configured weight. Absolute weights are permitted between 0 and 256.
  Relative weights must be positive, and the resulting absolute weight is
  capped at 256. Servers which are part of a farm running a static
  load-balancing algorithm have stricter limitations because the weight
  cannot change once set. Thus for these servers, the only accepted values
  are 0 and 100% (or 0 and the initial weight). Changes take effect
  immediately, though certain LB algorithms require a certain amount of
  requests to consider changes. A typical usage of this command is to
  disable a server during an update by setting its weight to zero, then to
  enable it again after the update by setting it back to 100%. This command
  is restricted and can only be issued on sockets configured for level
  "admin". Both the backend and the server may be specified either by their
  name or by their numeric ID, prefixed with a sharp ('#').
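
  As an illustration of the relative form, here is a sketch of the arithmetic
  (assuming a server whose configured weight is 10 receiving a hypothetical
  "set weight bk/srv 50%"):

```shell
# Compute the absolute weight resulting from a relative "50%" request
# against a hypothetical initial configured weight of 10; the result
# is capped at 256 as described above.
initial=10
pct=50
new=$(( initial * pct / 100 ))
if [ "$new" -gt 256 ]; then new=256; fi
echo "$new"
```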

show acl [[@<ver>] <acl>]
  Dump info about acl converters. Without argument, the list of all available
  acls is returned. If an <acl> is specified, its contents are dumped. <acl>
  is the #<id> or <file>. By default the current version of the ACL is shown
  (the version currently being matched against and reported as 'curr_ver' in
  the ACL list). It is possible to instead dump other versions by prepending
  '@<ver>' before the ACL's identifier. The version works as a filter and
  non-existing versions will simply report no result. The dump format is the
  same as for the maps even for the sample values. The data returned are not a
  list of available ACLs, but are the list of all patterns composing any ACL.
  Many of these patterns can be shared with maps. The 'entry_cnt' value
  represents the count of all the ACL entries, not just the active ones, which
  means that it also includes entries currently being added.

show anon
  Display the current state of the anonymized mode (enabled or disabled) and
  the current session's key.

show backend
  Dump the list of backends available in the running process.

show cli level
  Display the CLI level of the current CLI session. The result can be
  'admin', 'operator' or 'user'. See also the 'operator' and 'user' commands.

  Example :

    $ socat /tmp/sock1 readline
    prompt
    > operator
    > show cli level
    operator
    > user
    > show cli level
    user
    > operator
    Permission denied

operator
  Decrease the CLI level of the current CLI session to operator. It can't be
  increased. It also drops expert and experimental mode. See also "show cli
  level".

user
  Decrease the CLI level of the current CLI session to user. It can't be
  increased. It also drops expert and experimental mode. See also "show cli
  level".

show activity [-1 | 0 | thread_num]
  Reports some counters about internal events that will help developers and
  more generally people who know haproxy well enough to narrow down the causes
  of reports of abnormal behaviours. A typical example would be a properly
  running process never sleeping and eating 100% of the CPU. The output fields
  will be made of one line per metric, and per-thread counters on the same
  line. These counters are 32-bit and will wrap during the process's life,
  which is not a problem since calls to this command will typically be
  performed twice. The fields are purposely not documented so that their exact
  meaning is verified in the code where the counters are fed. These values are
  also reset by the "clear counters" command. On multi-threaded deployments,
  the first column will indicate the total (or average depending on the nature
  of the metric) for all threads, and the list of all threads' values will be
  represented between square brackets in the thread order. Optionally the
  thread number to be dumped may be specified in argument. The special value
  "0" will report the aggregated value (first column), and "-1", which is the
  default, will display all the columns. Note that just like in
  single-threaded mode, there will be no brackets when a single column is
  requested.
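
  For counter-type metrics, the aggregated first column is simply the sum of
  the bracketed per-thread values. A sketch over an illustrative line (the
  metric name and numbers are made up, since the fields are deliberately
  undocumented):

```shell
# Sum the per-thread values between the square brackets of one metric
# line; for counters this matches the aggregated first column (4523).
line='loops: 4523 [ 2210 2313 ]'
sum=$(echo "$line" | awk -F'[][]' \
  '{ n = split($2, t, " "); s = 0
     for (i = 1; i <= n; i++) s += t[i]
     print s }')
echo "$sum"
```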

show cli sockets
  List CLI sockets. The output format is composed of 3 fields separated by
  spaces. The first field is the socket address; it can be a unix socket, an
  ipv4 address:port couple or an ipv6 one. Sockets of other types won't be
  dumped. The second field describes the level of the socket: 'admin', 'user'
  or 'operator'. The last field lists the processes on which the socket is
  bound, separated by commas; it can be numbers or 'all'.

  Example :

    $ echo 'show cli sockets' | socat stdio /tmp/sock1
    # socket lvl processes
    /tmp/sock1 admin all
    127.0.0.1:9999 user 2,3,4
    127.0.0.2:9969 user 2
    [::1]:9999 operator 2

show cache
  List the configured caches and the objects stored in each cache tree.

    $ echo 'show cache' | socat stdio /tmp/sock1
    0x7f6ac6c5b03a: foobar (shctx:0x7f6ac6c5b000, available blocks:3918)
    1               2      3                      4

    1. pointer to the cache structure
    2. cache name
    3. pointer to the mmap area (shctx)
    4. number of blocks available for reuse in the shctx

    0x7f6ac6c5b4cc hash:286881868 vary:0x0011223344556677 size:39114 (39 blocks), refcount:9, expire:237
    1              2              3                       4          5            6            7

    1. pointer to the cache entry
    2. first 32 bits of the hash
    3. secondary hash of the entry in case of vary
    4. size of the object in bytes
    5. number of blocks used for the object
    6. number of transactions using the entry
    7. expiration time, can be negative if already expired

show env [<name>]
  Dump one or all environment variables known by the process. Without any
  argument, all variables are dumped. With an argument, only the specified
  variable is dumped if it exists. Otherwise "Variable not found" is emitted.
  Variables are dumped in the same format as they are stored or returned by
  the "env" utility, that is, "<name>=<value>". This can be handy when
  debugging certain configuration files making heavy use of environment
  variables to ensure that they contain the expected values. This command is
  restricted and can only be issued on sockets configured for levels
  "operator" or "admin".

show errors [<iid>|<proxy>] [request|response]
  Dump last known request and response errors collected by frontends and
  backends. If <iid> is specified, the dump is limited to errors concerning
  the frontend or backend whose ID is <iid>. Proxy ID "-1" will cause all
  instances to be dumped. If a proxy name is specified instead, its ID will
  be used as the filter. If "request" or "response" is added after the proxy
  name or ID, only request or response errors will be dumped. This command is
  restricted and can only be issued on sockets configured for levels
  "operator" or "admin".

  The errors which may be collected are the last request and response errors
  caused by protocol violations, often due to invalid characters in header
  names. The report precisely indicates what exact character violated the
  protocol. Other important information such as the exact date the error was
  detected, frontend and backend names, the server name (when known), the
  internal session ID and the source address which has initiated the session
  are reported too.

  All characters are returned, and non-printable characters are encoded. The
  most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one
  letter following a backslash. The backslash itself is encoded as '\\' to
  avoid confusion. Other non-printable characters are encoded '\xNN' where
  NN is the two-digit hexadecimal representation of the character's ASCII
  code.

  Lines are prefixed with the position of their first character, starting at 0
  for the beginning of the buffer. At most one input line is printed per line,
  and large lines will be broken into multiple consecutive output lines so
  that the output never goes beyond 79 characters wide. It is easy to detect
  if a line was broken, because it will not end with '\n' and the next line's
  offset will be followed by a '+' sign, indicating it is a continuation of
  the previous line.

  Example :
    $ echo "show errors -1 response" | socat stdio /tmp/sock1
    >>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response
      src 127.0.0.1, session #54, frontend fe-eth0 (#1), server s2 (#1)
      response length 213 bytes, error at position 23:

      00000  HTTP/1.0 200 OK\r\n
      00017  header/bizarre:blah\r\n
      00038  Location: blah\r\n
      00054  Long-line: this is a very long line which should b
      00104+ e broken into multiple lines on the output buffer,
      00154+ otherwise it would be too large to print in a ter
      00204+ minal\r\n
      00211  \r\n

  In the example above, we see that the backend "http-in" which has internal
  ID 2 has blocked an invalid response from its server s2 which has internal
  ID 1. The request was on session 54 initiated by source 127.0.0.1 and
  received by frontend fe-eth0 whose ID is 1. The total response length was
  213 bytes when the error was detected, and the error was at byte 23. This
  is the slash ('/') in header name "header/bizarre", which is not a valid
  HTTP character for a header name.
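
  Since the prefix format is fixed (five digits of offset, then either two
  spaces or "+ "), broken lines can be rejoined mechanically. A sketch over a
  small hypothetical dump (not the one above):

```shell
# Rejoin dump lines split at 79 columns: a '+' in place of the first
# space after the offset marks a continuation of the previous line.
# The three input lines form a small hypothetical dump.
joined=$(awk '
  substr($0, 6, 1) == "+" { buf = buf substr($0, 8); next }
  { if (NR > 1) print buf; buf = substr($0, 8) }
  END { print buf }' <<'EOF'
00000  Short: line\r\n
00013  Long-line: aaaa bbbb
00033+ cccc dddd\r\n
EOF
)
echo "$joined"
```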

show events [<sink>] [-w] [-n]
  With no option, this lists all known event sinks and their types. With an
  option, it will dump all available events in the designated sink if it is of
  type buffer. If option "-w" is passed after the sink name, then once the end
  of the buffer is reached, the command will wait for new events and display
  them. It is possible to stop the operation by entering any input (which will
  be discarded) or by closing the session. Finally, option "-n" is used to
  directly seek to the end of the buffer, which is often convenient when
  combined with "-w" to only report new events. For convenience, "-wn" or
  "-nw" may be used to enable both options at once.

show fd [-!plcfbsd]* [<fd>]
  Dump the list of either all open file descriptors or just the one number
  <fd> if specified. A set of flags may optionally be passed to restrict the
  dump only to certain FD types or to omit certain FD types. When '-' or '!'
  are encountered, the selection is inverted for the following characters in
  the same argument. The inversion is reset before each argument word
  delimited by white spaces. Selectable FD types include 'p' for pipes, 'l'
  for listeners, 'c' for connections (any type), 'f' for frontend connections,
  'b' for backend connections (any type), 's' for connections to servers, 'd'
  for connections to the "dispatch" address or the backend's transparent
  address. With this, 'b' is a shortcut for 'sd' and 'c' for 'fb' or 'fsd'.
  'c!f' is equivalent to 'b' ("any connections except frontend connections"
  are indeed backend connections). This is only aimed at developers who need
  to observe internal states in order to debug complex issues such as abnormal
  CPU usages. One fd is reported per line, and for each of them, the output
  shows its state in the poller, using upper case letters for enabled flags
  and lower case for disabled flags ("P" for "polled", "R" for "ready", "A"
  for "active"), the events status ("H" for "hangup", "E" for "error", "O"
  for "output", "P" for "priority" and "I" for "input"), a few other flags
  ("N" for "new" (just added into the fd cache), "U" for "updated" (received
  an update in the fd cache), "L" for "linger_risk", "C" for "cloned"), then
  the cached entry position, the pointer to the internal owner, the pointer
  to the I/O callback and its name when known. When the owner is a
  connection, the connection flags and the target are reported (frontend,
  proxy or server). When the owner is a listener, the listener's state and
  its frontend are reported. There is no point in using this command without
  a good knowledge of the internals. It's worth noting that the output format
  may evolve over time so this output must not be parsed by tools designed to
  be durable. Some internal structure states may look suspicious to the
  function listing them, in which case the output line will be suffixed with
  an exclamation mark ('!'). This may help find a starting point when trying
  to diagnose an incident.

show info [typed|json] [desc] [float]
  Dump info about haproxy status on the current process. If "typed" is passed
  as an optional argument, field numbers, names and types are emitted as well
  so that external monitoring products can easily retrieve, possibly
  aggregate, then report information found in fields they don't know. Each
  field is dumped on its own line. If "json" is passed as an optional argument
  then information provided by "typed" output is provided in JSON format as a
  list of JSON objects. By default, the format contains only two columns
  delimited by a colon (':'). The left one is the field name and the right
  one is the value. It is very important to note that in typed output
  format, the dump for a single object is contiguous so that there is no
  need for a consumer to store everything at once. If "float" is passed as an
  optional argument, some fields usually emitted as integers may switch to
  floats for higher accuracy. It is purposely unspecified which ones are
  concerned as this might evolve over time. Using this option implies that the
  consumer is able to process floats. The output format used is sprintf("%f").

  When using the typed output format, each line is made of 4 columns delimited
  by colons (':'). The first column is a dot-delimited series of 3 elements.
  The first element is the numeric position of the field in the list (starting
  at zero). This position shall not change over time, but holes are to be
  expected, depending on build options or if some fields are deleted in the
  future. The second element is the field name as it appears in the default
  "show info" output. The third element is the relative process number
  starting at 1.

  The rest of the line starting after the first colon follows the "typed
  output format" described in the section above. In short, the second column
  (after the first ':') indicates the origin, nature and scope of the
  variable. The third column indicates the type of the field, among "s32",
  "s64", "u32", "u64" and "str". Then the fourth column is the value itself,
  which the consumer knows how to parse thanks to column 3 and how to process
  thanks to column 2.

  Thus the overall line format in typed mode is :

    <field_pos>.<field_name>.<process_num>:<tags>:<type>:<value>
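
  A consumer can split such lines on ':' and '.'. A sketch using two sample
  lines taken from the "show info typed" example in this document (note that
  a robust parser should treat everything after the third colon as the value,
  since a "str" value could itself contain colons):

```shell
# Split typed "show info" lines into field name and value. The two
# sample lines come from the "show info typed" example output.
parsed=$(printf '%s\n' \
    '7.Uptime_sec.1:MDP:u32:8' \
    '9.PoolAlloc_MB.1:MGP:u32:0' \
  | awk -F: '{ split($1, id, "."); print id[2] "=" $4 }')
echo "$parsed"
```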

  When "desc" is appended to the command, one extra colon followed by a quoted
  string is appended with a description for the metric. At the time of
  writing, this is only supported for the "typed" and default output formats.

  Example :

    > show info
    Name: HAProxy
    Version: 1.7-dev1-de52ea-146
    Release_date: 2016/03/11
    Nbproc: 1
    Process_num: 1
    Pid: 28105
    Uptime: 0d 0h00m04s
    Uptime_sec: 4
    Memmax_MB: 0
    PoolAlloc_MB: 0
    PoolUsed_MB: 0
    PoolFailed: 0
    (...)

    > show info typed
    0.Name.1:POS:str:HAProxy
    1.Version.1:POS:str:1.7-dev1-de52ea-146
    2.Release_date.1:POS:str:2016/03/11
    3.Nbproc.1:CGS:u32:1
    4.Process_num.1:KGP:u32:1
    5.Pid.1:SGP:u32:28105
    6.Uptime.1:MDP:str:0d 0h00m08s
    7.Uptime_sec.1:MDP:u32:8
    8.Memmax_MB.1:CLP:u32:0
    9.PoolAlloc_MB.1:MGP:u32:0
    10.PoolUsed_MB.1:MGP:u32:0
    11.PoolFailed.1:MCP:u32:0
    (...)

  In the typed format, the presence of the process ID at the end of the
  first column makes it very easy to visually aggregate outputs from
  multiple processes.

  Example :

    $ ( echo show info typed | socat /var/run/haproxy.sock1 - ; \
        echo show info typed | socat /var/run/haproxy.sock2 - ) | \
      sort -t . -k 1,1n -k 2,2 -k 3,3n
    0.Name.1:POS:str:HAProxy
    0.Name.2:POS:str:HAProxy
    1.Version.1:POS:str:1.7-dev1-868ab3-148
    1.Version.2:POS:str:1.7-dev1-868ab3-148
    2.Release_date.1:POS:str:2016/03/11
    2.Release_date.2:POS:str:2016/03/11
    3.Nbproc.1:CGS:u32:2
    3.Nbproc.2:CGS:u32:2
    4.Process_num.1:KGP:u32:1
    4.Process_num.2:KGP:u32:2
    5.Pid.1:SGP:u32:30120
    5.Pid.2:SGP:u32:30121
    6.Uptime.1:MDP:str:0d 0h01m28s
    6.Uptime.2:MDP:str:0d 0h01m28s
    (...)

  The format of JSON output is described in a schema which may be output
  using "show schema json".

  The JSON output contains no extra whitespace in order to reduce the
  volume of output. For human consumption passing the output through a
  pretty printer may be helpful. Example :

    $ echo "show info json" | socat /var/run/haproxy.sock stdio | \
      python -m json.tool

show libs
  Dump the list of loaded shared dynamic libraries and object files, on
  systems that support it. When available, for each shared object the range
  of virtual addresses will be indicated, the size and the path to the
  object. This can be used for example to try to estimate what library
  provides a function that appears in a dump. Note that on many systems,
  addresses will change upon each restart (address space randomization), so
  that this list would need to be retrieved upon startup if it is expected to
  be used to analyse a core file. This command may only be issued on sockets
  configured for levels "operator" or "admin". Note that the output format
  may vary between operating systems, architectures and even haproxy
  versions, and ought not to be relied on in scripts.

show map [[@<ver>] <map>]
  Dump info about map converters. Without argument, the list of all available
  maps is returned. If a <map> is specified, its contents are dumped. <map>
  is the #<id> or <file>. By default the current version of the map is shown
  (the version currently being matched against and reported as 'curr_ver' in
  the map list). It is possible to instead dump other versions by prepending
  '@<ver>' before the map's identifier. The version works as a filter and
  non-existing versions will simply report no result. The 'entry_cnt' value
  represents the count of all the map entries, not just the active ones,
  which means that it also includes entries currently being added.

  In the output, the first column is a unique entry identifier, which is
  usable as a reference for operations "del map" and "set map". The second
  column is the pattern and the third column is the sample if available. The
  data returned are not directly a list of available maps, but are the list
  of all patterns composing any map. Many of these patterns can be shared
  with ACLs.
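
  The three columns split naturally on whitespace; a sketch over two
  hypothetical map entries (the identifiers, patterns and samples are made up
  for illustration):

```shell
# Split "show map" dump lines into pattern and sample value. Both
# entries are hypothetical, for illustration only.
out=$(printf '%s\n' \
    '0x55a1 example.org bk_web' \
    '0x55a2 example.net bk_api' \
  | awk '{ print "pattern=" $2 " sample=" $3 }')
echo "$out"
```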

show peers [dict|-] [<peers section>]
  Dump info about the peers configured in "peers" sections. Without argument,
  the list of the peers belonging to all the "peers" sections is listed. If
  <peers section> is specified, only the information about the peers belonging
  to this "peers" section is dumped. When "dict" is specified before the peers
  section name, the entire Tx/Rx dictionary caches will also be dumped (very
  large). Passing "-" may be required to dump a peers section called "dict".

  Here are two examples of outputs where the hostA, hostB and hostC peers
  belong to the "sharedlb" peers section. Only hostA and hostB are connected.
  Only hostA has sent data to hostB.

    $ echo "show peers" | socat - /tmp/hostA
    0x55deb0224320: [15/Apr/2019:11:28:01] id=sharedlb state=0 flags=0x3 \
      resync_timeout=<PAST> task_calls=45122
      0x55deb022b540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
        reconnect=4s confirm=0
        flags=0x0
      0x55deb022a440: id=hostA(local) addr=127.0.0.10:10000 status=NONE \
        reconnect=<NEVER> confirm=0
        flags=0x0
      0x55deb0227d70: id=hostB(remote) addr=127.0.0.11:10001 status=ESTA \
        reconnect=2s confirm=0
        flags=0x20000200 appctx:0x55deb028fba0 st0=7 st1=0 task_calls=14456 \
          state=EST
        xprt=RAW src=127.0.0.1:37257 addr=127.0.0.10:10000
        remote_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
        last_local_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
        shared tables:
          0x55deb0224a10 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
            last_acked=0 last_pushed=3 last_get=0 teaching_origin=0 update=3
            table:0x55deb022d6a0 id=stkt update=3 localupdate=3 \
              commitupdate=3 syncing=0

    $ echo "show peers" | socat - /tmp/hostB
    0x55871b5ab320: [15/Apr/2019:11:28:03] id=sharedlb state=0 flags=0x3 \
      resync_timeout=<PAST> task_calls=3
      0x55871b5b2540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
        reconnect=3s confirm=0
        flags=0x0
      0x55871b5b1440: id=hostB(local) addr=127.0.0.11:10001 status=NONE \
        reconnect=<NEVER> confirm=0
        flags=0x0
      0x55871b5aed70: id=hostA(remote) addr=127.0.0.10:10000 status=ESTA \
        reconnect=2s confirm=0
        flags=0x20000200 appctx:0x7fa46800ee00 st0=7 st1=0 task_calls=62356 \
          state=EST
        remote_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
        last_local_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
        shared tables:
          0x55871b5ab960 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
            last_acked=3 last_pushed=0 last_get=3 teaching_origin=0 update=0
            table:0x55871b5b46a0 id=stkt update=1 localupdate=0 \
              commitupdate=0 syncing=0

show pools [byname|bysize|byusage] [match <pfx>] [<nb>]
  Dump the status of internal memory pools. This is useful to track memory
  usage when suspecting a memory leak for example. It does exactly the same
  as the SIGQUIT when running in foreground except that it does not flush the
  pools. The output is not sorted by default. If "byname" is specified, it is
  sorted by pool name; if "bysize" is specified, it is sorted by item size in
  reverse order; if "byusage" is specified, it is sorted by total usage in
  reverse order, and only used entries are shown. It is also possible to limit
  the output to the <nb> first entries (e.g. when sorting by usage). Finally,
  if "match" followed by a prefix is specified, then only pools whose name
  starts with this prefix will be shown. The reported total only concerns
  pools matching the filtering criteria. Example:

    $ socat - /tmp/haproxy.sock <<< "show pools match quic byusage"
    Dumping pools usage. Use SIGQUIT to flush them.
      - Pool quic_conn_r (65560 bytes) : 1337 allocated (87653720 bytes), ...
      - Pool quic_crypto (1048 bytes) : 6685 allocated (7005880 bytes), ...
      - Pool quic_conn (4056 bytes) : 1337 allocated (5422872 bytes), ...
      - Pool quic_rxbuf (262168 bytes) : 8 allocated (2097344 bytes), ...
      - Pool quic_conne (184 bytes) : 9359 allocated (1722056 bytes), ...
      - Pool quic_frame (184 bytes) : 7938 allocated (1460592 bytes), ...
      - Pool quic_tx_pac (152 bytes) : 6454 allocated (981008 bytes), ...
      - Pool quic_tls_ke (56 bytes) : 12033 allocated (673848 bytes), ...
      - Pool quic_rx_pac (408 bytes) : 1596 allocated (651168 bytes), ...
      - Pool quic_tls_se (88 bytes) : 6685 allocated (588280 bytes), ...
      - Pool quic_cstrea (88 bytes) : 4011 allocated (352968 bytes), ...
      - Pool quic_tls_iv (24 bytes) : 12033 allocated (288792 bytes), ...
      - Pool quic_dgram (344 bytes) : 732 allocated (251808 bytes), ...
      - Pool quic_arng (56 bytes) : 4011 allocated (224616 bytes), ...
      - Pool quic_conn_c (152 bytes) : 1337 allocated (203224 bytes), ...
    Total: 15 pools, 109578176 bytes allocated, 109578176 used ...
|
2015-10-13 12:45:29 +00:00
|
|
|
|
2022-09-08 14:38:10 +00:00
|
|
|
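The per-pool lines above follow a fixed shape, so a monitoring script can
extract the figures with a small regular expression. A minimal Python sketch
(the helper name parse_pool_line is ours, not part of HAProxy):

```python
import re

# Matches lines such as:
#   - Pool quic_conn (4056 bytes) : 1337 allocated (5422872 bytes), ...
POOL_RE = re.compile(
    r"- Pool (?P<name>\S+) \((?P<size>\d+) bytes\) : "
    r"(?P<allocated>\d+) allocated \((?P<bytes>\d+) bytes\)"
)

def parse_pool_line(line):
    """Return (name, object_size, allocated_count, allocated_bytes) or None."""
    m = POOL_RE.search(line)
    if not m:
        return None
    return (m.group("name"), int(m.group("size")),
            int(m.group("allocated")), int(m.group("bytes")))

print(parse_pool_line(
    "  - Pool quic_conn (4056 bytes) : 1337 allocated (5422872 bytes), ..."))
# ('quic_conn', 4056, 1337, 5422872)
```

Summing the fourth element of each parsed line reproduces the "bytes
allocated" figure of the final "Total" line for the dumped pools.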
show profiling [{all | status | tasks | memory}] [byaddr|bytime|aggr|<max_lines>]*
Dumps the current profiling settings, one per line, as well as the command
needed to change them. When tasks profiling is enabled, some per-function
statistics collected by the scheduler will also be emitted, with a summary
covering the number of calls, total/avg CPU time and total/avg latency. When
memory profiling is enabled, some information such as the number of
allocations/releases and their sizes will be reported. It is possible to
limit the dump to only the profiling status, the tasks, or the memory
profiling by specifying the respective keywords; by default all profiling
information is dumped. It is also possible to limit the number of lines
of output of each category by specifying a numeric limit. It is possible to
request that the output is sorted by address or by total execution time
instead of usage, e.g. to ease comparisons between subsequent calls or to
check what needs to be optimized, and to aggregate task activity by called
function instead of seeing the details. Please note that profiling is
essentially aimed at developers since it gives hints about where CPU cycles
or memory are wasted in the code. There is nothing useful to monitor there.

show resolvers [<resolvers section id>]
Dump statistics for the given resolvers section, or all resolvers sections
if no section is supplied.

For each name server, the following counters are reported:
  sent: number of DNS requests sent to this server
  valid: number of valid DNS responses received from this server
  update: number of DNS responses used to update the server's IP address
  cname: number of CNAME responses
  cname_error: CNAME errors encountered with this server
  any_err: number of empty responses (i.e. the server does not support
    ANY type)
  nx: number of non-existent domain responses received from this server
  timeout: number of times this server did not answer in time
  refused: number of requests refused by this server
  other: any other DNS errors
  invalid: number of invalid DNS responses (from a protocol point of view)
  too_big: number of too large responses
  outdated: number of responses that arrived too late (after another name
    server)

show quic [<format>] [all]
Dump information on all active QUIC frontend connections. This command is
restricted and can only be issued on sockets configured for levels "operator"
or "admin". An optional format can be specified as first argument to control
the verbosity. Currently supported values are "oneline", which is the default
if format is unspecified, and "full". By default, connections in closing or
draining state are not displayed. Use the extra argument "all" to include
them in the output.

show servers conn [<backend>]
Dump the current and idle connections state of the servers belonging to the
designated backend (or all backends if none specified). A backend name or
identifier may be used.

The output consists of a header line showing the field titles, then one
server per line with, for each, the backend name and ID, server name and ID,
the address, port and a series of values. The number of fields varies
depending on thread count.

Given the threaded nature of idle connections, it's important to understand
that some values may change once read, and that as such, consistency within a
line isn't guaranteed. This output is mostly provided as a debugging tool and
is not meant to be routinely monitored nor graphed.

show servers state [<backend>]
Dump the state of the servers found in the running configuration. A backend
name or identifier may be provided to limit the output to this backend only.

The dump has the following format:
  - first line contains the format version (1 in this specification);
  - second line contains the column headers, prefixed by a sharp ('#');
  - third line and next ones contain data;
  - each line starting by a sharp ('#') is considered as a comment.

Since multiple versions of the output may co-exist, below is the list of
fields and their order per file format version :
1:
  be_id: Backend unique id.
  be_name: Backend label.
  srv_id: Server unique id (in the backend).
  srv_name: Server label.
  srv_addr: Server IP address.
  srv_op_state: Server operational state (UP/DOWN/...).
      0 = SRV_ST_STOPPED: The server is down.
      1 = SRV_ST_STARTING: The server is warming up (up but throttled).
      2 = SRV_ST_RUNNING: The server is fully up.
      3 = SRV_ST_STOPPING: The server is up but soft-stopping (eg: 404).
  srv_admin_state: Server administrative state (MAINT/DRAIN/...).
      The state is actually a mask of values :
      0x01 = SRV_ADMF_FMAINT: The server was explicitly forced into
             maintenance.
      0x02 = SRV_ADMF_IMAINT: The server has inherited the maintenance
             status from a tracked server.
      0x04 = SRV_ADMF_CMAINT: The server is in maintenance because of
             the configuration.
      0x08 = SRV_ADMF_FDRAIN: The server was explicitly forced into
             drain state.
      0x10 = SRV_ADMF_IDRAIN: The server has inherited the drain status
             from a tracked server.
      0x20 = SRV_ADMF_RMAINT: The server is in maintenance because of an
             IP address resolution failure.
      0x40 = SRV_ADMF_HMAINT: The server FQDN was set from stats socket.
  srv_uweight: User visible server's weight.
  srv_iweight: Server's initial weight.
  srv_time_since_last_change: Time since last operational change.
  srv_check_status: Last health check status.
  srv_check_result: Last check result (FAILED/PASSED/...).
      0 = CHK_RES_UNKNOWN: Initialized to this by default.
      1 = CHK_RES_NEUTRAL: Valid check but no status information.
      2 = CHK_RES_FAILED: Check failed.
      3 = CHK_RES_PASSED: Check succeeded and server is fully up again.
      4 = CHK_RES_CONDPASS: Check reports the server doesn't want new
          sessions.
  srv_check_health: Checks rise / fall current counter.
  srv_check_state: State of the check (ENABLED/PAUSED/...).
      The state is actually a mask of values :
      0x01 = CHK_ST_INPROGRESS: A check is currently running.
      0x02 = CHK_ST_CONFIGURED: This check is configured and may be
             enabled.
      0x04 = CHK_ST_ENABLED: This check is currently administratively
             enabled.
      0x08 = CHK_ST_PAUSED: Checks are paused because of maintenance
             (health only).
  srv_agent_state: State of the agent check (ENABLED/PAUSED/...).
      This state uses the same mask values as "srv_check_state", adding
      this specific one :
      0x10 = CHK_ST_AGENT: Check is an agent check (otherwise it's a
             health check).
  bk_f_forced_id: Flag to know if the backend ID is forced by
      configuration.
  srv_f_forced_id: Flag to know if the server's ID is forced by
      configuration.
  srv_fqdn: Server FQDN.
  srv_port: Server port.
  srvrecord: DNS SRV record associated with this server.
  srv_use_ssl: Use SSL for server connections.
  srv_check_port: Server health check port.
  srv_check_addr: Server health check address.
  srv_agent_addr: Server health agent address.
  srv_agent_port: Server health agent port.

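Since "srv_admin_state" is a bit mask rather than a single value, a dump
consumer has to test each bit instead of comparing the whole number. A minimal
Python sketch using the flag values documented above (the function name
decode_admin_state is ours, not part of HAProxy):

```python
# srv_admin_state flag values as listed in the field description above.
SRV_ADMF = {
    0x01: "SRV_ADMF_FMAINT",
    0x02: "SRV_ADMF_IMAINT",
    0x04: "SRV_ADMF_CMAINT",
    0x08: "SRV_ADMF_FDRAIN",
    0x10: "SRV_ADMF_IDRAIN",
    0x20: "SRV_ADMF_RMAINT",
    0x40: "SRV_ADMF_HMAINT",
}

def decode_admin_state(state):
    """Return the names of all flags set in an srv_admin_state value."""
    return [name for bit, name in sorted(SRV_ADMF.items()) if state & bit]

# A server both forced into maintenance and inheriting drain status:
print(decode_admin_state(0x01 | 0x10))
# ['SRV_ADMF_FMAINT', 'SRV_ADMF_IDRAIN']
```

The same approach applies to "srv_check_state" and "srv_agent_state", which
are masks as well.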
show sess
Dump all known sessions. Avoid doing this on slow connections as this can
be huge. This command is restricted and can only be issued on sockets
configured for levels "operator" or "admin". Note that on machines with
quickly recycled connections, it is possible that this output reports fewer
entries than really exist because it will dump all existing sessions up to
the last one that was created before the command was entered; those which
die in the mean time will not appear.

show sess <id>
Display a lot of internal information about the specified session identifier.
This identifier is the first field at the beginning of the lines in the dumps
of "show sess" (it corresponds to the session pointer). This information is
useless to most users but may be used by haproxy developers to troubleshoot a
complex bug. The output format is intentionally not documented so that it can
freely evolve depending on demands. You may find a description of all fields
returned in src/dumpstats.c

The special id "all" dumps the states of all sessions, which must be avoided
as much as possible as it is highly CPU intensive and can take a lot of time.

show stat [domain <dns|proxy>] [{<iid>|<proxy>} <type> <sid>] [typed|json] \
          [desc] [up|no-maint]
Dump statistics. The domain is used to select which statistics to print; dns
and proxy are available for now. By default, the CSV format is used; you can
activate the extended typed output format described in the section above if
"typed" is passed after the other arguments; or in JSON if "json" is passed
after the other arguments. By passing <iid>, <type> and <sid>, it is possible
to dump only selected items :
  - <iid> is a proxy ID, -1 to dump everything. Alternatively, a proxy name
    <proxy> may be specified. In this case, this proxy's ID will be used as
    the ID selector.
  - <type> selects the type of dumpable objects : 1 for frontends, 2 for
    backends, 4 for servers, -1 for everything. These values can be ORed,
    for example:
      1 + 2     = 3   -> frontend + backend.
      1 + 2 + 4 = 7   -> frontend + backend + server.
  - <sid> is a server ID, -1 to dump everything from the selected proxy.

Example :
  $ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1
  >>> Name: HAProxy
  Version: 1.4-dev2-49
  Release_date: 2009/09/23
  Nbproc: 1
  Process_num: 1
  (...)

  # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq, (...)
  stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...)
  stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...)
  (...)
  www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...)

  $

In this example, two commands have been issued at once. That way it's easy to
find which process the stats apply to in multi-process mode. This is not
needed in the typed output format as the process number is reported on each
line. Notice the empty line after the information output which marks the end
of the first block. A similar empty line appears at the end of the second
block (stats) so that the reader knows the output has not been truncated.
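Because the CSV header line starts with "# " followed by the column names, the
"show stat" output maps naturally onto a dictionary-based CSV reader. A
minimal Python sketch (the sample input is shortened; real dumps carry many
more columns, and the helper name parse_stat_csv is ours):

```python
import csv
import io

def parse_stat_csv(text):
    """Parse 'show stat' CSV output into a list of dicts keyed by column."""
    lines = [l for l in text.splitlines() if l.strip()]
    # The header line begins with "# "; strip it to get usable column names.
    lines[0] = lines[0].lstrip("# ")
    return list(csv.DictReader(io.StringIO("\n".join(lines))))

sample = (
    "# pxname,svname,scur,status\n"
    "stats,FRONTEND,0,OPEN\n"
    "www1,BACKEND,3,UP\n"
)
rows = parse_stat_csv(sample)
print(rows[1]["pxname"], rows[1]["scur"])  # www1 3
```

Keying on column names rather than positions keeps such a script working even
when new columns are appended to the CSV output in later versions.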

When "typed" is specified, the output format is more suitable for monitoring
tools because it provides numeric positions and indicates the type of each
output field. Each value stands on its own line with process number, element
number, nature, origin and scope. This same format is available via the HTTP
stats by passing ";typed" after the URI. It is very important to note that in
typed output format, the dump for a single object is contiguous so that there
is no need for a consumer to store everything at once.

The "up" modifier will result in listing only servers which are reportedly up
or not checked. Those down, unresolved, or in maintenance will not be listed.
This is analogous to the ";up" option on the HTTP stats. Similarly, the
"no-maint" modifier will act like the ";no-maint" HTTP modifier and will
result in disabled servers not being listed. The difference is that those
which are enabled but down will not be evicted.

When using the typed output format, each line is made of 4 columns delimited
by colons (':'). The first column is a dot-delimited series of 6 elements. The
first element is a letter indicating the type of the object being described.
At the moment the following object types are known : 'F' for a frontend, 'B'
for a backend, 'L' for a listener, and 'S' for a server. The second element is
a positive integer representing the unique identifier of the proxy the object
belongs to. It is equivalent to the "iid" column of the CSV output and matches
the value in front of the optional "id" directive found in the frontend or
backend section. The third element is a positive integer containing the unique
object identifier inside the proxy, and corresponds to the "sid" column of the
CSV output. ID 0 is reported when dumping a frontend or a backend. For a
listener or a server, this corresponds to their respective ID inside the
proxy. The fourth element is the numeric position of the field in the list
(starting at zero). This position shall not change over time, but holes are to
be expected, depending on build options or if some fields are deleted in the
future. The fifth element is the field name as it appears in the CSV output.
The sixth element is a positive integer and is the relative process number
starting at 1.

The rest of the line starting after the first colon follows the "typed output
format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature and scope of the variable. The third
column indicates the field type, among "s32", "s64", "u32", "u64", "flt" and
"str". Then the fourth column is the value itself, which the consumer knows
how to parse thanks to column 3 and how to process thanks to column 2.

When "desc" is appended to the command, one extra colon followed by a quoted
string is appended with a description for the metric. At the time of writing,
this is only supported for the "typed" output format.

Thus the overall line format in typed mode is :

  <obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value>

Here's an example of typed output format :

  $ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
  F.2.0.0.pxname.1:MGP:str:private-frontend
  F.2.0.1.svname.1:MGP:str:FRONTEND
  F.2.0.8.bin.1:MGP:u64:0
  F.2.0.9.bout.1:MGP:u64:0
  F.2.0.40.hrsp_2xx.1:MGP:u64:0
  L.2.1.0.pxname.1:MGP:str:private-frontend
  L.2.1.1.svname.1:MGP:str:sock-1
  L.2.1.17.status.1:MGP:str:OPEN
  L.2.1.73.addr.1:MGP:str:0.0.0.0:8001
  S.3.13.60.rtime.1:MCP:u32:0
  S.3.13.61.ttime.1:MCP:u32:0
  S.3.13.62.agent_status.1:MGP:str:L4TOUT
  S.3.13.64.agent_duration.1:MGP:u64:2001
  S.3.13.65.check_desc.1:MCP:str:Layer4 timeout
  S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout
  S.3.13.67.check_rise.1:MCP:u32:2
  S.3.13.68.check_fall.1:MCP:u32:3
  S.3.13.69.check_health.1:SGP:u32:0
  S.3.13.70.agent_rise.1:MaP:u32:1
  S.3.13.71.agent_fall.1:SGP:u32:1
  S.3.13.72.agent_health.1:SGP:u32:1
  S.3.13.73.addr.1:MCP:str:1.255.255.255:8888
  S.3.13.75.mode.1:MAP:str:http
  B.3.0.0.pxname.1:MGP:str:private-backend
  B.3.0.1.svname.1:MGP:str:BACKEND
  B.3.0.2.qcur.1:MGP:u32:0
  B.3.0.3.qmax.1:MGP:u32:0
  B.3.0.4.scur.1:MGP:u32:0
  B.3.0.5.smax.1:MGP:u32:0
  B.3.0.6.slim.1:MGP:u32:1000
  B.3.0.55.lastsess.1:MMP:s32:-1
  (...)
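A consumer of the typed format can split each line on the first three colons
and then split the first column on dots, converting the value according to the
declared field type. A minimal Python sketch using lines from the example
above (the helper name parse_typed_line is ours, not part of HAProxy):

```python
def parse_typed_line(line):
    """Split one 'show stat typed' line into its documented components."""
    # Only split on the first 3 colons: a "str" value such as an address
    # ("0.0.0.0:8001") may itself contain colons.
    key, tags, ftype, value = line.split(":", 3)
    obj, px_id, oid, fpos, fname, process_num = key.split(".")
    # Convert the value according to the declared field type (column 3).
    if ftype in ("s32", "s64", "u32", "u64"):
        value = int(value)
    elif ftype == "flt":
        value = float(value)
    return {"obj": obj, "px_id": int(px_id), "id": int(oid),
            "fpos": int(fpos), "fname": fname, "process": int(process_num),
            "tags": tags, "type": ftype, "value": value}

print(parse_typed_line("B.3.0.6.slim.1:MGP:u32:1000"))
```

Note that this simple split relies on field names never containing dots, which
holds for the names emitted in the CSV header.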

In the typed format, the presence of the process ID at the end of the
first column makes it very easy to visually aggregate outputs from
multiple processes, as shown in the example below where each line appears
for each process :

  $ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
      echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
    sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
  B.3.0.0.pxname.1:MGP:str:private-backend
  B.3.0.0.pxname.2:MGP:str:private-backend
  B.3.0.1.svname.1:MGP:str:BACKEND
  B.3.0.1.svname.2:MGP:str:BACKEND
  B.3.0.2.qcur.1:MGP:u32:0
  B.3.0.2.qcur.2:MGP:u32:0
  B.3.0.3.qmax.1:MGP:u32:0
  B.3.0.3.qmax.2:MGP:u32:0
  B.3.0.4.scur.1:MGP:u32:0
  B.3.0.4.scur.2:MGP:u32:0
  B.3.0.5.smax.1:MGP:u32:0
  B.3.0.5.smax.2:MGP:u32:0
  B.3.0.6.slim.1:MGP:u32:1000
  B.3.0.6.slim.2:MGP:u32:1000
  (...)

The format of JSON output is described in a schema which may be output
using "show schema json".

The JSON output contains no extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful. Example :

  $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \
    python -m json.tool

show ssl ca-file [<cafile>[:<index>]]
Display the list of CA files loaded into the process and their respective
certificate counts. The certificates are not used by any frontend or backend
until their status is "Used".
A "@system-ca" entry can appear in the list; it is loaded by the httpclient
by default. It contains the list of trusted CAs of your system returned by
OpenSSL.
If a filename is prefixed by an asterisk, it is a transaction which is not
committed yet. If a <cafile> is specified without <index>, it will show
the status of the CA file ("Used"/"Unused") followed by details about all the
certificates contained in the CA file. The details displayed for every
certificate are the same as the ones displayed by a "show ssl cert" command.
If a <cafile> is specified followed by an <index>, it will only display the
details of the certificate having the specified index. Indexes start from 1.
If the index is invalid (too big for instance), nothing will be displayed.
This command can be useful to check if a CA file was properly updated.
You can also display the details of an ongoing transaction by prefixing the
filename by an asterisk.

Example :

  $ echo "show ssl ca-file" | socat /var/run/haproxy.master -
  # transaction
  *cafile.crt - 2 certificate(s)
  # filename
  cafile.crt - 1 certificate(s)

  $ echo "show ssl ca-file cafile.crt" | socat /var/run/haproxy.master -
  Filename: /home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
  Status: Used

  Certificate #1:
  Serial: 11A4D2200DC84376E7D233CAFF39DF44BF8D1211
  notBefore: Apr 1 07:40:53 2021 GMT
  notAfter: Aug 17 07:40:53 2048 GMT
  Subject Alternative Name:
  Algorithm: RSA4096
  SHA1 FingerPrint: A111EF0FEFCDE11D47FE3F33ADCA8435EBEA4864
  Subject: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA
  Issuer: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA

  $ echo "show ssl ca-file *cafile.crt:2" | socat /var/run/haproxy.master -
  Filename: */home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
  Status: Unused

  Certificate #2:
  Serial: 587A1CE5ED855040A0C82BF255FF300ADB7C8136
  [...]

show ssl cert [<filename>]
Display the list of certificates loaded into the process. They are not used
by any frontend or backend until their status is "Used".
If a filename is prefixed by an asterisk, it is a transaction which is not
committed yet. If a filename is specified, it will show details about the
certificate. This command can be useful to check if a certificate was
properly updated. You can also display details on a transaction by prefixing
the filename by an asterisk.
This command can also be used to display the details of a certificate's OCSP
response by suffixing the filename with a ".ocsp" extension. It works for
committed certificates as well as for ongoing transactions. On a committed
certificate, this command is equivalent to calling "show ssl ocsp-response"
with the certificate's corresponding OCSP response ID.

Example :

  $ echo "@1 show ssl cert" | socat /var/run/haproxy.master -
  # transaction
  *test.local.pem
  # filename
  test.local.pem

  $ echo "@1 show ssl cert test.local.pem" | socat /var/run/haproxy.master -
  Filename: test.local.pem
  Status: Used
  Serial: 03ECC19BA54B25E85ABA46EE561B9A10D26F
  notBefore: Sep 13 21:20:24 2019 GMT
  notAfter: Dec 12 21:20:24 2019 GMT
  Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
  Subject: /CN=test.local
  Subject Alternative Name: DNS:test.local, DNS:imap.test.local
  Algorithm: RSA2048
  SHA1 FingerPrint: 417A11CAE25F607B24F638B4A8AEE51D1E211477

  $ echo "@1 show ssl cert *test.local.pem" | socat /var/run/haproxy.master -
  Filename: *test.local.pem
  Status: Unused
  [...]

show ssl crl-file [<crlfile>[:<index>]]
Display the list of CRL files loaded into the process. They are not used
by any frontend or backend until their status is "Used".
If a filename is prefixed by an asterisk, it is a transaction which is not
committed yet. If a <crlfile> is specified without <index>, it will show the
status of the CRL file ("Used"/"Unused") followed by details about all the
Revocation Lists contained in the CRL file. The details displayed for every
list are based on the output of "openssl crl -text -noout -in <file>".
If a <crlfile> is specified followed by an <index>, it will only display the
details of the list having the specified index. Indexes start from 1.
If the index is invalid (too big for instance), nothing will be displayed.
This command can be useful to check if a CRL file was properly updated.
You can also display the details of an ongoing transaction by prefixing the
filename by an asterisk.

Example :

  $ echo "show ssl crl-file" | socat /var/run/haproxy.master -
  # transaction
  *crlfile.pem
  # filename
  crlfile.pem

  $ echo "show ssl crl-file crlfile.pem" | socat /var/run/haproxy.master -
  Filename: /home/tricot/work/haproxy/reg-tests/ssl/crlfile.pem
  Status: Used

  Certificate Revocation List #1:
  Version 1
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: /C=FR/O=HAProxy Technologies/CN=Intermediate CA2
  Last Update: Apr 23 14:45:39 2021 GMT
  Next Update: Sep 8 14:45:39 2048 GMT
  Revoked Certificates:
    Serial Number: 1008
    Revocation Date: Apr 23 14:45:36 2021 GMT

  Certificate Revocation List #2:
  Version 1
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: /C=FR/O=HAProxy Technologies/CN=Root CA
  Last Update: Apr 23 14:30:44 2021 GMT
  Next Update: Sep 8 14:30:44 2048 GMT
  No Revoked Certificates.

show ssl crt-list [-n] [<filename>]
Display the list of crt-list and directories used in the HAProxy
configuration. If a filename is specified, dump the content of a crt-list or
a directory. Once dumped the output can be used as a crt-list file.
The '-n' option can be used to display the line number, which is useful when
combined with the 'del ssl crt-list' option when an entry is duplicated. The
output with the '-n' option is not compatible with the crt-list format and
not loadable by haproxy.

Example:
  echo "show ssl crt-list -n localhost.crt-list" | socat /tmp/sock1 -
  # localhost.crt-list
  common.pem:1 !not.test1.com *.test1.com !localhost
  common.pem:2
  ecdsa.pem:3 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3] localhost !www.test1.com
  ecdsa.pem:4 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3]

show ssl ocsp-response [[text|base64] <id|path>]
Display the IDs of the OCSP tree entries corresponding to all the OCSP
responses used in HAProxy, as well as the corresponding frontend
certificate's path, the issuer's name and key hash and the serial number of
the certificate for which the OCSP response was built.
If a valid <id> or the <path> of a valid frontend certificate is provided,
display the contents of the corresponding OCSP response. When an <id> is
provided, it is possible to define the format in which the data is dumped.
The 'text' option is the default one and displays detailed information
about the OCSP response the same way as in an "openssl ocsp -respin
<ocsp-response> -text" call. The 'base64' format dumps the contents of an
OCSP response in base64.

Example :

  $ echo "show ssl ocsp-response" | socat /var/run/haproxy.master -
  # Certificate IDs
  Certificate ID key : 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a
  Certificate path : /path_to_cert/foo.pem
  Certificate ID:
    Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
    Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
    Serial Number: 100A

  $ echo "show ssl ocsp-response 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a" | socat /var/run/haproxy.master -
  OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = FR, O = HAProxy Technologies, CN = ocsp.haproxy.com
    Produced At: May 27 15:43:38 2021 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
      Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
      Serial Number: 100A
    Cert Status: good
    This Update: May 27 15:43:38 2021 GMT
    Next Update: Oct 12 15:43:38 2048 GMT
    [...]

$ echo "show ssl ocsp-response base64 /path_to_cert/foo.pem" | socat /var/run/haproxy.sock -
|
2023-02-28 16:46:28 +00:00
|
|
|
MIIB8woBAKCCAewwggHoBgkrBgEFBQcwAQEEggHZMIIB1TCBvqE[...]
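The base64 form is convenient for feeding the response back to openssl for
offline inspection. A minimal sketch, assuming openssl is available; the
helper name and all paths are examples, not part of the CLI:

```shell
# decode_ocsp is a small hypothetical helper: it reads a base64 OCSP
# response (as dumped above) on stdin and prints it in text form.
decode_ocsp() {
  openssl base64 -d | openssl ocsp -respin /dev/stdin -text -noverify
}

# Intended use (socket and certificate paths are examples):
#   echo "show ssl ocsp-response base64 /path_to_cert/foo.pem" | \
#     socat /var/run/haproxy.sock - | decode_ocsp
```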

show ssl ocsp-updates
  Display information about the entries concerned by the OCSP update
  mechanism. The command outputs one line per OCSP response, containing the
  expected update time of the response, the time of the last successful
  update, and counters of successful and failed updates. It also gives the
  status of the last update (successful or not) in both numerical and text
  form. See below for a full list of possible errors. The lines are sorted
  by ascending 'Next Update' time. Each line also contains the path to the
  first frontend certificate that uses the OCSP response.

  See the "show ssl ocsp-response" command and the "ocsp-update" option for
  more information on the OCSP auto update.

  The update error codes and error strings can be the following:

    +----+-------------------------------------+
    | ID | message                             |
    +----+-------------------------------------+
    | 0  | "Unknown"                           |
    | 1  | "Update successful"                 |
    | 2  | "HTTP error"                        |
    | 3  | "Missing \"ocsp-response\" header"  |
    | 4  | "OCSP response check failure"       |
    | 5  | "Error during insertion"            |
    +----+-------------------------------------+

  Example :
    $ echo "show ssl ocsp-updates" | socat /tmp/haproxy.sock -
    OCSP Certid | Path | Next Update | Last Update | Successes | Failures | Last Update Status | Last Update Status (str)
    303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a02021015 | /path_to_cert/cert.pem | 30/Jan/2023:00:08:09 +0000 | - | 0 | 1 | 2 | HTTP error
    304b300906052b0e03021a0500041448dac9a0fb2bd32d4ff0de68d2f567b735f9b3c40414142eb317b75856cbae500940e61faf9d8b14c2c6021203e16a7aa01542f291237b454a627fdea9c1 | /path_to_cert/other_cert.pem | 30/Jan/2023:01:07:09 +0000 | 30/Jan/2023:00:07:09 +0000 | 1 | 0 | 1 | Update successful
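The dump lends itself to scripting. A sketch that lists only the responses
whose last update failed; the field positions follow the example output above
and may evolve between versions, and the helper name is an example:

```shell
# ocsp_failed_updates reads a "show ssl ocsp-updates" dump on stdin and
# prints "path: error string" for each response whose numeric "Last
# Update Status" (field 7) is not 1 ("Update successful").
ocsp_failed_updates() {
  awk -F' \\| ' 'NR > 1 && $7 != 1 { print $2 ": " $8 }'
}

# Intended use (socket path is an example):
#   echo "show ssl ocsp-updates" | socat /tmp/haproxy.sock - | ocsp_failed_updates
```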

show ssl providers
  Display the names of the providers loaded by OpenSSL during init. Provider
  loading can indeed be configured via the OpenSSL configuration file, and
  this command makes it possible to check that the right providers were
  loaded. This command is only available with OpenSSL v3.

  Example :
    $ echo "show ssl providers" | socat /var/run/haproxy.master -
    Loaded providers :
        - fips
        - base

show startup-logs
  Dump all messages emitted during the startup of the current haproxy
  process. Each startup-logs buffer is unique to its haproxy worker.

  This keyword also exists on the master CLI, where it shows the latest
  startup or reload attempt.

show table
  Dump general information on all known stick-tables. Their name is returned
  (the name of the proxy which holds them), their type (currently zero,
  always IP), their size in maximum possible number of entries, and the
  number of entries currently in use.

  Example :
    $ echo "show table" | socat stdio /tmp/sock1
    >>> # table: front_pub, type: ip, size:204800, used:171454
    >>> # table: back_rdp, type: ip, size:204800, used:0

show table <name> [ data.<type> <operator> <value> [data.<type> ...]] | [ key <key> ]
  Dump contents of stick-table <name>. In this mode, a first line of generic
  information about the table is reported as with "show table", then all
  entries are dumped. Since this can be quite heavy, it is possible to
  specify a filter in order to select which entries to display.

  When the "data." form is used the filter applies to the stored data (see
  "stick-table" in section 4.2). A stored data type must be specified in
  <type>, and this data type must be stored in the table otherwise an error
  is reported. The data is compared according to <operator> with the 64-bit
  integer <value>. Operators are the same as with the ACLs :

    - eq : match entries whose data is equal to this value
    - ne : match entries whose data is not equal to this value
    - le : match entries whose data is less than or equal to this value
    - ge : match entries whose data is greater than or equal to this value
    - lt : match entries whose data is less than this value
    - gt : match entries whose data is greater than this value

  In this form, you can use multiple data filter entries, up to a maximum
  defined at build time (4 by default).

  When the key form is used the entry <key> is shown. The key must be of the
  same type as the table, which currently is limited to IPv4, IPv6, integer,
  and string.

  Example :
    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \
          bytes_out_rate(60000)=187
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "show table http_proxy data.conn_rate gt 5" | \
      socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "show table http_proxy key 127.0.0.2" | \
      socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

  When the data criterion applies to a dynamic value dependent on time such
  as a bytes rate, the value is dynamically computed during the evaluation
  of the entry in order to decide whether it has to be dumped or not. This
  means that such a filter could match for some time then not match anymore,
  because as time goes, the average event rate drops.

  It is possible to use this to extract lists of IP addresses abusing the
  service, in order to monitor them or even blacklist them in a firewall.
  Example :
    $ echo "show table http_proxy data.gpc0 gt 0" \
      | socat stdio /tmp/sock1 \
      | fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt
    ( or | awk '/key/{ print a[split($2,a,"=")]; }' )

  When the stick-table is synchronized to a peers section supporting
  sharding, the shard number will be displayed for each key (otherwise '0'
  is reported). This makes it possible to know which peers will receive
  each key.
  Example:
    $ echo "show table http_proxy" | socat stdio /tmp/sock1 | fgrep shard=
    0x7f23b0c822a8: key=10.0.0.2 use=0 exp=296398 shard=9 gpc0=0
    0x7f23a063f948: key=10.0.0.6 use=0 exp=296075 shard=12 gpc0=0
    0x7f23b03920b8: key=10.0.0.8 use=0 exp=296766 shard=1 gpc0=0
    0x7f23a43c09e8: key=10.0.0.12 use=0 exp=295368 shard=8 gpc0=0
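To check how evenly the keys spread over shards, the dump can be aggregated
with awk. A small sketch; the helper name is an example and the field
extraction assumes the "shard=<n>" form shown above:

```shell
# shard_histogram reads a "show table" dump on stdin and prints the
# number of keys held by each shard, sorted by shard label.
shard_histogram() {
  awk 'match($0, /shard=[0-9]+/) {
         count[substr($0, RSTART + 6, RLENGTH - 6)]++
       }
       END { for (s in count) print "shard " s ": " count[s] }' | sort
}

# Intended use (socket path and table name are examples):
#   echo "show table http_proxy" | socat stdio /tmp/sock1 | shard_histogram
```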

show tasks
  Dumps the number of tasks currently in the run queue, with the number of
  occurrences for each function, and their average latency when it's known
  (for pure tasks with task profiling enabled). The dump is a snapshot of
  the instant it's done, and there may be variations depending on what tasks
  are left in the queue at the moment it happens, especially in mono-thread
  mode as there's less chance that I/Os can refill the queue (unless the
  queue is full). This command takes exclusive access to the process and can
  cause minor but measurable latencies when issued on a highly loaded
  process, so it must not be abused by monitoring bots.

show threads
  Dumps some internal states and structures for each thread, that may be
  useful to help developers understand a problem. The output tries to be
  readable by showing one block per thread. When haproxy is built with
  USE_THREAD_DUMP=1, an advanced dump mechanism involving thread signals is
  used so that each thread can dump its own state in turn. Without this
  option, the thread processing the command shows all its details but the
  other ones are less detailed. A star ('*') is displayed in front of the
  thread handling the command. A right angle bracket ('>') may also be
  displayed in front of threads which didn't make any progress since the
  last invocation of this command, indicating a bug in the code which must
  absolutely be reported. When this happens between two threads it usually
  indicates a deadlock. If a thread is alone, it's a different bug like a
  corrupted list. In all cases the process is not fully functional anymore
  and needs to be restarted.

  The output format is purposely not documented so that it can easily evolve
  as new needs are identified, without having to maintain any form of
  backwards compatibility, and just like with "show activity", the values
  are meaningless without the code at hand.

show tls-keys [id|*]
  Dump all loaded TLS ticket key references. The TLS ticket key reference ID
  and the file from which the keys have been loaded are shown. Both of these
  can be used to update the TLS keys using "set ssl tls-key". If an ID is
  specified as parameter, it will dump the tickets; using '*' it will dump
  every key from every reference.

show schema json
  Dump the schema used for the output of "show info json" and "show stat
  json".

  The output contains no extra whitespace in order to reduce its volume.
  For human consumption passing the output through a pretty printer may be
  helpful. Example :

    $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
      python -m json.tool

  The schema follows "JSON Schema" (json-schema.org) and accordingly
  verifiers may be used to verify the output of "show info json" and "show
  stat json" against the schema.

show trace [<source>]
  Show the current trace status. For each source a line is displayed with a
  single-character status indicating if the trace is stopped, waiting, or
  running. The output sink used by the trace is indicated (or "none" if none
  was set), as well as the number of dropped events in this sink, followed
  by a brief description of the source. If a source name is specified, a
  detailed list of all events supported by the source is displayed, with
  their status for each action (report, start, pause, stop) indicated by a
  "+" if they are enabled, or a "-" otherwise. All these events are
  independent and an event might trigger a start without being reported and
  conversely.

show version
  Show the version of the current HAProxy process. This is available from
  both the master and the workers CLI.
  Example:

    $ echo "show version" | socat /var/run/haproxy.sock stdio
    2.4.9

    $ echo "show version" | socat /var/run/haproxy-master.sock stdio
    2.5.0

shutdown frontend <frontend>
  Completely delete the specified frontend. All the ports it was bound to
  will be released. It will not be possible to enable the frontend anymore
  after this operation. This is intended to be used in environments where
  stopping a proxy is not even imaginable but a misconfigured proxy must be
  fixed. That way it's possible to release the port and bind it into another
  process to restore operations. The frontend will not appear at all on the
  stats page once it is terminated.

  The frontend may be specified either by its name or by its numeric ID,
  prefixed with a sharp ('#').

  This command is restricted and can only be issued on sockets configured
  for level "admin".

shutdown session <id>
  Immediately terminate the session matching the specified session
  identifier. This identifier is the first field at the beginning of the
  lines in the dumps of "show sess" (it corresponds to the session pointer).
  This can be used to terminate a long-running session without waiting for a
  timeout or when an endless transfer is ongoing. Such terminated sessions
  are reported with a 'K' flag in the logs.
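The session identifiers can be collected from "show sess" with a small
filter. A sketch, assuming the usual dump layout (session pointer followed by
a colon); the helper name is an example:

```shell
# sess_ptrs_from reads a "show sess" dump on stdin and prints the session
# pointers of the lines matching an awk regular expression, ready to be
# fed back into "shutdown session <id>".
sess_ptrs_from() {
  awk -v pat="$1" '$0 ~ pat { sub(/:$/, "", $1); print $1 }'
}

# Intended use (socket path and address are examples):
#   echo "show sess" | socat stdio /var/run/haproxy.sock | \
#     sess_ptrs_from 'src=10[.]0[.]0[.]1:' | while read -r p; do
#       echo "shutdown session $p" | socat stdio /var/run/haproxy.sock
#     done
```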

shutdown sessions server <backend>/<server>
  Immediately terminate all the sessions attached to the specified server.
  This can be used to terminate long-running sessions after a server is put
  into maintenance mode, for instance. Such terminated sessions are reported
  with a 'K' flag in the logs.

trace
  The "trace" command alone lists the trace sources, their current status,
  and their brief descriptions. It is only meant as a menu to enter next
  levels, see other "trace" commands below.

trace 0
  Immediately stops all traces. This is made to be used as a quick solution
  to terminate a debugging session, or as an emergency action in case
  complex traces were enabled on multiple sources and impact the service.

trace <source> event [ [+|-|!]<name> ]
  Without argument, this will list all the events supported by the
  designated source. They are prefixed with a "-" if they are not enabled,
  or a "+" if they are enabled. It is important to note that a single trace
  may be labelled with multiple events, and as long as any of the enabled
  events matches one of the events labelled on the trace, the event will be
  passed to the trace subsystem. For example, receiving an HTTP/2 frame of
  type HEADERS may trigger a frame event and a stream event since the frame
  creates a new stream. If either the frame event or the stream event is
  enabled for this source, the frame will be passed to the trace framework.

  With an argument, it is possible to toggle the state of each event and
  individually enable or disable them. Two special keywords are supported:
  "none", which matches no event and is used to disable all events at once,
  and "any", which matches all events and is used to enable all events at
  once. Other events are specific to the event source. It is possible to
  enable one event by specifying its name, optionally prefixed with '+' for
  better readability. It is possible to disable one event by specifying its
  name prefixed by a '-' or a '!'.

  One way to completely disable a trace source is to pass "event none";
  this source will then be totally ignored.

trace <source> level [<level>]
  Without argument, this will list all trace levels for this source, and the
  current one will be indicated by a star ('*') prepended in front of it.
  With an argument, this will change the trace level to the specified level.
  Detail levels are a form of filters that are applied before reporting the
  events. These filters are used to selectively include or exclude events
  depending on their level of importance. For example a developer might need
  to know precisely where in the code an HTTP header was considered invalid
  while the end user may not even care about this header's validity at all.
  There are currently 5 distinct levels for a trace :

    user       this will report information suitable for use by a regular
               haproxy user who wants to observe their traffic. Typically
               some HTTP requests and responses will be reported without
               much detail. Most sources will set this as the default level
               to ease operations.

    proto      in addition to what is reported at the "user" level, it also
               displays protocol-level updates. This can for example be the
               frame types or HTTP headers after decoding.

    state      in addition to what is reported at the "proto" level, it
               will also display state transitions (or failed transitions)
               which happen in parsers, so this will show attempts to
               perform an operation while the "proto" level only shows the
               final operation.

    data       in addition to what is reported at the "state" level, it
               will also include data transfers between the various layers.

    developer  it reports everything available, which can include advanced
               information such as "breaking out of this loop" that is only
               relevant to a developer trying to understand a bug that only
               happens once in a while in the field. Function names are
               only reported at this level.

  It is highly recommended to always use the "user" level only and to
  switch to other levels only if instructed to do so by a developer. Also
  it is a good idea to first configure the events before switching to
  higher levels, as it may save from dumping many lines if no filter is
  applied.
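The recommendations above can be turned into a short scripted session. A
sketch using a hypothetical trace_cmd helper and the "h2" source as an
example; set HAPROXY_SOCK to a real stats socket to actually send the
commands, otherwise they are only printed for review:

```shell
# trace_cmd echoes each CLI command and only sends it when HAPROXY_SOCK
# points at a stats socket, so the sequence can be reviewed dry first.
trace_cmd() {
  echo ">> $1"
  if [ -n "$HAPROXY_SOCK" ]; then
    echo "$1" | socat stdio "$HAPROXY_SOCK"
  fi
}

trace_cmd "trace h2 sink stderr"   # choose a destination first
trace_cmd "trace h2 level user"    # stay at the recommended level
trace_cmd "trace h2 start now"
# ... reproduce the problem, then:
trace_cmd "trace h2 stop now"
```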

trace <source> lock [criterion]
  Without argument, this will list all the criteria supported by this
  source for lock-on processing, and display the current choice with a star
  ('*') in front of it. Lock-on means that the source will focus on the
  first matching event, stick only to the criterion which triggered this
  event, and ignore all other ones until the trace stops. This allows for
  example to take a trace on a single connection or on a single stream. The
  following criteria are supported by some traces, though not necessarily
  all, since some of them might not be available to the source :

    backend      lock on the backend that started the trace
    connection   lock on the connection that started the trace
    frontend     lock on the frontend that started the trace
    listener     lock on the listener that started the trace
    nothing      do not lock on anything
    server       lock on the server that started the trace
    session      lock on the session that started the trace
    thread       lock on the thread that started the trace

  In addition to this, each source may provide up to 4 specific criteria
  such as internal states or connection IDs. For example in HTTP/2 it is
  possible to lock on the H2 stream and ignore other streams once a trace
  starts.

  When a criterion is passed in argument, this one is used instead of the
  other ones and any existing tracking is immediately terminated so that it
  can restart with the new criterion. The special keyword "nothing" is
  supported by all sources to permanently disable tracking.

trace <source> { pause | start | stop } [ [+|-|!]event]
  Without argument, this will list the events enabled to automatically
  pause, start, or stop a trace for this source. These events are specific
  to each trace source. With an argument, this will either enable the event
  for the specified action (if optionally prefixed by a '+') or disable it
  (if prefixed by a '-' or '!'). The special keyword "now" is not an event
  and requests to take the action immediately. The keywords "none" and
  "any" are supported just like in "trace event".

  The 3 supported actions are respectively "pause", "start" and "stop". The
  "pause" action enumerates events which will cause a running trace to stop
  and wait for a new start event to restart it. The "start" action
  enumerates the events which switch the trace into the waiting mode until
  one of the start events appears. And the "stop" action enumerates the
  events which definitely stop the trace until it is manually enabled
  again. In practice it makes sense to manually start a trace using "start
  now" without caring about events, and to stop it using "stop now". In
  order to capture more subtle event sequences, setting "start" to a normal
  event (like receiving an HTTP request) and "stop" to a very rare event
  like emitting a certain error, will ensure that the last captured events
  will match the desired criteria. And the pause event is useful to detect
  the end of a sequence, disable the lock-on and wait for another
  opportunity to take a capture. In this case it can make sense to enable
  lock-on to spot only one specific criterion (e.g. a stream), and have
  "start" set to anything that starts this criterion (e.g. all events which
  create a stream), "stop" set to the expected anomaly, and "pause" to
  anything that ends that criterion (e.g. any end of stream event). In this
  case the trace log will contain complete sequences of perfectly clean
  series affecting a single object, until the last sequence containing
  everything from the beginning to the anomaly.

trace <source> sink [<sink>]
  Without argument, this will list all event sinks available for this
  source, and the currently configured one will have a star ('*') prepended
  in front of it. Sink "none" is always available and means that all events
  are simply dropped, though their processing is not ignored (e.g. lock-on
  does occur). Other sinks are available depending on configuration and
  build options, but typically "stdout" and "stderr" will be usable in
  debug mode, and in-memory ring buffers should be available as well. When
  a name is specified, the sink instantly changes for the specified source.
  Events are not changed during a sink change. In the worst case some may
  be lost if an invalid sink is used (or "none"), but operations do
  continue to a different destination.

trace <source> verbosity [<level>]
  Without argument, this will list all verbosity levels for this source,
  and the current one will be indicated by a star ('*') prepended in front
  of it. With an argument, this will change the verbosity level to the
  specified one.

  Verbosity levels indicate how far the trace decoder should go to provide
  detailed information. It depends on the trace source, since some sources
  will not even provide a specific decoder. Level "quiet" is always
  available and disables any decoding. It can be useful when trying to
  figure out what's happening before trying to understand the details,
  since it will have a very low impact on performance and trace size. When
  no verbosity levels are declared by a source, level "default" is
  available and will cause a decoder to be called when specified in the
  traces. It is an opportunistic decoding. When the source declares some
  verbosity levels, these ones are listed with a description of what they
  correspond to. In this case the trace decoder provided by the source will
  be as accurate as possible based on the information available at the
  trace point. The first level above "quiet" is set by default.

update ssl ocsp-response <certfile>
  Create an OCSP request for the specified <certfile> and send it to the
  OCSP responder whose URI should be specified in the "Authority
  Information Access" section of the certificate. Only the first URI is
  taken into account. The OCSP response that we should receive in return is
  then checked and inserted in the local OCSP response tree. This command
  will only work for certificates that already had a stored OCSP response,
  either because it was provided during init or because it was previously
  set through the "set ssl cert" or "set ssl ocsp-response" commands.

  If the received OCSP response is valid and was properly inserted into the
  local tree, its contents will be displayed on the standard output. The
  format is the same as the one described in "show ssl ocsp-response".


9.4. Master CLI
---------------

The master CLI is a socket bound to the master process in master-worker
mode. This CLI gives access to the unix socket commands in every running or
leaving process and allows a basic supervision of those processes.

The master CLI is configurable only from the haproxy program arguments with
the -S option. This option also takes bind options separated by commas.

Example:

  # haproxy -W -S 127.0.0.1:1234 -f test1.cfg
  # haproxy -Ws -S /tmp/master-socket,uid,1000,gid,1000,mode,600 -f test1.cfg
  # haproxy -W -S /tmp/master-socket,level,user -f test1.cfg


9.4.1. Master CLI commands
--------------------------

@<[!]pid>
  The master CLI uses a special prefix notation to access the multiple
  processes. This notation is easily identifiable as it begins with a '@'.

  A '@' prefix can be followed by a relative process number or by an
  exclamation point and a PID (e.g. @1 or @!1271). A '@' alone can be used
  to specify the master. Leaving processes are only accessible with the
  PID, as relative process numbers are only usable with the current
  processes.

  Examples:

    $ socat /var/run/haproxy-master.sock readline
    prompt
    master> @1 show info; @2 show info
    [...]
    Process_num: 1
    Pid: 1271
    [...]
    Process_num: 2
    Pid: 1272
    [...]
    master>

    $ echo '@!1271 show info; @!1272 show info' | socat /var/run/haproxy-master.sock -
    [...]

  A prefix can also be used as a command on its own, in which case every
  subsequent command is sent to the specified process.

  Examples:

    $ socat /var/run/haproxy-master.sock readline
    prompt
    master> @1
    1271> show info
    [...]
    1271> show stat
    [...]
    1271> @
    master>

    $ echo '@1; show info; show stat; @2; show info; show stat' | socat /var/run/haproxy-master.sock -
    [...]

expert-mode [on|off]
  This command activates the "expert-mode" for every worker accessed from
  the master CLI. Combined with "mcli-debug-mode" it also activates the
  command on the master. The flag "e" is displayed in the master CLI
  prompt.

  See also "expert-mode" in Section 9.3 and "mcli-debug-mode" in 9.4.1.

experimental-mode [on|off]
  This command activates the "experimental-mode" for every worker accessed
  from the master CLI. Combined with "mcli-debug-mode" it also activates
  the command on the master. The flag "x" is displayed in the master CLI
  prompt.

  See also "experimental-mode" in Section 9.3 and "mcli-debug-mode" in
  9.4.1.

mcli-debug-mode [on|off]
  This keyword enables a special mode in the master CLI which makes every
  keyword that was meant for a worker CLI available on the master CLI,
  allowing the master process to be debugged. Once activated, you can list
  the new available keywords with "help". Combined with "experimental-mode"
  or "expert-mode" it enables even more keywords. The flag "d" is displayed
  in the master CLI prompt.

prompt
  When the prompt is enabled (via the "prompt" command), the context the
  CLI is working on is displayed in the prompt. The master is identified by
  the "master" string, and other processes are identified with their PID.
  In case the last reload failed, the master prompt is changed to
  "master[ReloadFailed]>" so that it becomes visible that the process is
  still running on the previous configuration and that the new
  configuration is not operational.

  The prompt of the master CLI is able to display several flags which
  reflect the enabled modes: "d" for mcli-debug-mode, "e" for expert-mode,
  "x" for experimental-mode.

  Example:
    $ socat /var/run/haproxy-master.sock -
    prompt
    master> expert-mode on
    master(e)> experimental-mode on
    master(xe)> mcli-debug-mode on
    master(xed)> @1
    95191(xed)>

reload

You can also reload the HAProxy master process with the "reload" command, which
does the same as a `kill -USR2` on the master process, provided that the user
has at least "operator" or "admin" privileges.

This command performs a synchronous reload: it returns a reload status once the
reload has been performed. Be careful with timeouts if a tool is used to parse
the status, since it is only returned once the configuration is parsed and the
new worker is forked. The "socat" command uses a timeout of 0.5s by default, so
it will quit before showing the message if the reload takes too long. "ncat"
does not have a timeout by default.

When compiled with USE_SHM_OPEN=1, the reload command is also able to dump the
startup-logs of the master.

Example:

    $ echo "reload" | socat -t300 /var/run/haproxy-master.sock stdin
    Success=1
    --
    [NOTICE] (482713) : haproxy version is 2.7-dev7-4827fb-69
    [NOTICE] (482713) : path to executable is ./haproxy
    [WARNING] (482713) : config : 'http-request' rules ignored for proxy 'frt1' as they require HTTP mode.
    [NOTICE] (482713) : New worker (482720) forked
    [NOTICE] (482713) : Loading success.

    $ echo "reload" | socat -t300 /var/run/haproxy-master.sock stdin
    Success=0
    --
    [NOTICE] (482886) : haproxy version is 2.7-dev7-4827fb-69
    [NOTICE] (482886) : path to executable is ./haproxy
    [ALERT] (482886) : config : parsing [test3.cfg:1]: unknown keyword 'Aglobal' out of section.
    [ALERT] (482886) : config : Fatal errors found in configuration.
    [WARNING] (482886) : Loading failure!

    $

The reload command is the last command executed on the master CLI; every other
command after it is ignored. Once the reload command returns its status, it
will close the connection to the CLI.

Note that a reload will close all connections to the master CLI.
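
From a script, the returned status can be acted upon; a minimal sketch (the
helper name is ours, the socket path is the one used in the examples above):

```shell
# Sketch: act on the synchronous reload status (helper name is ours).
reload_ok() { grep -q '^Success=1'; }

# Live use (same socket path as above):
#   echo "reload" | socat -t300 /var/run/haproxy-master.sock stdin \
#       | reload_ok && echo "reloaded" || echo "reload failed"

# Offline check against the sample output shown above:
printf 'Success=1\n--\nstartup logs...\n' | reload_ok && echo "reloaded"
```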

show proc

The master CLI introduces a 'show proc' command to supervise the processes.

Example:

    $ echo 'show proc' | socat /var/run/haproxy-master.sock -
    #<PID>      <type>     <reloads>      <uptime>      <version>
    1162        master     5 [failed: 0]  0d00h02m07s   2.5-dev13
    # workers
    1271        worker     1              0d00h00m00s   2.5-dev13
    # old workers
    1233        worker     3              0d00h00m43s   2.0-dev3-6019f6-289
    # programs
    1244        foo        0              0d00h00m00s   -
    1255        bar        0              0d00h00m00s   -

In this example, the master has been reloaded 5 times but one of the old
workers is still running and survived 3 reloads. You could access the CLI of
this worker to understand what's going on.
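
A monitoring script can watch this output for leftover workers; a sketch (the
awk helper is ours, not a built-in command):

```shell
# Count the entries in the "# old workers" section of "show proc" output.
count_old_workers() {
    awk '/^# old workers/ { f = 1; next }   # start of the section
         /^#/             { f = 0 }         # any other comment ends it
         f && NF          { n++ }           # one line per old worker
         END              { print n + 0 }'
}

# Live use:
#   echo 'show proc' | socat /var/run/haproxy-master.sock - | count_old_workers
```

Fed the sample output above, it prints 1, and a non-zero result for a long time
after a reload is a hint that an old worker is stuck on lingering connections.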

show startup-logs

HAProxy needs to be compiled with USE_SHM_OPEN=1 for this command to work
correctly on the master CLI, otherwise not all messages will be visible.

Like its counterpart on the stats socket, this command is able to show the
startup messages of HAProxy. However it does not dump the startup messages
of the current worker, but the startup messages of the latest startup or
reload, which means it is able to dump the parsing messages of a failed
reload.

Those messages are also dumped with the "reload" command.

10. Tricks for easier configuration management
----------------------------------------------

It is very common that two HAProxy nodes constituting a cluster share exactly
the same configuration modulo a few addresses. Instead of having to maintain a
duplicate configuration for each node, which will inevitably diverge, it is
possible to include environment variables in the configuration. Thus multiple
configurations may share the exact same file with only a few different system
wide environment variables. This started in version 1.5 where only addresses
were allowed to include environment variables, and 1.6 goes further by
supporting environment variables everywhere. The syntax is the same as in the
UNIX shell: a variable starts with a dollar sign ('$'), followed by an opening
curly brace ('{'), then the variable name followed by the closing brace ('}').
Except for addresses, environment variables are only interpreted in arguments
surrounded with double quotes (this was necessary not to break existing setups
using regular expressions involving the dollar symbol).

Environment variables also make it convenient to write configurations which
are expected to work on various sites where only the address changes. They
also make it possible to remove passwords from some configurations. In the
example below, the file "site1.env" is sourced by the init script upon
startup :

    $ cat site1.env
    LISTEN=192.168.1.1
    CACHE_PFX=192.168.11
    SERVER_PFX=192.168.22
    LOGGER=192.168.33.1
    STATSLP=admin:pa$$w0rd
    ABUSERS=/etc/haproxy/abuse.lst
    TIMEOUT=10s

    $ cat haproxy.cfg
    global
        log "${LOGGER}:514" local0

    defaults
        mode http
        timeout client "${TIMEOUT}"
        timeout server "${TIMEOUT}"
        timeout connect 5s

    frontend public
        bind "${LISTEN}:80"
        http-request reject if { src -f "${ABUSERS}" }
        stats uri /stats
        stats auth "${STATSLP}"
        use_backend cache if { path_end .jpg .css .ico }
        default_backend server

    backend cache
        server cache1 "${CACHE_PFX}.1:18080" check
        server cache2 "${CACHE_PFX}.2:18080" check

    backend server
        server cache1 "${SERVER_PFX}.1:8080" check
        server cache2 "${SERVER_PFX}.2:8080" check
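
The effect of sourcing different environment files can be illustrated from the
shell; "site2.env" below is a hypothetical second site, and the files are
written to the current directory for the demonstration:

```shell
# Two sites, one configuration: only the sourced environment differs.
printf 'LISTEN=192.168.1.1\n' > site1.env
printf 'LISTEN=10.0.0.1\n'    > site2.env      # hypothetical second site

for site in site1 site2; do
    # Subshell so each site's variables stay isolated.
    ( . "./$site.env"; echo "$site binds to ${LISTEN}:80" )
done
# prints:
#   site1 binds to 192.168.1.1:80
#   site2 binds to 10.0.0.1:80
```

HAProxy performs the same substitution itself when it parses `bind
"${LISTEN}:80"`, so the haproxy.cfg file never needs to change between sites.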

11. Well-known traps to avoid
-----------------------------

Once in a while, someone reports that after a system reboot, the haproxy
service wasn't started, and that once they start it by hand it works. Most
often, these people are running a clustered IP address mechanism such as
keepalived, to assign the service IP address to the master node only, and while
it used to work when they used to bind haproxy to address 0.0.0.0, it stopped
working after they bound it to the virtual IP address. What happens here is
that when the service starts, the virtual IP address is not yet owned by the
local node, so when HAProxy wants to bind to it, the system rejects this
because it is not a local IP address. The fix doesn't consist in delaying the
haproxy service startup (since it wouldn't stand a restart), but in properly
configuring the system to allow binding to non-local addresses. This is easily
done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This is
also needed in order to transparently intercept the IP traffic that passes
through HAProxy for a specific target address.
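
On Linux the sysctl can be set immediately and made persistent across reboots
like this (the drop-in file name is an example, any name under sysctl.d works):

```
# apply immediately (as root):
#   sysctl -w net.ipv4.ip_nonlocal_bind=1

# persist across reboots, e.g. in /etc/sysctl.d/90-haproxy.conf:
net.ipv4.ip_nonlocal_bind = 1
```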

Multi-process configurations involving source port ranges may apparently seem
to work but they will cause some random failures under high loads because more
than one process may try to use the same source port to connect to the same
server, which is not possible. The system will report an error and a retry will
happen, picking another port. A high value in the "retries" parameter may hide
the effect to a certain extent but this also comes with increased CPU usage and
processing time. Logs will also report a certain number of retries. For this
reason, port ranges should be avoided in multi-process configurations.

Since HAProxy uses SO_REUSEPORT and supports having multiple independent
processes bound to the same IP:port, during troubleshooting it can happen that
an old process was not stopped before a new one was started. This provides
absurd test results which tend to indicate that any change to the configuration
is ignored. The reason is that even though the new process was started with
the new configuration, the old one also gets some incoming connections and
processes them, returning unexpected results. When in doubt, just stop the new
process and try again. If it still works, it very likely means that an old
process remains alive and has to be stopped. Linux's "netstat -lntp" is of good
help here.
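
The same check can be scripted against "ss -lntp" (the modern replacement for
netstat); the helper below is ours:

```shell
# List the PIDs listening on a given port from "ss -lntp" output; more than
# one distinct haproxy PID usually means an old process survived a restart.
pids_on_port() {
    awk -v p=":$1" '$4 ~ p"$" {
        while (match($0, /pid=[0-9]+/)) {
            print substr($0, RSTART + 4, RLENGTH - 4)
            $0 = substr($0, RSTART + RLENGTH)
        }
    }'
}

# Live use (root is needed to see process names):
#   ss -lntp | pids_on_port 80
```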

When adding entries to an ACL from the command line (eg: when blacklisting a
source address), it is important to keep in mind that these entries are not
synchronized to the file and that if someone reloads the configuration, these
updates will be lost. While this is often the desired effect (for blacklisting)
it may not necessarily match expectations when the change was made as a fix for
a problem. See the "add acl" action of the CLI interface.
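
To make such a fix survive reloads, the runtime update can be paired with an
edit of the file itself; a sketch (the helper name and paths are examples, and
the ACL must be referenced by file name in the configuration):

```shell
# Build the runtime CLI command for banning an address.
ban_cmd() { printf 'add acl %s %s\n' "$2" "$1"; }

# Live use: apply at runtime AND persist in the file loaded at startup:
#   ban_cmd 198.51.100.42 /etc/haproxy/abuse.lst | socat /var/run/haproxy.stat -
#   echo 198.51.100.42 >> /etc/haproxy/abuse.lst
ban_cmd 198.51.100.42 /etc/haproxy/abuse.lst
```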

12. Debugging and performance issues
------------------------------------

When HAProxy is started with the "-d" option, it will stay in the foreground
and will print one line per event, such as an incoming connection, the end of a
connection, and for each request or response header line seen. This debug
output is emitted before the contents are processed, so it does not reflect
local modifications. Its main use is to show the request and response without
having to run a network sniffer. The output is less readable when multiple
connections are handled in parallel, though the "debug2ansi" and "debug2html"
scripts found in the examples/ directory definitely help here by coloring the
output.

If a request or response is rejected because HAProxy finds it is malformed, the
best thing to do is to connect to the CLI and issue "show errors", which will
report the last captured faulty request and response for each frontend and
backend, with all the necessary information to indicate precisely the first
character of the input stream that was rejected. This is sometimes needed to
prove to customers or to developers that a bug is present in their code. In
this case it is often possible to relax the checks (but still keep the
captures) using "option accept-invalid-http-request" or its equivalent for
responses coming from the server, "option accept-invalid-http-response". Please
see the configuration manual for more details.

Example :

    > show errors
    Total events captured on [13/Oct/2015:13:43:47.169] : 1

    [13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
      backend <NONE> (#-1), server <NONE> (#-1), event #0
      src 127.0.0.1:51981, session #0, session flags 0x00000080
      HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
      HTTP chunk len 0 bytes, HTTP body len 0 bytes
      buffer flags 0x00808002, out 0 bytes, total 31 bytes
      pending 31 bytes, wrapping at 8040, error at position 13:

      00000  GET /invalid request HTTP/1.1\r\n

The output of "show info" on the CLI provides a number of useful pieces of
information, such as the maximum connection rate ever reached, the maximum SSL
key rate ever reached, and in general all information which can help to
explain temporary issues regarding CPU or memory usage. Example :

    > show info
    Name: HAProxy
    Version: 1.6-dev7-e32d18-17
    Release_date: 2015/10/12
    Nbproc: 1
    Process_num: 1
    Pid: 7949
    Uptime: 0d 0h02m39s
    Uptime_sec: 159
    Memmax_MB: 0
    Ulimit-n: 120032
    Maxsock: 120032
    Maxconn: 60000
    Hard_maxconn: 60000
    CurrConns: 0
    CumConns: 3
    CumReq: 3
    MaxSslConns: 0
    CurrSslConns: 0
    CumSslConns: 0
    Maxpipes: 0
    PipesUsed: 0
    PipesFree: 0
    ConnRate: 0
    ConnRateLimit: 0
    MaxConnRate: 1
    SessRate: 0
    SessRateLimit: 0
    MaxSessRate: 1
    SslRate: 0
    SslRateLimit: 0
    MaxSslRate: 0
    SslFrontendKeyRate: 0
    SslFrontendMaxKeyRate: 0
    SslFrontendSessionReuse_pct: 0
    SslBackendKeyRate: 0
    SslBackendMaxKeyRate: 0
    SslCacheLookups: 0
    SslCacheMisses: 0
    CompressBpsIn: 0
    CompressBpsOut: 0
    CompressBpsRateLim: 0
    ZlibMemUsage: 0
    MaxZlibMemUsage: 0
    Tasks: 5
    Run_queue: 1
    Idle_pct: 100
    node: wtap
    description:
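
When tracking these counters from a script, a small extractor is convenient
(the helper name is ours; the stats socket path is site-specific):

```shell
# Extract one field from "show info" output by its exact name.
get_info() { awk -F': ' -v k="$1" '$1 == k { print $2 }'; }

# Live use:
#   echo "show info" | socat /var/run/haproxy.stat - | get_info MaxConnRate
printf 'Maxconn: 60000\nMaxConnRate: 1\n' | get_info MaxConnRate   # prints 1
```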

When an issue seems to randomly appear on a new version of HAProxy (eg: every
second request is aborted, occasional crash, etc), it is worth trying to enable
memory poisoning so that each call to malloc() is immediately followed by the
filling of the memory area with a configurable byte. By default this byte is
0x50 (ASCII for 'P'), but any other byte can be used, including zero (which
will have the same effect as a calloc() and which may make issues disappear).
Memory poisoning is enabled on the command line using the "-dM" option. It
slightly hurts performance and is not recommended for use in production. If
an issue happens all the time with it or never happens when poisoning uses
byte zero, it clearly means you've found a bug and you definitely need to
report it. Otherwise if there's no clear change, the problem is likely not
related to memory allocation.

When debugging some latency issues, it is important to use both strace and
tcpdump on the local machine, and another tcpdump on the remote system. The
reason for this is that there are delays everywhere in the processing chain and
it is important to know which one is causing latency to know where to act. In
practice, the local tcpdump will indicate when the input data come in. Strace
will indicate when haproxy receives these data (using recv/recvfrom). Warning,
openssl uses read()/write() syscalls instead of recv()/send(). Strace will also
show when haproxy sends the data, and tcpdump will show when the system sends
these data to the interface. Then the external tcpdump will show when the data
sent are really received (since the local one only shows when the packets are
queued). The benefit of sniffing on the local system is that strace and tcpdump
will use the same reference clock. Strace should be used with "-tts200" to get
complete timestamps and report large enough chunks of data to read them.
Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence
numbers and complete timestamps.
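
Assembled into ready-to-adapt command lines (the interface, port, output file
names and the helper itself are examples; run the printed commands as root, and
the tcpdump one again on the remote system):

```shell
# Print capture commands using the flags discussed above.
capture_cmds() {
    pid=$1; iface=$2; port=$3
    echo "strace -tts200 -p $pid -o haproxy.strace"
    echo "tcpdump -i $iface -nvvttSs0 -w local.pcap tcp port $port"
}

capture_cmds "$(pidof haproxy 2>/dev/null || echo '<pid>')" any 80
```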

In practice, received data are almost always immediately received by haproxy
(unless the machine has a saturated CPU or these data are invalid and not
delivered). If these data are received but not sent, it generally is because
the output buffer is saturated (ie: the recipient doesn't consume the data
fast enough). This can be confirmed by seeing that the polling doesn't notify
of the ability to write on the output file descriptor for some time (it's
often easier to spot in the strace output when the data finally leave, and
then roll back to see when the write event was notified). It generally matches
an ACK received from the recipient, and detected by tcpdump. Once the data are
sent, they may spend some time in the system doing nothing. Here again, the
TCP congestion window may be limited and not allow these data to leave,
waiting for an ACK to open the window. If the traffic is idle and the data
take 40 ms or 200 ms to leave, it's a different issue (which is not an issue):
it's the fact that the Nagle algorithm prevents empty packets from leaving
immediately, in the hope that they will be merged with subsequent data.
HAProxy automatically disables Nagle in pure TCP mode and in tunnels. However
it definitely remains enabled when forwarding an HTTP body (and this
contributes to the performance improvement there by reducing the number of
packets). Some non-compliant HTTP applications may be sensitive to the latency
when delivering incomplete HTTP response messages. In this case you will have
to enable "option http-no-delay" to disable Nagle in order to work around
their design, keeping in mind that any other proxy in the chain may similarly
be impacted. If tcpdump reports that data leave immediately but the other end
doesn't see them quickly, it can mean there is a congested WAN link, a
congested LAN with flow control enabled preventing the data from leaving, or
more commonly that HAProxy is in fact running in a virtual machine and that
for whatever reason the hypervisor has decided that the data didn't need to be
sent immediately. In virtualized environments, latency issues are almost
always caused by the virtualization layer, so in order to save time, it's
worth first comparing tcpdump in the VM and on the external components. Any
difference has to be credited to the hypervisor and its accompanying drivers.

When some TCP SACK segments are seen in tcpdump traces (using -vv), it always
means that the side sending them has got the proof of a lost packet. While not
seeing them doesn't mean there are no losses, seeing them definitely means the
network is lossy. Losses are normal on a network, but at a rate where SACKs are
not noticeable to the naked eye. If they appear a lot in the traces, it is
worth investigating exactly what happens and where the packets are lost. HTTP
doesn't cope well with TCP losses, which introduce huge latencies.

The "netstat -i" command will report statistics per interface. An interface
where the Rx-Ovr counter grows indicates that the system doesn't have enough
resources to receive all incoming packets and that they're lost before being
processed by the network driver. Rx-Drp indicates that some received packets
were lost in the network stack because the application doesn't process them
fast enough. This can happen during some attacks as well. Tx-Drp means that
the output queues were full and packets had to be dropped. When using TCP it
should be very rare, but will possibly indicate a saturated outgoing link.
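
A quick way to watch these counters from a script (the helper is ours; the
column layout assumed is that of Linux net-tools, where Rx-Drp is the 5th
column):

```shell
# Flag interfaces with a non-zero Rx-Drp counter in "netstat -i" output.
rx_drops() { awk 'NR > 2 && $5 + 0 > 0 { print $1, $5 }'; }

# Live use:  netstat -i | rx_drops
```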

13. Security considerations
---------------------------

HAProxy is designed to run with very limited privileges. The standard way to
use it is to isolate it into a chroot jail and to drop its privileges to a
non-root user without any permissions inside this jail so that if any future
vulnerability were to be discovered, its compromise would not affect the rest
of the system.

In order to perform a chroot, it first needs to be started as a root user. It
is pointless to build hand-made chroots to start the process there, as these
are painful to build, are never properly maintained and always contain way
more bugs than the main file-system. And in case of compromise, the intruder
can use the purposely built file-system. Unfortunately many administrators
confuse "start as root" and "run as root", resulting in the uid change being
done prior to starting haproxy, and reducing the effective security
restrictions.

HAProxy will need to be started as root in order to :
  - adjust the file descriptor limits
  - bind to privileged port numbers
  - bind to a specific network interface
  - transparently listen to a foreign address
  - isolate itself inside the chroot jail
  - drop to another non-privileged UID

HAProxy may require to be run as root in order to :
  - bind to an interface for outgoing connections
  - bind to privileged source ports for outgoing connections
  - transparently bind to a foreign address for outgoing connections

Most users will never need the "run as root" case. But the "start as root"
covers most usages.

A safe configuration will have :

  - a chroot statement pointing to an empty location without any access
    permissions. This can be prepared this way on the UNIX command line :

      # mkdir /var/empty && chmod 0 /var/empty || echo "Failed"

    and referenced like this in the HAProxy configuration's global section :

      chroot /var/empty

  - both uid/user and gid/group statements in the global section :

      user haproxy
      group haproxy

  - a stats socket whose mode, uid and gid are set to match the user and/or
    group allowed to access the CLI so that nobody else may access it :

      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
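
The chroot target can be sanity-checked from a script before starting the
service (the helper name and the path are examples; it should also be mode 0
and root-owned, as prepared above):

```shell
# Verify that a chroot target exists and contains nothing.
check_jail() {
    [ -d "$1" ] || { echo "missing"; return 1; }
    [ -z "$(ls -A "$1" 2>/dev/null)" ] || { echo "not empty"; return 1; }
    echo "ok"
}

mkdir -p /tmp/empty-jail && chmod 0 /tmp/empty-jail
check_jail /tmp/empty-jail    # prints "ok"
```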