I found on an (old) AIX 5.1 machine that stdint.h didn't exist, while
inttypes.h, which is expected to include it, does exist and provides the
desired functionality.
As explained here, stdint.h being essentially a subset of inttypes.h meant
for freestanding environments, it's probably always OK to switch to
inttypes.h instead:
https://pubs.opengroup.org/onlinepubs/009696799/basedefs/stdint.h.html
Also, it's even clearer in the autoconf documentation:
https://www.gnu.org/software/autoconf/manual/autoconf-2.61/html_node/Header-Portability.html
"The C99 standard says that inttypes.h includes stdint.h, so there's
no need to include stdint.h separately in a standard environment.
Some implementations have inttypes.h but not stdint.h (e.g., Solaris
7), but we don't know of any implementation that has stdint.h but not
inttypes.h"
Since the EOI flag was added on channels, some hidden bugs in the Prometheus
exporter now lead to errors. The visible effect is that responses are
truncated.
So first of all, channel_add_input() must be called when the response headers
and the EOM block are added, in order to correctly update the response channel
(especially its to_forward value). Then the request must really be fully
consumed. And finally, the return statement in the switch has been replaced by
a break: it was totally wrong to skip the end of the function in the
PROMEX_DONE and PROMEX_ERROR states. (Note that PROMEX_ERROR was never used,
so it was replaced by PROMEX_END just to ease reading the code.)
No need to backport this patch; the Prometheus exporter does not exist in
earlier versions.
It has been developed as a service applet. Internally, it is called
"promex". To build HAProxy with the promex service, you should use the
Makefile variable "EXTRA_OBJS". To be used, it must be enabled in the
configuration with an "http-request" rule, and the corresponding HTTP proxy
must enable HTX support. For instance:
frontend test
mode http
...
option http-use-htx
http-request use-service prometheus-exporter if { path /metrics }
...
See contrib/prometheus-exporter/README for details.
These flags haven't been used for a while. SF_TUNNEL was reintroduced
by commit d62b98c6e ("MINOR: stream: don't set backend's nor response
analysers on SF_TUNNEL") to handle the two-level streams needed to
deal with the first model for H2, and was not removed after this model
was abandoned. SF_INITIALIZED was only set. SF_CONN_TAR was never
referenced at all.
This generates the tables and indexes which will be used by the HPACK
encoder. The headers are sorted by length, then by statistical frequency,
then by direction (preference for responses), then by name, then by index.
The purpose is to speed up their lookup.
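As an illustration of that ordering (a hypothetical descriptor and comparator,
not the generator's actual code), a qsort() callback could look like this:

#include <stdlib.h>
#include <string.h>

/* hypothetical descriptor for one static-table entry */
struct hdr {
    const char *name;
    int len;   /* header name length */
    int freq;  /* statistical frequency, higher = more common */
    int dir;   /* 1 = response header (preferred), 0 = request */
    int idx;   /* HPACK static table index */
};

static int cmp_hdr(const void *a, const void *b)
{
    const struct hdr *ha = a, *hb = b;
    int c;

    if (ha->len != hb->len)   return ha->len - hb->len;   /* by length */
    if (ha->freq != hb->freq) return hb->freq - ha->freq; /* most frequent first */
    if (ha->dir != hb->dir)   return hb->dir - ha->dir;   /* responses first */
    if ((c = strcmp(ha->name, hb->name)) != 0)
        return c;                                         /* by name */
    return ha->idx - hb->idx;                             /* by index */
}

/* usage: qsort(table, nb_entries, sizeof(*table), cmp_hdr); */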
The SI_FL_WANT_PUT flag is used in an awkward way, sometimes it's
set by the stream-interface to mean "I have something to deliver",
sometimes it's cleared by the channel to say "I don't want you to
send what you have", and it has to be set back once CF_DONT_READ
is cleared. This will have to be split between SI_FL_RX_WAIT_EP
and SI_FL_RXBLK_CHAN. This patch only replaces all uses of the
flag with its natural (but negated) replacement SI_FL_RX_WAIT_EP.
The code is expected to be strictly equivalent. The now unused flag
was completely removed.
The plan is to have the following flags to describe why a stream interface
doesn't produce data :
- SI_FL_RXBLK_CHAN : the channel doesn't want it to receive
- SI_FL_RXBLK_BUFF : waiting for a buffer allocation to complete
- SI_FL_RXBLK_ROOM : more room is required in the channel to receive
- SI_FL_RXBLK_SHUT : input now closed, nothing new will come
- SI_FL_RX_WAIT_EP : waiting for the endpoint to produce more data
Applets like the CLI, which consume complete commands at once and produce
large chunks of responses, will for example be able to stop being woken up
by clearing SI_FL_WANT_GET and setting SI_FL_RXBLK_ROOM when the rx buffer
is full. Once called, they will unblock WANT_GET. The flags were moved
together in readable form with the Rx bits using 2 hex digits and still
have some room to do a similar operation on the Tx path later, with the
WAIT_EP flag being represented alone on a digit.
This flag is not enough to describe all blocking situations, as can be
seen in each case where we remove it. The muxes have taught us that using
multiple blocking flags in parallel will be much easier, so let's start to do
this now. This patch only renames this flag in order to make the next changes
more readable.
Commit 53216e7db ("MEDIUM: connections: Don't directly mess with the
polling from the upper layers.") removed the CS_FL_DATA_RD_ENA and
CS_FL_DATA_WR_ENA flags without updating flags.c, thus breaking the
build. This patch also adds the flag CS_FL_NOT_FIRST which was brought
by commit 08088e77c.
This fixes a typo in the README of the peers section of this subsystem
and 2 typos in code comments. Grouped together as a cleanup to avoid too
many one-character patches.
The behaviour of the flag CF_WRITE_PARTIAL was modified by commit
95fad5ba4 ("BUG/MAJOR: stream-int: don't re-arm recv if send fails") due
to a situation where it could trigger an immediate wake up of the other
side, both acting in loops via the FD cache. This loss made it necessary
to introduce CF_WRITE_EVENT in commit c5a9d5bf to replace it, but
both flags express more or less the same thing and this distinction
creates a lot of confusion and complexity in the code.
Since the FD cache now acts via tasklets, the issue worked around in the
first patch no longer exists, so it's more than time to kill this hack
and to restore CF_WRITE_PARTIAL's semantics (i.e.: there has been some
write activity since we last left process_stream).
This patch mostly reverts the two commits above. Only the part making
use of CF_WROTE_DATA instead of CF_WRITE_PARTIAL to detect the loss of
data upon connection setup was kept because it's more accurate and
better suited.
Now all the code used to manipulate chunks uses a struct buffer instead.
The functions are still called "chunk*", and some of them will progressively
move to the generic buffer handling code as they are cleaned up.
Chunks are only a subset of a buffer (a non-wrapping version with no head
offset). Despite this we still carry a lot of duplicated code between
buffers and chunks. Replacing chunks with buffers would significantly
reduce the maintenance efforts. This first patch renames the chunk's
fields to match the name and types used by struct buffers, with the goal
of isolating the code changes from the declaration changes.
Most of the changes were made with spatch using this coccinelle script:
@rule_d1@
typedef chunk;
struct chunk chunk;
@@
- chunk.str
+ chunk.area
@rule_d2@
typedef chunk;
struct chunk chunk;
@@
- chunk.len
+ chunk.data
@rule_i1@
typedef chunk;
struct chunk *chunk;
@@
- chunk->str
+ chunk->area
@rule_i2@
typedef chunk;
struct chunk *chunk;
@@
- chunk->len
+ chunk->data
Some minor updates to 3 HTTP functions had to be performed so that they take
size_t instead of int, in order to match the unsigned length here.
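For reference, the field mapping is roughly the following (simplified
declarations, not the exact ones from the tree):

/* before: the old chunk layout */
struct chunk {
    char *str;   /* pointer to the data */
    int   size;  /* allocated size      */
    int   len;   /* amount of data used */
};

/* after: chunks now use the struct buffer field names */
struct buffer {
    size_t size;  /* allocated size                       */
    char  *area;  /* pointer to the data (was chunk->str) */
    size_t data;  /* amount of data used (was chunk->len) */
    size_t head;  /* start offset, always 0 for a chunk   */
};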
The master process will exit with the status of the last worker. When
the worker is killed with SIGTERM, it is expected to get 143 as an
exit status. Therefore, we consider this exit status as normal from a
systemd point of view. If it happens when not stopping, the systemd
unit is configured to always restart, so it has no adverse effect.
This has mostly a cosmetic effect. Without the patch, stopping HAProxy
leads to the following status:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-06-22 20:35:42 CEST; 8min ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 32715 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS (code=exited, status=143)
Process: 32714 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
Main PID: 32715 (code=exited, status=143)
After the patch:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
When the connection is closed by HAProxy, the status code provided in the
DISCONNECT frame is lost. By retransmitting it in the agent's reply, we are sure
to have it in the SPOE logs.
This patch may be backported to 1.8.
Commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order")
introduced an incompatibility with older agents, so the major version of the
SPOP is increased to make the situation unambiguous. And because the protocol
was buggy before the fix, support for version 1.0 is removed to make sure
buggy agents are no longer accepted.
The agents in the contrib folder (spoa_example, modsecurity and mod_defender)
are also updated to announce SPOP version 2.0.
To be clear, with this patch, connections to agents announcing SPOP version
1.0 will be rejected.
This patch must be backported to 1.8.
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the modsecurity implementation in the way flags are encoded and decoded. They
must be exchanged using network byte order and not host order.
Be careful though, this patch breaks compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the mod_defender implementation in the way flags are encoded and decoded. They
must be exchanged using network byte order and not host order.
Be careful though, this patch breaks compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
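As a reminder of what this means in practice (an illustrative sketch, not the
agents' actual code; the 32-bit flags field is an assumption based on the
description above):

#include <arpa/inet.h>   /* htonl(), ntohl() */
#include <stdint.h>
#include <string.h>

/* Encode a 32-bit flags field into a frame buffer in network byte order. */
static void encode_flags(unsigned char *buf, uint32_t flags)
{
    uint32_t netval = htonl(flags);       /* host -> network (big endian) */
    memcpy(buf, &netval, sizeof(netval));
}

/* Decode it back on the receiving side. */
static uint32_t decode_flags(const unsigned char *buf)
{
    uint32_t netval;
    memcpy(&netval, buf, sizeof(netval));
    return ntohl(netval);                 /* network -> host */
}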
Buf is unsigned, so nbargs will be negative for more than 127 args.
Note that I can't test this bug because I can't put enough args on the
configuration line; it was only detected by reading the code.
[wt: this can be backported to 1.8 & 1.7]
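A minimal sketch of the overflow, assuming the argument count is read into a
plain (possibly signed) char; the values are hypothetical:

#include <stdio.h>

int main(void)
{
    unsigned char buf[] = { 200 };  /* encoded argument count > 127 */

    char nbargs_bad = buf[0];       /* implementation-defined when char is
                                       signed; typically ends up as -56 */
    int  nbargs_ok  = buf[0];       /* unsigned source kept intact: 200 */

    printf("bad=%d ok=%d\n", nbargs_bad, nbargs_ok);
    return 0;
}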
While the haproxy workers usually run chrooted, the master process is not.
This patch is a pretty safe defense-in-depth measure to ensure haproxy
cannot touch sensitive parts of the file system.
ProtectSystem takes non-boolean arguments in newer systemd versions,
but setting those would leave older systems such as Ubuntu Xenial
unprotected. Distro maintainers and system administrators could
adapt the ProtectSystem value to the systemd version they ship.
Commit f4cfcf9 ("MINOR: debug/flags: Add missing flags") added a number
of missing flags but a few of them were incorrect, hiding real values.
This can be backported to 1.8.
There were several unused variables in halog.c that each caused a
compiler warning [-Wunused-but-set-variable]. This patch simply
removes the declaration of said variables and any instance where the
unused variable was assigned a value.
The declaration of main() in iprange.c did not specify a type, causing
a compiler warning [-Wimplicit-int]. This patch simply declares main()
to be type 'int' and calls exit(0) at the end of the function.
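In other words, the fix amounts to something like this (a sketch, not the
actual iprange.c code):

#include <stdlib.h>

/* previously declared as "main()" with no type, triggering -Wimplicit-int */
int main(int argc, char **argv)
{
    (void)argc;
    (void)argv;
    /* ... existing iprange logic ... */
    exit(0);
}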
This variable was used by the wrapper which was removed in
a6cfa9098e. The correct way to do a seamless reload is now to enable
"expose-fd listeners" on the stats socket.
Being an external agent, it's confusing that it uses haproxy's internal
types and it seems to have encouraged other implementations to do so.
Let's completely remove any reference to struct sample and use the
native DATA types instead of converting to and from haproxy's sample
types.
This patch adds support for `Type=notify` to the systemd unit.
Supporting `Type=notify` improves both starting and reloading of the unit,
because systemd is notified when the action has completed.
See this quote from `systemd.service(5)`:
> Note however that reloading a daemon by sending a signal (as with the
> example line above) is usually not a good choice, because this is an
> asynchronous operation and hence not suitable to order reloads of
> multiple services against each other. It is strongly recommended to
> set ExecReload= to a command that not only triggers a configuration
> reload of the daemon, but also synchronously waits for it to complete.
By making systemd aware of a reload in progress it is able to wait until
the reload actually succeeded.
This patch introduces both a new `USE_SYSTEMD` build option, which controls
inclusion of the sd-daemon library, and a `-Ws` runtime option, which runs
haproxy in master-worker mode with systemd support.
When haproxy is running in master-worker mode with systemd support, it will
send status messages to systemd using `sd_notify(3)` in the following cases
(see the sketch after this list):
- The master process forked off the worker processes (READY=1)
- The master process entered the `mworker_reload()` function (RELOADING=1)
- The master process received the SIGUSR1 or SIGTERM signal (STOPPING=1)
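For illustration, those calls boil down to something like this (a simplified
sketch using the public sd-daemon API, not the exact haproxy code):

#include <systemd/sd-daemon.h>   /* requires USE_SYSTEMD / libsystemd */

static void notify_ready(void)
{
    /* workers have been forked, the service is operational */
    sd_notify(0, "READY=1");
}

static void notify_reloading(void)
{
    /* entering mworker_reload(): tell systemd a reload is in progress */
    sd_notify(0, "RELOADING=1");
}

static void notify_stopping(void)
{
    /* SIGUSR1/SIGTERM received: shutdown has started */
    sd_notify(0, "STOPPING=1");
}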
Change the unit file to specify `Type=notify` and replace master-worker
mode (`-W`) with master-worker mode with systemd support (`-Ws`).
Future evolutions of this feature could include making use of the `STATUS`
feature of `sd_notify()` to send information about the number of active
connections to systemd. This would require bidirectional communication
between the master and the workers and thus is left for future work.