In HTTP applets, the request's EOM was removed like any other block when receive
or get_line was called from Lua scripts. So it was impossible to stop receiving
data on successive calls once the whole request body had already been consumed,
blocking the applet indefinitely.
Now we never consume the EOM block, so it is easy to interrupt receive/get_line
calls. In all cases, this block is consumed when the applet ends.
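To illustrate the idea with a standalone sketch (simplified block types, not the
real hlua/HTX code): DATA blocks are consumed as they are copied out, but the
EOM block is left in place so that a subsequent receive() returns nothing and
the script can stop reading; it is only dropped when the applet terminates.

    #include <stddef.h>
    #include <string.h>

    enum blk_type { BLK_DATA, BLK_EOM };

    struct blk {
        enum blk_type type;
        const char *ptr;
        size_t len;
    };

    struct msg {
        struct blk *blks;
        size_t head, tail;   /* blocks consumed up to head, produced up to tail */
    };

    /* Copy as much DATA as fits into <out>; stop on EOM without removing it. */
    size_t applet_receive(struct msg *m, char *out, size_t room)
    {
        size_t copied = 0;

        while (m->head < m->tail) {
            struct blk *b = &m->blks[m->head];

            if (b->type == BLK_EOM)
                break;                    /* never consumed here */
            if (b->len > room - copied)
                break;                    /* not enough room, retry later */
            memcpy(out + copied, b->ptr, b->len);
            copied += b->len;
            m->head++;                    /* DATA block consumed */
        }
        return copied;
    }

    /* Only when the applet terminates is the EOM block finally dropped. */
    void applet_end(struct msg *m)
    {
        if (m->head < m->tail && m->blks[m->head].type == BLK_EOM)
            m->head++;
    }
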
Before setting the infinite forward, we first forward all remaining input data
from the channel. Of course for HTX streams, this must be done using the amount
of data in the HTX message, not in the channel (which always appears full
because of the HTX).
When producing an HTX message, we can't rely on the next-level H1 parser
to check and deduplicate the content-length header, so we have to do it
while parsing a message. The algorithm is exactly the same as the one used for
H1 messages.
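For reference, a standalone sketch of this kind of check (same spirit as the H1
rule, not the actual parser): every element of a possibly comma-separated
Content-Length value must be made of digits only, and all elements must agree.

    #include <stdio.h>

    /* Returns 0 and sets <result> if the value is acceptable, -1 otherwise. */
    int check_content_length(const char *value, unsigned long long *result)
    {
        unsigned long long cl = 0;
        int seen = 0;
        const char *p = value;

        while (*p) {
            unsigned long long v = 0;
            int digits = 0;

            while (*p == ' ' || *p == '\t')
                p++;
            while (*p >= '0' && *p <= '9') {
                v = v * 10 + (unsigned)(*p - '0');
                digits++;
                p++;
            }
            while (*p == ' ' || *p == '\t')
                p++;
            if (!digits)
                return -1;              /* empty or non-numeric element */
            if (seen && v != cl)
                return -1;              /* conflicting values */
            cl = v;
            seen = 1;
            if (*p == ',')
                p++;
            else if (*p)
                return -1;              /* unexpected character */
        }
        if (!seen)
            return -1;
        *result = cl;
        return 0;
    }

    int main(void)
    {
        unsigned long long cl;

        printf("%d\n", check_content_length("1000, 1000", &cl)); /*  0: ok  */
        printf("%d\n", check_content_length("1000, 2000", &cl)); /* -1: bad */
        return 0;
    }
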
There is an issue with some medium sized transfers occasionally not
shutting down at the end. Olivier tracked this to being caused by a
missing wakeup of process_stream(). What happens is that one of the
analysers sets CF_WAKE_WRITE to be woken up at the end of the transfer
to take note of the end of transaction, but a failed si_cs_send() at
the end of process_stream causes the call to be attempted again, with
CF_WAKE_WRITE lost. Then stream_int_notify() doesn't find any valid
condition to wake up process_stream(), and the stream stays there,
idling till the timeout.
In fact, CF_WAKE_WRITE has been designed for calling the analysers
to complete an operation without closing (keep-alive HTTP transfer
for instance). It only applies once the buffer is empty and there
is nothing left to be forwarded. In case the channel is closed, the
wakeup is already granted. So what we need here is to make sure to
wake process_stream() up in case the channel will not be closed and
it doesn't have anything left to be transferred. This is detected by
the lack of CF_AUTO_CLOSE and the emptiness of the buffer + to_forward
after a write activity. So now we take care of always waking the stream
up on end of transfers even if the analysers didn't subscribe to this
or if their subscription was lost.
CF_WAKE_WRITE should probably be killed now, though this first requires
careful inspection.
No backport is needed.
Cc: Olivier Houchard <ohouchard@haproxy.com>
Cc: Christopher Faulet <cfaulet@haproxy.com>
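Expressed as a standalone sketch (illustrative stand-ins and made-up flag
values, not the real stream-interface code), the wake-up decision after a write
activity now amounts to this:

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified channel model with only the fields relevant here. */
    #define CF_AUTO_CLOSE   0x00000001u
    #define CF_WAKE_WRITE   0x00000002u

    struct channel {
        uint32_t flags;
        uint32_t out;           /* bytes still queued for the transport layer */
        uint64_t to_forward;    /* bytes still expected to be forwarded */
    };

    /* Decide whether a write event must wake process_stream(). The analyser's
     * subscription (CF_WAKE_WRITE) may have been lost after a failed send, so
     * the end-of-transfer case is detected explicitly: no automatic close will
     * provide the wakeup, and there is nothing left to push nor to forward.
     */
    bool must_wake_after_write(const struct channel *oc)
    {
        if (oc->flags & CF_WAKE_WRITE)
            return true;        /* explicit subscription */

        if (!(oc->flags & CF_AUTO_CLOSE) && !oc->out && !oc->to_forward)
            return true;        /* end of transfer with no close coming */

        return false;
    }
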
The error captures provided in HTX by the H1 mux would always report the
backend as the "other end". We need to assign the backend only on requests.
No backport is needed.
Today the demux only wakes a stream up after receiving some content, but
not necessarily on close or error. Let's do it based on both error flags
and both EOS flags. With a bit of refinement we should be able to only do
it when the pending bits are there but not the static ones.
No backport is needed.
This function is called when dealing with a connection error or a GOAWAY
frame. It used to report a synchronous error instead of an asynchronous
error, which can lead to data truncation since whatever is still available
in the rxbuf will be ignored. Let's correctly use CS_FL_ERR_PENDING instead
and only fall back to CS_FL_ERROR if CS_FL_EOS was already delivered.
No backport is needed.
If EOS has already been reported on the conn_stream, there won't be
any read anymore to turn ERR_PENDING into ERROR, so we have to report it
directly.
No backport is needed.
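Putting the two previous fixes together, the reporting logic roughly amounts to
the following sketch (simplified conn_stream and made-up flag values): errors
stay pending while data may still be read, and only become final once EOS was
seen.

    #include <stdbool.h>
    #include <stdint.h>

    #define CS_FL_EOS          0x00000001u  /* end of stream already reported */
    #define CS_FL_ERR_PENDING  0x00000002u  /* error seen, rxbuf may hold data */
    #define CS_FL_ERROR        0x00000004u  /* final error, nothing left to read */

    struct conn_stream { uint32_t flags; };

    /* Report a connection error (e.g. on GOAWAY) without truncating pending
     * data: keep it pending while reads may still happen, and make it final
     * only if EOS was already delivered, since no further read will do it.
     */
    void report_error(struct conn_stream *cs)
    {
        if (cs->flags & CS_FL_EOS)
            cs->flags |= CS_FL_ERROR;
        else
            cs->flags |= CS_FL_ERR_PENDING;
    }

    /* On the read side, once the rxbuf was fully drained and EOS is reached,
     * a pending error becomes a real one.
     */
    void read_done(struct conn_stream *cs, bool rxbuf_empty)
    {
        if (rxbuf_empty) {
            cs->flags |= CS_FL_EOS;
            if (cs->flags & CS_FL_ERR_PENDING)
                cs->flags |= CS_FL_ERROR;
        }
    }
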
Types DNS_SRVRQ and CS were not referenced in the type to string
conversions, causing possibly misleading outputs in session dumps.
Now instead of showing "NONE" for unknown or invalid type names, we
display "!INVAL!" to clear the confusion that may exist in case of
memory corruption for example.
It takes ages to proceed with "show fd" when there is sustained activity
because it uses the rendez-vous point for each and every file descriptor
in the loop. It's very common to see socat time out there.
Instead of doing this, let's just isolate the function when entering the
loop. Its duration is limited by the number of FDs that may be emitted in
a single buffer anyway, so it's much lighter and responds much faster.
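Structurally this boils down to the following sketch (stub isolation primitives,
not the real API): one isolation around the whole dump loop instead of a
rendez-vous per file descriptor.

    #include <stdio.h>

    #define MAX_FDS 16

    /* Stub isolation primitives standing in for the real ones; the point is
     * only the structure of the loop.
     */
    static void isolate_begin(void) { /* wait for the other threads to pause */ }
    static void isolate_end(void)   { /* let them run again */ }

    void dump_all_fds(void)
    {
        int fd;

        isolate_begin();        /* one synchronization for the whole dump */
        for (fd = 0; fd < MAX_FDS; fd++) {
            /* dump the fd state; bounded by what fits in a single buffer */
            printf("fd %4d: ...\n", fd);
        }
        isolate_end();
    }
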
The h2s pointer was used to scan fctl lists prior to being used to scan
the send list by ID, so it could appear non-null even though the list is
empty, resulting in misleading information on empty connections.
No backport is needed.
Most of the time when we issue "show fd" to dump a mux's state, it's
to figure why a transfer is frozen. Connection, stream and conn_stream
states are critical there. And most of the time when this happens there
is a single stream left in the H2 mux, so let's always dump the last
known stream on show fd, as most of the time it will be the one of
interest.
Cyril Bonté reported a bug in the way the cookie length is computed
when aggregating multiple cookies : the first cookie name was counted
as part of the value length, causing random contents to be placed there,
possibly leading to bad requests.
No backport is needed.
Commit 7505f94f9 ("MEDIUM: h2: Don't use a wake() method anymore.")
changed the conditions to restart demuxing so that this happens as soon
as something is read. But similarly to the previous fix, at an end of stream
we may be woken up with nothing to read but data still available in the
demux buffer, so we must also use this as a valid condition for demuxing.
No backport is needed, this is purely 1.9.
Commit 082f559d3 ("BUG/MEDIUM: h2: restart demuxing after releasing
buffer space") tried to address a situation where transfers could stall
after a read, but the condition was not completely covered : some stalls
may still happen at end of stream because there's nothing anymore to
receive and the last data lie in the demux buffer. Thus we must also
consider this state as a valid condition to restart demuxing.
No backport is needed.
Commit d94f877cd ("BUG/MINOR: mux_pt: Set CS_FL_WANT_ROOM when count is
zero in rcv_buf() callback") triggered a pending issue with this flag,
which is that it's cleared too late and sometimes causes some Rx
transfers to stall. We need to clear it before attempting to receive,
otherwise we risk acting on a stale copy of the flag.
Note that it should probably be defined that this flag could be purged
on each invocation of mux->rcv_buf(), which would make sense.
No backport is needed.
Add a new flag to conn_streams, CS_FL_ERR_PENDING. This is to be set instead
of CS_FL_ERR in case there's still more data to be read, so that we read all
the data before closing.
When count is zero in the function mux_pt_rcv_buf(), it means the channel's
buffer is full. So we need to set the CS_FL_WANT_ROOM on the
conn_stream. Otherwise, while the channel is full, we will loop trying to
receive more data.
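Combined with the follow-up fix above that clears the flag on each invocation,
the receive callback conceptually does this (simplified types and made-up flag
value, not the real mux_pt code):

    #include <stddef.h>
    #include <stdint.h>

    #define CS_FL_WANT_ROOM  0x00000001u

    struct conn_stream { uint32_t flags; };

    /* Receive callback sketch: the flag is cleared at each attempt, so that a
     * stale copy from a previous call cannot stall the transfer, and set again
     * when the caller has no room at all, so we stop being woken up in a loop
     * until some room is made in the channel buffer.
     */
    size_t rcv_buf_sketch(struct conn_stream *cs, char *buf, size_t count)
    {
        cs->flags &= ~CS_FL_WANT_ROOM;          /* cleared before each attempt */

        if (!count) {
            cs->flags |= CS_FL_WANT_ROOM;       /* channel full, wait for room */
            return 0;
        }

        /* ... read up to <count> bytes from the transport into <buf> ... */
        (void)buf;
        return 0;
    }
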
A bug was introduced when the buffers API was refactored: when wrapping input
data were compressed, the pointer b_peek(in, 0) was used instead of b_orig(in).
b_peek(in, 0) is in fact the same as b_head(in).
Released version 1.9-dev11 with the following main changes :
- BUG/MEDIUM: connection: Don't use the provided conn_stream if it was tried.
- REGTEST/MINOR: remove double body specification for server txresp
- BUG/MEDIUM: connections: Remove error flags when retrying.
- REGTEST/MINOR: skip seamless-reload test with abns socket on freebsd
- REGTEST/MINOR: remove health-check that can make the test fail
- DOC: clarify that check-sni needs an argument.
- DOC: refer to check-sni in the documentation of sni
- BUG/MEDIUM: mux-h2: fix encoding of non-GET/POST methods
- BUG/MINOR: mux-h1: Fix conn_mode processing for headerless outgoing messages
- BUG/MEDIUM: mux-h1: Add a BUSY mode to not loop on pipelinned requests
- BUG/MEDIUM: mux-h1: Don't loop on the headers parsing if the read0 was received
- BUG/MEDIUM: htx: Always do a defrag if a block value is replace by a bigger one
- BUG/MEDIUM: mux-h2: Don't forget to set the CS_FL_EOS flag with htx.
- BUG/MINOR: hpack: fix off-by-one in header name encoding length calculation
- CLEANUP: hpack: no need to include chunk.h, only include buf.h
- MINOR: hpack: simplify the len to bytes conversion
- MINOR: hpack: use ist2bin() to copy header names in hpack_encode_header()
- MINOR: hpack: optimize header encoding for short names
- CONTRIB: hpack: add a compressed stream generator for the encoder
- MEDIUM: hpack: make it possible to encode any static header name
- MINOR: hpack: move the length computation and encoding functions to .h
- MINOR: hpack: provide a function to encode a short indexed header
- MINOR: hpack: provide a function to encode a long indexed header
- MINOR: hpack: provide new functions to encode the ":status" header
- MEDIUM: mux-h2: make use of standard HPACK encoding functions for the status
- MINOR: hpack: provide a function to encode an HTTP method
- MEDIUM: mux-h2: make use of hpack_encode_method() to encode the method
- MINOR: hpack: provide a function to encode an HTTP scheme
- MEDIUM: mux-h2: make use of hpack_encode_scheme() to encode the scheme
- MINOR: hpack: provide a function to encode an HTTP path
- MEDIUM: mux-h2: make use of hpack_encode_path() to encode the path
- REGTEST: add the HTTP rules test involving HTX processing
- REORG: connection: centralize the conn_set_{tos,mark,quickack} functions
- MEDIUM: cli: rework the CLI proxy parser
- MINOR: cli: parse prompt command in the CLI proxy
- MINOR: cli: implements 'quit' in the CLI proxy
- BUG/MINOR: cli: wait for payload data even without prompt
- MEDIUM: cli: handle payload in CLI proxy
- MINOR: cli: use pcli_flags for prompt activation
- MINOR: compression: Rename the function check_legacy_http_comp_flt()
- MINOR: cache/htx: Don't use the same cache on HTX and legacy HTTP proxies
- MINOR: cache: Register the cache as a data filter only if response is cacheable
- MEDIUM: cache/htx: Add the HTX support into the cache
- MINOR: cache: Improve and simplify the cache configuration check
- MINOR: filters: Export the name of known filters
- MEDIUM: cache/compression: Add a way to safely combined compression and cache
- MEDIUM: cache: Require an explicit filter declaration if other filters are used
- REORG: htx: merge types+proto into common/htx.h
- REORG: http: create http_msg.c to place there some legacy HTTP parts
- REORG: h1: move legacy http functions to http_msg.c
- REORG: h1: move the h1_state definition to proto_http
- CLEANUP: h1: remove some occurrences of unneeded h1.h inclusions
- REORG: h1: merge types+proto into common/h1.h
- CLEANUP: stream: remove SF_TUNNEL, SF_INITIALIZED, SF_CONN_TAR
- MEDIUM: mux-h1: implement true zero-copy of DATA blocks
- MINOR: config: round up global.tune.bufsize to the next multiple of 2 void*
- BUG/MINOR: mux-h2: refrain from muxing during the preface
- BUG/MINOR: mux-h2: advertise a larger connection window size
- DOC: master CLI documentation in management.txt
- MINOR: mux-h2: avoid copying large blocks into full buffers
- MEDIUM: mux-h2: implement true zero-copy send of large HTX DATA blocks
- MINOR: mux-h2: force reads to be HTX-aligned in HTX mode
- MINOR: cli: change 'show proc' output of old processes
- BUG/MEDIUM: mux-h1: Fix the zero-copy on output for chunked messages
- BUG: dns: Prevent stack-exhaustion via recursion loop in dns_read_name
- BUG: dns: Prevent out-of-bounds read in dns_read_name()
- BUG: dns: Prevent out-of-bounds read in dns_validate_dns_response()
- BUG: dns: Fix out-of-bounds read via signedness error in dns_validate_dns_response()
- BUG: dns: Fix off-by-one write in dns_validate_dns_response()
- REGTEST: the cache regtest requires haproxy 1.9
- MEDIUM: cli: store CLI level in the appctx
- MEDIUM: cli: show and change CLI permissions
- CLEANUP: cli: use dedicated define instead of appctx ones
- MEDIUM: cli: handle CLI level from the master CLI
- BUG/MEDIUM: cli: handle correctly prefix and payload
- BUILD: Makefile: Implements the help target
- REGTESTS: adjust the http-rules regtest to support window updates
- BUG/MEDIUM: connections: Remove CS_FL_EOS | CS_FL_REOS on retry.
- BUG/MEDIUM: stream_interface: Don't report read0 if we were not connected.
- BUG/MEDIUM: connection: Just make sure we closed the fd on connection failure.
- MEDIUM: mux: Add an optional "reset" method.
- BUG/MEDIUM: mux-h1: Fix loop if server closes its connection with unparsed data
- MINOR: mux-h1: Add helper functions to wake a stream from recv or send
- BUG/MEDIUM: mux-h1: Wake the stream for send once the connection is established
- BUG/MEDIUM: connections: Don't attempt to reuse an unusable connection.
- MEDIUM: htx: Try to take a connection over if it has no owner.
- REGTEST: Reg testing improvements.
- REGTEST: Add a first test for health-checks.
- REGTEST: Reg test for "check" health-check option.
- REGTEST: level 1 health-check test 2.
- REGTEST: Add miscellaneous reg tests for health-checks.
- REGTEST: add a few HTTP messaging tests
- MINOR: lb: make the leastconn algorithm more accurate
- REGTEST: fix missing space in checks/s00001
- REGTEST: http-messaging: add "option http-buffer-request" for H2 tests
- BUG/MEDIUM: cache: fix random crash on filter parser's error path
- MINOR: connection: realign empty buffers in muxes, not transport layers
- MINOR: mux_h1/h2: simplify the zero-copy Rx alignment
- MINOR: backend: count the number of connect and reuse per server and per backend
- BUG/MINOR: stats: fix inversion of failed header rewrites and other statuses
- MINOR: tools: increase the number of ITOA strings to 16
- MINOR: cache: report the number of cache lookups and cache hits
- MEDIUM: tasks: check the global task mask instead of the thread number
- MINOR: mworker: set all_threads_mask and pid_bit to 1
- BUG/MINOR: proto_htx: Fix htx_res_set_status to also set the reason
- BUG/MINOR: stats: Parse post data for HTX streams
- MINOR: payload/htx: Adapt smp_fetch_len to be HTX aware
- MINOR: http_fecth: Implement body_len and body_size sample fetches for the HTX
- MAJOR: lua: Forbid calls to Channel functions for LUA scripts in HTTP proxies
- MEDIUM: lua/htx: Adapt functions of the HTTP to be compatible with HTX
- MINOR: lua/htx: Adapt the functions get_in_length and is_full to be HTX aware
- MAJOR: lua/htx: Adapt HTTP applets to support HTX messages
- MINOR: lua: Remove useless check on the messages state in HTTP functions
- BUG/MEDIUM: htx: When performing zero-copy, start from the right offset.
- BUG/MINOR: mworker: don't use unitialized mworker_proc struct
- MINOR: mworker/cli: indicate in the master prompt when a reload failed
- MINOR: cli: implements 'reload' on master CLI
- BUG/MEDIUM: log: Don't call sample_fetch_as_type if we don't have a stream.
- BUG/MEDIUM: mux-h1: make sure we always have at least one HTX block to send
- BUG/MAJOR: backend: only update server's counters when the server exists
- MINOR: tools: preset the port of fd-based "sockets" to zero
- BUG/MINOR: log: fix logging to both FD and IP
- REGTEST: Add a reg test for HTTP cookies.
- BUILD: ssl: Fix compilation without deprecated OpenSSL 1.1 APIs
- BUILD: thread: properly report multi-thread support
- BUG/MINOR: logs: leave startup-logs global and not per-thread
- BUG/MEDIUM: threads: don't close the thread waker pipe if not init
- BUG/MAJOR: compression/cache: Make it really works with these both filters
- BUG/MEDIUM: h2: Don't forget to destroy the h2s after deferred shut.
- MEDIUM: proxy: Set http-reuse safe as default.
- MEDIUM: servers: Add a command to limit the number of idling connections.
- MEDIUM: servers: Replace idle-timeout with pool-purge-delay.
- MEDIUM: mux: Destroy the stream before trying to add the conn to the idle list.
- MEDIUM: mux: provide the session to the init() and attach() method.
- MEDIUM: sessions: Don't keep an infinite number of idling connections.
- MEDIUM: servers: Be more agressive when adding H2 connection to idle lists.
- MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new conn_streams.
- BUG/MEDIUM: htx/cache: use the correct class of error codes on abort
- BUG/MINOR: cache: also consider CF_SHUTR to abort delivery
- MINOR: pools: Cast to volatile int * instead of int *.
- MINOR: debug: make the ABORT_NOW macro use a volatile int
- BUG/MEDIUM: h2: Don't destroy the h2s if it still has a cs attached.
- BUG/MEDIUM: mux-h1: don't try to process an empty input buffer
- DOC: clarify the agent-check status line syntax
- BUG/MAJOR: hpack: fix length check for short names encoding
- DOC: split the README into README + INSTALL
The README was barely usable after all the additions having accumulated
over the years. This patch introduces a new INSTALL file explaining how
to build and install haproxy with various levels of detail. The README
is now mostly an index to the list of useful documentations.
Commit 19ed92b ("MINOR: hpack: optimize header encoding for short names")
introduced an error in the space computation for short names, as it removed
the length encoding from the count without replacing it with 1 (the minimum of
one byte). This results in the last byte of the area being occasionally
overwritten, which is immediately detected with -DDEBUG_MEMORY_POOLS as
the canary at the end gets overwritten.
No backport is needed.
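Concretely, even for a short name the encoder must reserve one byte for the name
length prefix on top of the representation byte and the raw name; a hedged
sketch of that check (not the real hpack encoder, value encoding omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Encode a literal header field with a short (< 127 bytes), non-huffman
     * name. The point of the fix: the space check must count one byte for the
     * name length prefix in addition to the representation byte, otherwise
     * the output area can be overrun by one byte.
     */
    size_t encode_short_name(uint8_t *out, size_t room,
                             const char *name, size_t name_len)
    {
        if (name_len >= 127)
            return 0;                   /* out of scope for the short path */
        if (room < 1 + 1 + name_len)    /* representation + length + name */
            return 0;                   /* not enough space */

        out[0] = 0x00;                  /* literal without indexing, new name */
        out[1] = (uint8_t)name_len;     /* single length byte, no huffman */
        memcpy(out + 2, name, name_len);
        return 2 + name_len;
    }
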
Nick Ramirez reported that the wording is confusing and lets one think
that the CR and LF are both optional, which is not the case (either one may be
omitted, but not both). Let's reformulate this.
h1_process_input() may occasionally be called with an empty input
buffer, and the code behind cannot deal with that, let's check the
condition at the beginning.
No backport is needed.
In h2_deferred_shut, if we're done sending the shutr/shutw, don't destroy
the h2s if it still has a conn_stream attached, or the conn_stream may try
to access it again.
When using DEBUG_MEMORY_POOLS, when we want to crash, instead of using
*(int *)0 = 0, use *(volatile int *)0 = 0, or clang will just translate it
to a nop, instead of dereferencing 0.
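The change boils down to the following (the macro name comes from the changelog
entry above):

    /* before: clang may silently drop the store since it's undefined behaviour */
    #define ABORT_NOW_OLD() (*(int *)0 = 0)

    /* after: the volatile access forces the null dereference to be emitted */
    #define ABORT_NOW() (*(volatile int *)0 = 0)
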
The cache runs in an applet, so it delivers data into the input side
of the channel's buffer. Thus it must also abort feeding the buffer
as soon as CF_SHUTR is present, not just CF_SHUTW*, since these last
ones may only appear later. There doesn't seem to be an observable
side effect of this bug, the fix probably doesn't even need to be
backported.
The HTX-specific cache code uses HTX_CACHE_* states which overlap with
the legacy HTTP states. A typo in the error handling made the state
become HTTP_CACHE_END, which equals 3 and is the value for HTX_CACHE_EOD,
which explains why we were seeing a transition to trailers and memory
corruption.
No backport is needed.
When creating new conn_streams, always set the CS_FL_NOT_FIRST flag. We
don't really care about being the first request for HTTP/2, this only
really makes sense for HTTP/1, and that way we can reuse connections.
Add the newly created connection to the idle list as long as http-reuse !=
never, and when completing an H2 request, add the connection to the safe list
instead of the idle list if we have to add it at that point: that means we
created many streams, so we know it's safe.
In the session, don't keep an unlimited number of connections that can idle.
Add a new frontend parameter, "max-session-srv-conns", to set a max number,
with a default value of 5.
Instead of trying to get the session from the connection, which is not
always there, and of course there could be multiple sessions per connection,
provide it with the init() and attach() methods, so that we know the
session for each outgoing stream.
In the mux_h1 and mux_h2, move the test to see if we should add the connection
to the idle list until after the h1s/h2s has been destroyed; that way, later
we'll be able to check whether the connection has no stream at all, and whether
it should be added to the server's idle list.
Instead of the old "idle-timeout" mechanism, add a new option,
"pool-purge-delay", that sets the delay before purging idle connections.
Each time the delay happens, we destroy half of the idle connections.
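As a rough sketch of the purge step (toy list, not the real server code), each
time the delay expires half of the connections currently idling are closed:

    #include <stdlib.h>

    /* Toy idle-connection list; the real code walks the server's idle lists. */
    struct idle_conn {
        struct idle_conn *next;
    };

    /* Purge task sketch: close half of the idle connections (rounded up here),
     * oldest first.
     */
    void purge_half(struct idle_conn **head, unsigned int count)
    {
        unsigned int to_kill = (count + 1) / 2;

        while (to_kill-- && *head) {
            struct idle_conn *conn = *head;

            *head = conn->next;
            free(conn);         /* stands for closing the idle connection */
        }
    }
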
Add a new command, "pool-max-conn" that sets the maximum number of connections
waiting in the orphan idling connections list (as activated with idle-timeout).
Using "-1" means unlimited. Using pools is now dependant on this.
Change the default for http-reuse from "never" to "safe", as it has been
the recommended setting for a few versions now and backend H2 makes little
sense without it.
Some warnings were removed from the config parser since it can dynamically
be disabled depending on the server's configuration, so there's no need to
disable it on a whole backend just for one server.
Caching the response with compression enabled was totally broken. To fix the
problem, the compression must be done after caching the response. Otherwise the
cache would need to be changed to store both compressed and uncompressed
objects for the same resource. So, because this is not possible for now, it is
forbidden to declare the compression filter before the cache one. To ease the
configuration, both can be implicitly declared (without the "filter" keyword).
The compression will automatically be inserted after the cache.
Then, to make it work this way, the compression filter has been slightly
modified. Now, the response headers are updated after the http-response rules
evaluation, instead of before. So, if the response contains a "Content-Length"
header, it will be kept with the response stored in the cache, and this cached
response can be served to clients that do not support compression at all.
This bugfix concerns the thread deinit but affects the master process.
When the master process falls in wait mode (it fails to reload the
configuration), it launches deinit_pollers_per_thread and closes the thread
waker pipe. It closes rd (-1) and wr (0).
Closing an FD in the master can have several side effects and the process will
probably quit at some point.
In this case it assigns 0 to the socketpair of a worker during the next
correct reload, and then closes the socketpair once it falls in wait
mode again. The worker assumes that the master died and leaves.
Commit f8188c6 ("MEDIUM: threads/logs: Make logs thread-safe") made logs
thread-local but it also made the copy of the startup-logs thread-local,
meaning that when threads are configured, upon startup the list of startup
logs appears to be empty. Let's just remove the THREAD_LOCAL directive
there, as the check for the startup period is already present.
This fix should be backported to 1.8.
When refactoring the build option strings in 1.9, the thread support
was placed outside of the ifdef block resulting in threads always being
mentioned even if that was not true. Let's fix this and also mention
when threads are disabled to help troubleshooting.
Removing deprecated APIs is an optional part of OpenWrt's build system to
save some space on embedded devices.
Also added compatibility for LibreSSL.
Signed-off-by: Rosen Penev <rosenp@gmail.com>
This script tests the "cookie <name> insert indirect" directive with
header checks on server and client side. syslog messages are also
checked, especially --II (invalid, insert) flags logging.
Signed-off-by: Frédéric Lécaille <flecaille@haproxy.com>
PiBa-NL reported an issue affecting logs when stdout is enabled at the
same time as an IP address. It does not affect FD and UNIX, but does
still affect multiple FDs. What happens is that the condition to detect
that the initialization was not made relies on the FD being -1, and in
this case the FD points to the *unique* FD used for AF_INET sockets, so
the configured socket used for outgoing logs over UDP gets overwritten
by the last configured FD. This is not appropriate, so instead we rely
on the sin_port part of the IPv4-mapped address to store the
initialization state for each FD.
This part deserves being significantly revamped, as IPv6 is still not
possible due to the way the FDs are managed, and inherited FDs are a
bit hackish.
Note that this patch relies on "MINOR: tools: preset the port of
fd-based "sockets" to zero" in order to operate properly.
No backport is needed.
Addresses made of a file descriptor store the file descriptor into the
address part of a sin_addr. Contrary to other address classes, there's
no way to figure out later, based on the FD, whether an initialization was done
(which is how logs initialize their FDs). The port part is currently
left with random data, so let's instead specifically set the port part
to zero when creating an FD, and let the code using it set whatever
info it needs there, typically an initialization state.
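A sketch of the convention described here (simplified, not the real address
parser):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>

    /* Build an AF_INET "address" that actually carries a file descriptor: the
     * fd goes into the address part and the port part is preset to zero, so
     * that the code using it (e.g. the log sender) can later store its own
     * initialization state there.
     */
    void set_fd_address(struct sockaddr_in *sin, int fd)
    {
        memset(sin, 0, sizeof(*sin));
        sin->sin_family = AF_INET;
        sin->sin_addr.s_addr = (in_addr_t)fd;  /* fd stored in the address part */
        sin->sin_port = 0;                     /* "not initialized yet" marker */
    }

    /* Later, the user of this fd flags it as initialized. */
    void mark_fd_initialized(struct sockaddr_in *sin)
    {
        sin->sin_port = htons(1);
    }
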