To do so, a dedicated configuration has been added on cache filters. Before,
the cache filter configuration pointed directly to the cache it used. Now, it
points to a dedicated structure, cache_flt_conf. Store and use rules also point
to this structure. It is linked to the cache the filter must use. It also
contains a flags field, which will allow us to define the behavior of a cache
filter when a response is stored in the cache or delivered from it.
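A minimal sketch of what such a structure can look like (the union holding the
cache name until it is resolved during the configuration check is an assumption
based on how such configs are usually parsed):

    struct cache_flt_conf {
        union {
            struct cache *cache; /* cache the filter must use */
            char *name;          /* cache name, resolved during the check */
        } c;
        unsigned int flags;      /* behavior when storing/delivering */
    };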
Store and use rules now use a common parsing function, so a filter is always
created for both kinds of rules if it does not already exist. The cache
filters' configuration is checked using their check callback. In the postparser
function, we only check the caches' configuration. This removes the loop on all
proxies in the postparser function.
The cache is now able to store and resend HTX messages. When an HTX message is
stored in the cache, the headers are prefixed with their block's info (a
uint32_t) containing its type and its length. Data, on their side, are stored
without any prefix; only the value is copied into the cache. Two fields have
been added to the structure cache_entry, hdrs_len and data_len, to know the
size, in the cache, of the headers part and the data part. If the message is
chunked, the trailers are also copied, the same way as the data. When the HTX
message is recreated in the cache applet, the trailers' size is found by
removing the headers length and the data length from the total object length.
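A hedged sketch of the headers storage step described above, using existing HTX
helpers (the actual copy into the shared cache area is simplified to an append
into the trash chunk):

    struct htx_blk *blk;

    for (blk = htx_get_head_blk(htx); blk; blk = htx_get_next_blk(htx, blk)) {
        uint32_t info = blk->info;            /* block's type and length */
        uint32_t sz   = htx_get_blksz(blk);

        chunk_memcat(&trash, (char *)&info, sizeof(info));   /* the prefix */
        chunk_memcat(&trash, htx_get_blk_ptr(htx, blk), sz); /* the payload */
    }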
Instead of calling register_data_filter() when the stream analysis starts, we
now call it when we are sure the response is cacheable. It is done in the
http_headers callback, just before the body analysis, and only if the headers
have already been cached. And during the body analysis, if an error occurs or
if the response is too big, we unregister the cache immediately.
This patch may be backported to 1.8. It is not a bug but a significant
improvement.
It is not possible to mix the formats of messages stored in a cache. So we
reject configurations where a cache is used by both an HTX proxy and a legacy
HTTP proxy at the same time.
The CLI proxy was not handling payloads. To do that, we need to keep a
connection active on a server and to transfer each new line over that
connection until we receive an empty line.
The CLI proxy now handles the payload in the same way the CLI does.
Examples:

   $ echo -e "@1;add map #-1 <<\n$(cat data)\n" | socat /tmp/master-socket -

   $ socat /tmp/master-socket readline
   prompt
   master> @1
   25130> add map #-1 <<
   + test test
   + test2 test2
   + test3 test3
   +
   25130>
During a payload transfer, we need to wait for the data even when we are
not in interactive mode. Indeed, the data could be received progressively,
line by line, instead of in one recv.
Previously the CLI was doing a SHUTW just after the first line if it was
not in interactive mode. We now also check that we are not in payload mode
before doing the SHUTW.
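A hedged sketch of the check (the flag and function names follow the CLI's
usual conventions but are assumptions, not copied from the patch):

    /* only shut the write side when non-interactive AND no payload pending */
    if (!(appctx->st1 & (APPCTX_CLI_ST1_PROMPT | APPCTX_CLI_ST1_PAYLOAD)))
        si_shutw(si);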
Should be backported to 1.8.
Rework the CLI proxy parser to look more like the CLI parser; corner
cases and escaping are handled the same way.
The parser now splits the commands into words instead of just handling
the prefixes.
It's easier this way to compare the words and arguments of a command and to
parse internal commands that will be consumed directly by the CLI proxy.
There were a number of ugly setsockopt() calls spread all over
proto_http.c, proto_htx.c and hlua.c just to manipulate the front
connection's TOS, mark or TCP quick-ack. These ones entirely relied
on the connection, its existence, its control layer's presence, and
its addresses. Worse, inet_set_tos() was placed in proto_http.c,
exported and used from the two other files, surrounded by #ifdefs.
This patch moves this code to connection.h and makes the other call
places rely on it without ifdefs.
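A hedged sketch of what such a helper can look like once centralized in
connection.h (the conn_set_tos() name and field accesses are assumptions
matching the commit's intent):

    static inline void conn_set_tos(const struct connection *conn, int tos)
    {
        if (!conn || !conn_ctrl_ready(conn))
            return;
    #ifdef IP_TOS
        if (conn->addr.from.ss_family == AF_INET)
            setsockopt(conn->handle.fd, IPPROTO_IP, IP_TOS,
                       &tos, sizeof(tos));
    #endif
    #ifdef IPV6_TCLASS
        if (conn->addr.from.ss_family == AF_INET6)
            setsockopt(conn->handle.fd, IPPROTO_IPV6, IPV6_TCLASS,
                       &tos, sizeof(tos));
    #endif
    }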
The new function hpack_encode_path() supports encoding a path into
the ":path" header. It knows about "/" and "/index.html" which use
a single byte, and falls back to literal encoding for other ones,
with a fast path for short paths < 127 bytes.
The new function hpack_encode_scheme() supports encoding a scheme
into the ":scheme" header. It knows about "https" and "http" which use
a single byte, and falls back to literal encoding for other ones.
The new function hpack_encode_method() supports encoding a method.
It knows about GET and POST which use a single byte, and falls back
to literal encoding for other ones.
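Taken together, these helpers can serialize a typical H2 request start line;
a hedged usage sketch (the assumption here is that each function returns
zero when the output buffer lacks room; the single bytes shown are the HPACK
static table indexes):

    /* GET, https, "/"  ->  0x82 0x87 0x84 in the output buffer */
    if (!hpack_encode_method(&outbuf, ist("GET")) ||
        !hpack_encode_scheme(&outbuf, ist("https")) ||
        !hpack_encode_path(&outbuf, ist("/")))
        goto full; /* not enough room in the output buffer */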
This way we don't open-code the HPACK status codes anymore in the H2
code. Special care was taken not to cause any slowdown as this code is
very sensitive.
The ":status" header exists with 7 different values in the static table,
so it's worth taking them into account for the encoding, hence these
functions. One of them takes only an integer and computes the 3 output
digit bytes itself in the literal case. The other one benefits from the
knowledge of an existing string, which for example exists in the case of
H1-to-H2 encoding.
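For a non-indexed code such as 302, the integer variant can emit the literal
directly; a hedged sketch (the 0x08 prefix is "literal header field without
indexing, name = static index 8, i.e. :status" from RFC 7541):

    out[len++] = 0x08;                      /* literal, name = static idx 8 */
    out[len++] = 0x03;                      /* value length: 3 digits */
    out[len++] = '0' + (status / 100) % 10;
    out[len++] = '0' + (status / 10) % 10;
    out[len++] = '0' + status % 10;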
For long header values whose index is known, hpack_encode_long_idx()
may now be used. This function emits the short index and follows with
the header's value.
Most direct calls to HPACK functions are made to encode short header
fields like methods, schemes or statuses, whose lengths and indexes
are known. Let's have a small function to do this.
We'll need these functions from other inline functions, let's make them
accessible. len_to_bytes() was renamed to hpack_len_to_bytes() since it's
now exposed.
We used to have a series of well-known header fields that were looked
up, but most of them were not. The current model couldn't scale with
the addition of the new headers or pseudo-headers required to process
requests, resulting in their encoding being hard-coded in the caller.
This patch implements a quick lookup which retrieves any header from
the static table. A binary stream is made of header names prefixed by
lengths and indexes. These header names are sorted by length, then by
frequency, then by direction (preference for responses), then by name,
and only the lowest index of each is stored in case of multiple
entries. A parallel length index table provides the index of the first
header for a given string length. This allows the lookup to focus on the
first few values matching the same length.
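A minimal illustration of the principle (enc_stream and enc_len_idx are
hypothetical names for the generated tables, not the real symbols):

    /* each entry: one length byte, one static-table index byte, the name;
     * enc_len_idx[l] points at the first entry whose name has length l */
    static int hdr_idx_lookup(const char *name, int len)
    {
        const unsigned char *p = enc_stream + enc_len_idx[len];

        while (*p == len) {
            if (memcmp(p + 2, name, len) == 0)
                return p[1];      /* found: static table index */
            p += 2 + len;         /* next candidate of the same length */
        }
        return 0;                 /* not in the static table */
    }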
Everything was made to limit the cache footprint. Interestingly, the
lookup ends up being slightly faster than the previous one, while
covering the 54 distinct headers instead of only 10.
A test with a curl request and a basic response showed that the request
size has dropped from 85 to 56 bytes and that the response size has
dropped from 197 to 170 bytes, thus we can now shave roughly 25-30 bytes
per message.
This generates the tables and indexes which will be used by the HPACK
encoder. The headers are sorted by length, then by statistical frequency,
then by direction (preference for responses), then by name, then by index.
The purpose is to speed up their lookup.
For unknown fields, since we know that most of them are less than 127
characters, we don't need to go through the loop and can instead directly
emit the one-byte length encoding. This increases the request rate by
approximately 0.5%.
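A hedged sketch of this fast path (HPACK string lengths use a 7-bit prefix
integer, so any length below 127 fits in a single byte; variable names are
illustrative):

    if (len < 127) {
        out[pos++] = len;             /* one-byte length, the common case */
    } else {
        out[pos++] = 127;             /* prefix saturated ... */
        len -= 127;
        while (len >= 128) {          /* ... then 7-bit continuation bytes */
            out[pos++] = (len & 0x7f) | 0x80;
            len >>= 7;
        }
        out[pos++] = len;
    }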
memcpy() tends to be overkill to copy short strings, better use ist's
naive functions for this. This shows a consistent 1.2% performance
gain with h2load.
The len-to-bytes conversion can be slightly simplified and optimized
by hardcoding a tree lookup. Just doing this increases by 1% the
request rate on H2. It could be made almost branch-free by using
fls() but it looks overkill for most situations since most headers
are very short.
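A hedged reconstruction of such a hardcoded tree (the thresholds follow from
the 7-bit prefix integer encoding used for HPACK string lengths):

    static inline size_t hpack_len_to_bytes(size_t len)
    {
        if (len < 127)                  /* almost always true for headers */
            return 1;
        if (len < 127 + (1 << 7))       /* one continuation byte */
            return 2;
        if (len < 127 + (1 << 14))      /* two continuation bytes */
            return 3;
        return 4;
    }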
In hpack_encode_header() there is a length check to verify that a literal
header name fits in the buffer, but there is an off-by-one in this length
check, which forgets the byte required to mark the encoding type (literal
without indexing). It should be harmless though, as it cannot be triggered
since response headers passing through haproxy are limited by the reserve,
which is not the case of the output buffer.
This fix should be backported to 1.8.
Otherwise, after such replacements, the HTX message appears to wrap but the
head block address is not necessarily the first one. So adding new blocks
will overwrite data of old ones.
If a server sends part of the headers and then closes its connection, the H1
mux remains blocked in an infinite loop trying to read more data to finish the
parsing of the message. The flag CS_FL_REOS is set on the conn_stream. But
because there are some data in the input buffer, CS_FL_EOS is never set.
To fix the bug, in h1_process_input(), when CS_FL_REOS is set on the
conn_stream, we also set CS_FL_EOS if the input buffer is empty OR if the
channel's buffer is empty.
When a request is fully processed, no more data are parsed until the response is
totally processed and a new transaction starts. But during this time, the mux is
trying to read more data and subscribes to read. If requests are pipelined, we
start to receive the next requests which will stay in the input buffer, leading
to a loop consuming all the CPU. This loop ends when the transaction ends. To
avoid this loop, the flag H1C_F_IN_BUSY has been added. It is set when the
request is fully parsed and unset when the transaction ends. Once set on H1C, it
blocks the reads. So the mux never tries to receive more data in this state.
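A hedged sketch of how the flag can gate the receive path (the surrounding
"is receipt allowed" check of the mux is assumed):

    /* while the previous transaction is not finished, don't read the
     * next pipelined request: it would just spin in the input buffer */
    if (h1c->flags & H1C_F_IN_BUSY)
        return 0;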
The condition to process the connection mode on outgoing messages without a
'Connection' header was wrong. It relied on the wrong H1M state:
H1_MSG_HDR_L2_LWS is only a possible state for messages with at least one
header. Now, to fix the bug, we just check that the H1M state is not
H1_MSG_LAST_LF. So we have the guarantee that the EOH was not processed yet.
Jerome reported that outgoing H2 failed for methods different from GET
or POST. It turns out that the HPACK encoding is performed by hand in
the outgoing headers encoding function and that the data length was not
incremented to cover the literal method value, resulting in a corrupted
HEADERS frame.
Admittedly this code should move to the generic HPACK code.
No backport is needed.
Make it obvious in the description of the sni directive that it cannot
be used for health checks, and refer to the appropriate directive.
This can be backported to 1.8 as check-sni appeared in 1.8.
Make it more obvious that check-sni requires an argument, and that
it can only be a string. Also refer to sni for proxied traffic.
This can be backported to 1.8 as check-sni appeared in 1.8.
fix http-rules/h00000.vtc: both 'bodylen' and 'body' are specified; these
settings conflict with each other as they both generate/present the body to
send.
In connect_server(), don't attempt to reuse the conn_stream associated with
the stream_interface if we already attempted a connection with it.
Using that conn_stream is only there for the cases where a connection and
a conn_stream were created ahead of time, mostly by http_proxy or by the LUA
code. If we already attempted to connect, that means we failed, and so we
should create a new connection.
No backport needed.
Released version 1.9-dev10 with the following main changes :
- MINOR: htx: Rename functions htx_*_to_str() to be H1 specific
- BUG/MINOR: htx: Force HTTP/1.1 on H1 formatting when version is 1.1 or above
- BUG/MINOR: fix ssl_fc_alpn and actually add ssl_bc_alpn
- BUG/MEDIUM: mworker: stop proxies which have no listener in the master
- BUG/MEDIUM: h1: Destroy a connection after detach if it has no owner.
- BUG/MEDIUM: h2: Don't forget to wake the tasklet after shutr/shutw.
- BUG/MINOR: flt_trace/compression: Use the right flag to add the HTX support
- BUG/MEDIUM: stream_interface: Make REALLY sure we read all the data.
- MEDIUM: mux-h1: Revamp the way subscriptions are handled.
- BUG/MEDIUM: mux-h1: Always set CS_FL_RCV_MORE when data are received in h1_recv()
- MINOR: mux-h1: Make sure to return 1 in h1_recv() when needed
- BUG/MEDIUM: mux-h1: Release the mux H1 in h1_process() if there is no h1s
- BUG/MINOR: proto_htx: Truncate the request when an error is detected
- BUG/MEDIUM: h2: When sending in HTX, make sure the caller knows we sent all.
- BUG/MEDIUM: mux-h2: properly update the window size in HTX mode
- BUG/MEDIUM: mux-h2: make sure to always report HTX EOM when consumed by headers
- BUG/MEDIUM: mux-h2: stop sending HTX once the mux is blocked
- BUG/MEDIUM: mux-h2: don't send more HTX data than requested
- MINOR: mux-h2: stop on non-DATA and non-EOM HTX blocks
- BUG/MEDIUM: h1: Correctly report used data with no len.
- MEDIUM: h1: Realign the ibuf before calling rcv_buf if needed.
- BUG/MEDIUM: mux_pt: Always set CS_FL_RCV_MORE.
- MINOR: htx: make htx_from_buf() adjust the size only on new buffers
- MINOR: htx: add buf_room_for_htx_data() to help optimize buffer transfers
- MEDIUM: mux-h1: make use of buf_room_for_htx_data() instead of b_room()
- MEDIUM: mux-h1: attempt to zero-copy Rx DATA transfers
- MEDIUM: mux-h1: avoid a double copy on the Tx path whenever possible
- BUG/MEDIUM: stream-int: don't mark as blocked an empty buffer on Rx
- BUG/MINOR: mux-h1: Check h1m flags to set the server conn_mode on request path
- MEDIUM: htx: Rework conversion from a buffer to an htx structure
- MEDIUM: channel/htx: Add functions for forward HTX data
- MINOR: mux-h1: Don't adjust anymore the amount of data sent in h1_snd_buf()
- CLEANUP: htx: Fix indentation here and there in HTX files
- MINOR: mux-h1: Allow partial data consumption during outgoing data processing
- BUG/MEDIUM: mux-h2: use the correct offset for the HTX start line
- BUG/MEDIUM: mux-h2: stop sending using HTX on errors
- MINOR: mux-h1: Drain obuf if the output is closed after sending data
- BUG/MEDIUM: mworker: stop every tasks in the master
- BUG/MEDIUM: htx: Set the right start-line offset after a defrag
- BUG/MEDIUM: stream: Don't dereference s->txn when it is not there yet.
- BUG/MEDIUM: connections: Reuse an already attached conn_stream.
- MINOR: stream-int: add a new blocking condition on the remote connection
- BUG/MEDIUM: stream-int: don't attempt to receive if the connection is not established
- BUG/MEDIUM: lua: block on remote connection establishment
- BUG/MEDIUM: mworker: fix several typos in mworker_cleantasks()
- SCRIPTS/REGTEST: merge grep+sed into sed in run-regtests
- BUG/MEDIUM: connections: Split CS_FL_RCV_MORE into 2 flags.
- BUG/MEDIUM: h1: Don't free the connection if it's an outgoing connection.
- BUG/MEDIUM: h1: Set CS_FL_REOS if we had a read0.
- BUG/MEDIUM: mux-h1: Be sure to have a conn_stream to set CS_FL_REOS in h1_recv
- REGTEST: Move LUA reg test 4 to level 1.
- MINOR: ist: add functions to copy/uppercase/lowercase into a buffer or string
- MEDIUM: ist: always turn header names to lower case
- MINOR: h2: don't turn HTX header names to lower case anymore
- MEDIUM: ist: use local conversion arrays to case conversion
- MINOR: htx: switch to case sensitive search of lower case header names
- MINOR: mux-h1: Set CS_FL_EOS when read0 is detected and no data are pending
- BUG/MINOR: stream-int: Process read0 even if no data was received in si_cs_recv
- REGTEST: fix the Lua test file name in test lua/h00002 :-)
- REGTEST: add a basic test for HTTP rules manipulating headers
- BUG/MEDIUM: sample: Don't treat SMP_T_METH as SMP_T_STR.
- MINOR: sample: add bc_http_major
- BUG/MEDIUM: htx: fix typo in htx_replace_stline() making it fail all the time
- REGTEST: make the HTTP rules test compatible with HTTP/2 as well
- BUG/MEDIUM: h2: Don't try to chunk data when using HTX.
- MINOR: compiler: add a new macro ALREADY_CHECKED()
- BUILD: h2: mark the start line already checked to avoid warnings
- BUG/MINOR: mux-h1: Remove the connection header when it is useless
When the connection mode can be deduced from the HTTP version, we remove the
redundant connection header. So a "keep-alive" connection header is removed
from HTTP/1.1 messages and a "close" connection header is removed from
HTTP/1.0 messages.
Gcc 7 warns about a potential null pointer deref that cannot happen
since the start line block is guaranteed to be present in the functions
where it's dereferenced. Let's mark it as already checked.
This macro may be used to block constant propagation that lets the compiler
detect a possible NULL dereference on a variable resulting from an explicit
assignment in an impossible check. Sometimes a function is called which does
safety checks and returns NULL if safe conditions are not met. The place
where it's called cannot hit this condition and dereferencing the pointer
without first checking it will make the compiler emit a warning about a
"potential null pointer dereference" which is hard to work around. This
macro "washes" the pointer and prevents the compiler from emitting tests
branching to undefined instructions. It may only be used when the developer
is absolutely certain that the conditions are guaranteed and that the
pointer passed in argument cannot be NULL by design.
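In practice the macro boils down to an empty asm statement that makes the
compiler forget everything it knew about the pointer's origin (sketched from
memory; treat the exact constraint string as an assumption):

    /* makes the compiler lose track of p's provenance so that it stops
     * assuming p may be NULL on paths where we know it cannot be */
    #define ALREADY_CHECKED(p) do { asm("" : "=rm"(p) : "0"(p)); } while (0)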
A typical use case is a top-level function doing this :
    if (frame->type == HEADERS)
        parse_frame(frame);
Then parse_frame() does this :
    void parse_frame(struct frame *frame)
    {
        const char *frame_hdr;

        frame_hdr = frame_hdr_start(frame);
        if (*frame_hdr == FRAME_HDR_BEGIN)
            process_frame(frame);
    }
and :
    const char *frame_hdr_start(const struct frame *frame)
    {
        if (frame->type == HEADERS)
            return frame->data;
        else
            return NULL;
    }
Above parse_frame() is only called for frame->type == HEADERS so it will
never get a NULL in return from frame_hdr_start(). Thus it's always safe
to dereference *frame_hdr since the check was already performed above.
It's then safe to address it this way instead of inventing dummy error
code paths that may create real bugs :
    void parse_frame(struct frame *frame)
    {
        const char *frame_hdr;

        frame_hdr = frame_hdr_start(frame);
        ALREADY_CHECKED(frame_hdr);
        if (*frame_hdr == FRAME_HDR_BEGIN)
            process_frame(frame);
    }
When we're using HTX, we don't have to generate chunk header/trailers, and
that ultimately leads to a crash when we try to access a buffer that
contains just chunk trailers.
This should not be backported.