We'll need these functions from other inline functions, so let's make them
accessible. len_to_bytes() was renamed to hpack_len_to_bytes() since it's
now exposed.
We used to have a series of well-known header fields that were looked
up, but most of the static table's entries were not covered. This model
couldn't scale with the addition of the new headers and pseudo-headers
required to process requests, resulting in their encoding being
hard-coded in the caller.
This patch implements a quick lookup which can retrieve any header from
the static table. A binary stream is made of the header names prefixed by
their lengths and indexes. These header names are sorted by length, then
by frequency, then by direction (preference for responses), then by name,
and only the lowest index of each name is stored in case of multiple
entries. A parallel length index table provides the position of the first
header for a given length. This allows the lookup to focus on the first
few values matching the same length.
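To illustrate the principle, here is a hedged sketch of such a lookup;
the table layout, names and length cap are illustrative, not the exact
encoding used by the patch:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct enc_entry {
        uint8_t len;          /* header name length */
        uint8_t idx;          /* lowest HPACK static table index */
        const char *name;     /* header name */
    };

    /* entries sorted by length, terminated by a len==0 sentinel */
    extern const struct enc_entry enc_tab[];
    /* position of the first entry for each possible length */
    extern const uint8_t enc_pos_by_len[32];

    /* returns the HPACK static index of <name>, or 0 if absent
     * (static table indexes start at 1, so 0 is free) */
    static int hpack_enc_lookup(const char *name, size_t len)
    {
        int i;

        if (!len || len >= sizeof(enc_pos_by_len))
            return 0;   /* longer than any static table name */

        for (i = enc_pos_by_len[len]; enc_tab[i].len == len; i++)
            if (memcmp(enc_tab[i].name, name, len) == 0)
                return enc_tab[i].idx;
        return 0;
    }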
Everything was made to limit the cache footprint. Interestingly, the
lookup ends up being slightly faster than the previous one, while
covering all 54 distinct header names instead of only 10.
A test with a curl request and a basic response showed that the request
size has dropped from 85 to 56 bytes and that the response size has
dropped from 197 to 170 bytes, thus we can now shave roughly 25-30 bytes
per message.
For unknown fields, since we know that most of them are less than 127
characters, we don't need to go through the loop and can instead directly
emit the one-byte length encoding. This increases the request rate by
approximately 0.5%.
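As a hedged sketch (per RFC 7541's integer encoding with a 7-bit prefix;
the function name is illustrative and the exact haproxy code may differ),
the fast path boils down to this:

    #include <stddef.h>
    #include <stdint.h>

    /* emits the length prefix of a literal string, returns bytes written */
    static size_t emit_str_len(uint8_t *out, size_t len)
    {
        size_t pos = 0;

        if (len < 127) {          /* common case: one prefix byte */
            out[pos++] = len;
            return pos;
        }
        out[pos++] = 127;         /* saturated prefix, then 7-bit groups */
        len -= 127;
        while (len >= 128) {
            out[pos++] = 0x80 | (len & 0x7f);
            len >>= 7;
        }
        out[pos++] = len;
        return pos;
    }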
memcpy() tends to be overkill for copying short strings, so it's better
to use ist's naive functions for this. This shows a consistent 1.2%
performance gain with h2load.
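For reference, the naive pattern meant here is simply a byte loop; for
strings of a few bytes it avoids memcpy()'s fixed call and setup cost:

    #include <stddef.h>

    /* copies <len> bytes one at a time; cheap for tiny strings */
    static inline void naive_copy(char *dst, const char *src, size_t len)
    {
        while (len--)
            *dst++ = *src++;
    }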
The len-to-bytes conversion can be slightly simplified and optimized
by hardcoding a tree lookup. Just doing this increases the H2 request
rate by 1%. It could be made almost branch-free by using fls(), but
that looks like overkill for most situations since most headers are
very short.
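A hedged sketch of what the hardcoded lookup can look like; the bounds
follow from the 7-bit prefix encoding, and most headers are resolved by
the first test:

    #include <stddef.h>

    /* number of bytes needed to encode <len> with a 7-bit prefix */
    static inline size_t hpack_len_to_bytes(size_t len)
    {
        if (len < 127)
            return 1;                 /* fits in the prefix byte */
        if (len < 127 + (1 << 7))
            return 2;                 /* one continuation byte */
        if (len < 127 + (1 << 14))
            return 3;
        return 4;                     /* enough for any realistic header */
    }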
In hpack_encode_header() there is a length check to verify that a literal
header name fits in the buffer, but there is an off-by-one in this length
check, which forgets the byte required to mark the encoding type (literal
without indexing). It should be harmless though, as it cannot be triggered:
response headers passing through haproxy are limited by the reserve, which
is not the case for the output buffer.
This fix should be backported to 1.8.
Otherwise, after such replacements, the HTX message appears to wrap but
the head block address is not necessarily the first one, so adding new
blocks will overwrite the data of older ones.
If a server sends part of the headers and then closes its connection, the
H1 mux remains blocked in an infinite loop trying to read more data to
finish parsing the message. The flag CS_FL_REOS is set on the conn_stream,
but because there are still data in the input buffer, CS_FL_EOS is never set.
To fix the bug, in h1_process_input, when CS_FL_REOS is set on the conn_stream,
we also set CS_FL_EOS if the input buffer is empty OR if the channel's buffer is
empty.
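A minimal sketch of the fix, assuming haproxy's internal names (h1c->ibuf
being the mux's input buffer and <buf> the channel's buffer passed by the
caller):

    /* in h1_process_input(): report EOS once the peer has closed and
     * one of the two buffers is empty, as described above */
    if ((h1s->cs->flags & CS_FL_REOS) &&
        (!b_data(&h1c->ibuf) || !b_data(buf)))
        h1s->cs->flags |= CS_FL_EOS;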
When a request is fully processed, no more data are parsed until the response is
totally processed and a new transaction starts. But during this time, the mux is
trying to read more data and subscribes to read. If requests are pipelined, we
start to receive the next requests which will stay in the input buffer, leading
to a loop consuming all the CPU. This loop ends when the transaction ends. To
avoid this loop, the flag H1C_F_IN_BUSY has been added. It is set when the
request is fully parsed and unset when the transaction ends. Once set on
the H1C, it blocks reads, so the mux never tries to receive more data in
this state.
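The mechanism, sketched (the flag's bit value is illustrative):

    #define H1C_F_IN_BUSY  0x00000100   /* illustrative bit value */

    /* once the request is fully parsed: */
    h1c->flags |= H1C_F_IN_BUSY;

    /* in the check deciding whether receives are allowed: */
    if (h1c->flags & H1C_F_IN_BUSY)
        return 0;    /* don't read while the response is being processed */

    /* when the transaction ends: */
    h1c->flags &= ~H1C_F_IN_BUSY;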
The condition to process the connection mode on outgoing messages without
a 'Connection' header was wrong. It relied on the wrong H1M state:
H1_MSG_HDR_L2_LWS is only a possible state for messages with at least one
header. Now, to fix the bug, we just check that the H1M state is not
H1_MSG_LAST_LF. This guarantees that the EOH was not processed yet.
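In short, the condition becomes something like this (sketch):

    /* adjust the connection mode only while headers are still being
     * produced, i.e. before the end of headers was processed */
    if (h1m->state != H1_MSG_LAST_LF) {
        /* safe to add or update the 'Connection' header here */
    }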
Jerome reported that outgoing H2 requests failed for methods other than
GET or POST. It turns out that the HPACK encoding is performed by hand in
the outgoing headers encoding function and that the data length was not
incremented to cover the literal method value, resulting in a corrupted
HEADERS frame.
Admittedly this code should move to the generic HPACK code.
No backport is needed.
In connect_server(), don't attempt to reuse the conn_stream associated to
the stream_interface if we already attempted a connection with it.
That conn_stream is only there for the cases where a connection and
a conn_stream were created ahead of time, mostly by http_proxy or by the
LUA code. If we already attempted to connect, that means we failed, and
so we should create a new connection.
No backport needed.
When the connection mode can be deduced from the HTTP version, we remove the
redundant connection header. So the "keep-alive" connection header is removed
from HTTP/1.1 messages and the "close" connection header is removed from
HTTP/1.0 messages.
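A sketch of the rule using haproxy's ist API (<v> being the Connection
header's value and H1_MF_VER_11 the parser flag for HTTP/1.1 messages):

    int redundant;

    if (h1m->flags & H1_MF_VER_11)
        redundant = isteqi(v, ist("keep-alive"));  /* HTTP/1.1 default */
    else
        redundant = isteqi(v, ist("close"));       /* HTTP/1.0 default */

    if (redundant)
        ;  /* skip the header instead of emitting it */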
Gcc 7 warns about a potential null pointer deref that cannot happen
since the start line block is guaranteed to be present in the functions
where it's dereferenced. Let's mark it as already checked.
When we're using HTX, we don't have to generate chunk header/trailers, and
that ultimately leads to a crash when we try to access a buffer that
contains just chunk trailers.
This should not be backported.
A typo in the block type check makes this function fail all the time,
which has impact on anything rewriting a start line (set-uri, set-path
etc).
No backport needed.
This adds the sample fetch bc_http_major. It returns the backend connection's HTTP
version encoding, which may be 1 for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2.0. It is
based on the on-wire encoding, and not the version present in the request header.
In smp_dup(), don't consider a SMP_T_METH with an unknown method the same as
SMP_T_STR. The string and string length aren't stored at the same place.
This should be backported to 1.8.
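The distinction, sketched against haproxy's sample API (union member
names as found in 1.8):

    /* a method sample only carries a string when the method is not one
     * of the well-known enum values */
    if (smp->data.type == SMP_T_METH &&
        smp->data.u.meth.meth == HTTP_METH_OTHER) {
        /* the string lives in smp->data.u.meth.str ... */
    } else if (smp->data.type == SMP_T_STR) {
        /* ... not in smp->data.u.str as for a plain SMP_T_STR */
    }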
The flag CS_FL_EOS can be set while no data was received, so the flag
CS_FL_RCV_MORE is not set. In this case, the read0 was never processed by the
stream interface. To be sure to process it, the test on CS_FL_RCV_MORE has been
moved after the one on CS_FL_EOS.
Now that we know that htx only contains lower case header names, there
is no need anymore for looking them up in a case-insensitive manner.
Note that http_find_header() still does it because the header names to
compare against may come from anywhere there.
Since HTX stores header names in lower case already, we don't need to
do it again anymore. This increased H2 performance by 2.7% on quick
tests, now making H2 over HTX about 5.5% faster than H2 over H1.
HTTP/2 and above require header names to be lower cased, while HTTP/1
doesn't care. By making lower case the standard way to store header
names in HTX, we can significantly simplify all operations applying to
header names retrieved from HTX (including, but not limited to, lookups
and lower case checks which are not needed anymore).
As a side effect of replacing memcpy() with ist2bin_lc(), a small increase
of the request rate performance of about 0.5-1% was noticed on keep-alive
traffic, very likely due to memcpy() being overkill for tiny strings.
This trivial patch was marked medium because it may have a visible end-user
impact (e.g. a non-HTTP-compliant agent, etc).
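For reference, a copy-and-lower-case pass boils down to the following
naive equivalent of ist2bin_lc() (struct ist being haproxy's
pointer-plus-length string type):

    #include <stddef.h>

    struct ist { char *ptr; size_t len; };   /* as in haproxy's ist.h */

    /* copy <src> to <dst>, lower-casing in the same single pass */
    static inline void copy_lc(char *dst, const struct ist src)
    {
        size_t i;

        for (i = 0; i < src.len; i++) {
            char c = src.ptr[i];
            dst[i] = (c >= 'A' && c <= 'Z') ? c - 'A' + 'a' : c;
        }
    }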
In commit 6a2d33481 ("BUG/MEDIUM: h1: Set CS_FL_REOS if we had a read0."),
we set the flag CS_FL_REOS on the conn_stream when a read0 is detected. But we
must be sure to have a conn_stream first.
In h1_recv(), if we get a read0, let the conn_stream know by setting the
CS_FL_REOS flag, or it may never be aware we did hit EOS.
This should not be backported.
In h1_process(), don't release the connection if it is an outgoing connection
and we don't have an h1s associated: if that is the case, the connection is
probably just sitting in a pool.
CS_FL_RCV_MORE is used in two cases: to let the conn_stream
know there may be more data available, and to let it know that
it needs more room. We can't easily differentiate between the
two, and that may lead to hangs, so split it into two flags:
CS_FL_RCV_MORE, which means there may be more data, and
CS_FL_WANT_ROOM, which means we need more room.
This should not be backported.
Commit 27f3fa5 ("BUG/MEDIUM: mworker: stop every tasks in the master")
used MAX_THREADS as a mask instead of MAX_THREADS_MASK to clean the
global run queue, and used rq_next (global variable) instead of next_rq.
Renamed next_rq as tmp_rq and next_wq as tmp_wq to avoid confusion.
No backport needed.
We used to wait for the other side to be connected, but the blocking
flags were inaccurate. It used to work fine almost by accident before
the stream interface changes. Now we use the new RXBLK_CONN flag to
explicitly subscribe to this event.
Thanks to Adis for reporting the issue, PiBaNL for the test case,
and Olivier for the diagnostic.
No backport is needed.
In connect_server(), if we already have a conn_stream, reuse it
instead of trying to create a new one. http_proxy and LUA both
manually create a conn_stream and a connection, and we want
to use it.
The offset was always wrong after an HTX defragmentation because the wrong
address was used and because the update could occur several times on the same
defragmentation.
The master is not supposed to run (at the moment) any task before the
polling loop, the created tasks should be run only in the workers but in
the master they should be disabled or removed.
No backport needed.
This avoids subscribing to send events just because some data may remain in
the output buffer. If the output is closed or if an error occurred, there is
no way to send these data anyway, so it is safe to drain them.
Due to a thinko, I used sl_off as the start line's index number, but it is
in fact its offset. The first index is obtained using htx_get_head(),
and the start line is obtained using htx_get_sline(). This caused crashes
to happen when forwarding HTX traffic via the H2 mux once the HTX buffer
started to wrap.
No backport is needed.
In h1_process_output(), instead of waiting to have enough data to consume
a full block at once, we are now able to consume these blocks partially.
To ease the fast forwarding and the infinite forwarding on HTX proxies, 2
functions have been added to let the channel be almost aware of the way data are
stored in its buffer. By calling these functions instead of legacy ones, we are
sure to forward the right amount of data.
Now, the function htx_from_buf() will set the buffer's length to its size
automatically. In return, the caller should call htx_to_buf() at the end to be
sure to leave the buffer hosting the HTX message in the right state.
Otherwise, the caller can use the function htxbuf() to get the HTX message
without any update of the underlying buffer.
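The resulting usage pattern looks like this (sketch, <buf> being the
buffer hosting the HTX message):

    /* read-write access: htx_from_buf() sets the buffer's length to its
     * size; htx_to_buf() must be called once done to resync the buffer */
    struct htx *htx = htx_from_buf(buf);
    /* ... modify the HTX message ... */
    htx_to_buf(htx, buf);

    /* read-only access: htxbuf() doesn't touch the underlying buffer */
    htx = htxbuf(buf);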
On the server side, we must test the request headers to deduce whether or
not we are able to do keep-alive. Otherwise, by default, keep-alive will be
enabled on the server's connection, whatever the client said.
After 8706c8131 ("BUG/MEDIUM: mux_pt: Always set CS_FL_RCV_MORE."), a
side effect caused failed receives to mark the buffer as missing room,
a flag that no other place can remove since the buffer is empty. Ideally
we need a separate flag to mean "failed to deliver data by lack of room",
but in the meantime, at the very least, we must not mark an empty buffer
as blocked.
No backport is needed.
In order to properly deal with unaligned contents, the output data are
currently copied into a temporary buffer, to be copied into the mux's
output buffer at the end. The new buffer API allows several buffers to
share the same data area, so we're using this here to make the temporary
buffer point to the same area as the output buffer when that one is
empty. This is enough to avoid the copy at the end, only pointers and
lengths have to be adjusted. In addition the output buffer's head is
advanced by the HTX header size so that the remaining copy is aligned.
By doing this we improve the large object performance by an extra 10%,
which is 64% above the 1.9-dev9 state. It's worth noting that there are
no more calls to __memcpy_sse2_unaligned() now.
Since this code deals with various block types, it appears difficult to
adjust it to be smart enough to even avoid the first copy. However a
distinct approach could consist in trying to detect an HTX message made
of a single block and jump to dedicated code in this case.
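A hedged sketch of the aliasing trick using the 1.9 buffer API (<mbuf>
and <hdr_size> are illustrative placeholders; b_make() builds a struct
buffer over an existing area):

    struct buffer tmp;

    if (!b_data(mbuf)) {
        /* empty output buffer: alias its area, advancing the head by
         * <hdr_size> so the remaining copy is aligned */
        tmp = b_make(b_orig(mbuf), b_size(mbuf), hdr_size, 0);
    } else {
        /* fallback: use a real temporary area, copy at the end */
        tmp = b_make(trash.area, trash.size, 0, 0);
    }
    /* ... build the output into <tmp>; in the aliased case, committing
     * only requires adjusting <mbuf>'s pointers and lengths ... */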
When transferring large objects, most calls are made between a full
buffer and an empty buffer. In this case there is a large opportunity
for performing zero-copy calls, with a few exceptions : the input data
must fit into the output buffer, and the data need to be properly
aligned and formatted to let the HTX header fit before and the HTX
block(s) fit after.
This patch does two things :
1) it makes sure that we prepare an empty input buffer before a recv()
call so that it appears as holding an HTX block at the front, which is
removed afterwards. This way the data received using recv() are placed
exactly at the target position in the input buffer for a later cast to
HTX.
2) when receiving data in h1_process_data(), if it appears that the input
buffer can be cast to an HTX buffer and the target buffer is empty,
then the buffers are swapped, an HTX block is prepended in front of the
data area, and the HTX block is appended to reference this data block
(see the sketch after this description).
In practice, this ensures that in most cases when transferring large files,
calls to h1_rcv_buf() are made using zero copy and a little bit of buffer
preparation (~40 bytes to be written).
Doing this adds an extra 13% performance boost on top of previous patch,
resulting in a total of 50% speed up on large transfers.
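A hedged sketch of the swap in point (2) above (names are illustrative;
the hypothetical check stands for "the input buffer can be cast to an
HTX buffer"):

    /* target buffer empty and input already laid out as HTX: swap the
     * two struct buffers instead of copying the payload */
    if (!b_data(buf) && input_is_htx_aligned(&h1c->ibuf)) {
        struct buffer tmp = *buf;

        *buf = h1c->ibuf;       /* plain struct swap: zero copy */
        h1c->ibuf = tmp;
        /* then prepend the HTX block in front of the data area and
         * append the HTX block referencing this data, as described */
    }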
Just by using this buffer room estimation for the demux buffer, the large
object performance has increased by up to 33%. This is mostly due to fewer
recv() calls and fewer unaligned copies.
When using the mux_pt, as we can't know if there's more data to be read,
always set CS_FL_RCV_MORE, and only remove it if we got an error or a shutr
and rcv_buf() returned 0.
If the ibuf only contains a small amount of data, realign it
before calling rcv_buf(), as it's probably going to be cheaper
to do so than to do 2 calls to recv().
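A sketch of that heuristic with the 1.9 buffer API (the 128-byte
threshold is illustrative):

    /* few bytes pending and not starting at the buffer's origin:
     * realign so the next recv() gets one large contiguous area */
    if (b_data(&ibuf) && b_data(&ibuf) < 128 && b_head_ofs(&ibuf))
        b_slow_realign(&ibuf, trash.area, 0);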