For chunked messages, during the output process, the mux is now able to write
the last empty chunk and the empty trailers when the corresponding blocks have
not been found in the HTX message. This is handy for filters changing a
non-chunked message into a chunked one (like the compression filter).
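For illustration, here is a minimal sketch of emitting the final chunk and the
empty trailers when the HTX message contains no trailer blocks (this is not
the actual mux_h1 code; it only relies on the generic buffer helpers b_room()
and b_putblk()):

    /* Hedged sketch: once the whole payload has been emitted in chunked
     * encoding but no trailers/EOM blocks were found in the HTX message,
     * the body must still be terminated by a last empty chunk followed by
     * an empty trailers section.
     */
    static int emit_last_chunk(struct buffer *obuf)
    {
        /* "0\r\n" is the last (empty) chunk, the trailing "\r\n" ends the
         * (empty) trailers section.
         */
        if (b_room(obuf) < 5)
            return 0; /* not enough room, retry later */
        b_putblk(obuf, "0\r\n\r\n", 5);
        return 1;
    }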
In h1_set_srv_conn_mode(), we need to get the frontend proxy of a server
connection. Until now, we relied on the stream to get it, but that was a bit
dirty. The stream always exists at this stage, but to reach it we also need to
go through the stream-interface. Since commit 7c6f8b146 ("MAJOR: connections:
Detach connections from streams."), the connection's owner is always the
session, even for outgoing connections. So now we rely on the session to get
the frontend proxy in h1_set_srv_conn_mode().
Use the session instead of the stream to get
the frontend on the server connection
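As an illustration, the frontend can now be derived from the connection's
owner session roughly as follows (a simplified sketch, not the exact patch):

    /* Hedged sketch: since the connection's owner is always the session,
     * the frontend proxy of a server connection can be retrieved without
     * walking conn -> conn_stream -> stream-interface -> stream.
     */
    static const struct proxy *get_frontend_from_conn(const struct connection *conn)
    {
        const struct session *sess = conn->owner; /* the owner is the session */

        return sess ? sess->fe : NULL;            /* sess->fe is the frontend */
    }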
When the client connection is accepted, the info of the client conn_stream is
filled with the session info (accept_date, tv_accept and t_handshake). For all
other conn_streams, on both the client and server sides, this info is filled
using global values (date and now).
From time to time, the need arises to get, from the upper layer, some info
owned by the multiplexer about a connection stream. Today we really need to
get some dates and durations specific to the conn_stream. This is only true
for the H1 and H2 muxes. Otherwise it would be impossible to report correct
times in the logs.
To do so, the structure cs_info has been defined to provide all the info we
ever need on a conn_stream from the upper layer. Of course, this is only a
first step, so this structure will certainly evolve. But for now, only the
bare minimum is referenced. On the mux side, the callback get_cs_info() has
been added to the structure mux_ops. Multiplexers can now implement it, if
necessary, to return a pointer to a cs_info structure. And finally, the
function si_get_cs_info() should be used from the upper layer. If the stream
interface is not attached to a connection stream, this function returns NULL,
likewise if the callback get_cs_info() is not defined for the corresponding
mux.
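For reference, here is a rough sketch of how such an API can look. The field
names below are illustrative (only the dates/durations mentioned above are
certain); the exact layout in the source tree may differ:

    /* Hedged sketch of the cs_info structure exposed by the muxes. */
    struct cs_info {
        struct timeval create_date;  /* wall-clock date of the conn_stream creation */
        struct timeval tv_create;    /* internal timestamp of the creation */
        long t_handshake;            /* handshake duration, -1 if not applicable */
        long t_idle;                 /* idle time before the conn_stream was used */
    };

    /* New optional callback in struct mux_ops:
     *     const struct cs_info *(*get_cs_info)(struct conn_stream *cs);
     *
     * Hypothetical example of use from the upper layer when filling log fields:
     */
    void fill_accept_date(struct stream_interface *si, struct timeval *accept_date)
    {
        const struct cs_info *csi = si_get_cs_info(si);

        if (csi)
            *accept_date = csi->create_date;  /* mux-provided info */
        else
            *accept_date = date;              /* fall back to the global date */
    }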
htx_cut_data_blk() is used to cut the beginning of a DATA block after a
part of it was transferred. It simply advances the address, reduces the
advertised length and updates the htx's total data count.
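Conceptually, the operation boils down to the following sketch (simplified;
for DATA blocks the size is assumed to be encoded directly in the low bits of
the block's info field):

    /* Hedged sketch: drop the first <n> bytes of a DATA block that were
     * already transferred, without moving any data around.
     */
    static void cut_data_blk(struct htx *htx, struct htx_blk *blk, uint32_t n)
    {
        blk->addr += n;   /* the payload now starts <n> bytes further */
        blk->info -= n;   /* for DATA blocks the size lives in <info> */
        htx->data -= n;   /* the HTX total data count shrinks as well */
    }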
It looks like we forgot to report HTX when listing the muxes and their
respective protocols, leading to "NONE" being displayed. Let's report
"HTX" and "HTTP|HTX" since both will exist. Also fix a minor typo in
the output message.
In http_wait_for_response(), we wait until all outgoing data have really been
sent (from the channel's point of view) before starting to process the
response. In fact, this is used to send all intermediate 1xx responses. For
now, the HTX API is not really handy when multiple messages are stored in the
HTX structure.
Because multiple HTTP messages can be stored in an HTX structure, it is
important not to forget to reset the H1 parser at the beginning of each
one. With the current version, this case only happens on the response side,
when multiple HTTP 1xx responses are forwarded to the client (for instance
103-Early-Hints). So strictly speaking, it is the same message. But for now,
internally, each one is a standalone message. Note that this might change in a
future version of the HTX.
In h1_process_output(), before formatting the headers, we need to find and
check the "Connection: " header to update the connection mode. But the context
used to do so was not correctly initialized. We must explicitly set ctx.value
to NULL to be sure to rescan the current header.
The function http_show_error_snapshot() must not use the trash buffer to
append the HTTP error description. Instead, it must use the <out> buffer, its
first argument. Note that, concretely, this function always succeeds because
<out> is always the trash buffer.
When a section's parser is registered, it can also define a post-section
callback, called at the end of the section parsing. But when two sections with
the same name follow each other, the transition between them was missed. This
induced two bugs. First, the call to the post-section callback was skipped.
Then, the parsing of the second section was mixed with the first one.
This patch must be backported to 1.8.
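For context, a section parser and its post-section callback are registered
roughly like this (a hedged sketch; exact prototypes may vary between
versions, and "mysection" is a made-up section name):

    /* Hedged sketch: the post-section callback must run at the end of each
     * section, including when a section is immediately followed by another
     * section carrying the same name.
     */
    static int parse_mysection(const char *file, int linenum, char **args, int kwm)
    {
        /* parse one keyword line of the "mysection" section */
        return 0;
    }

    static int post_mysection(void)
    {
        /* finalize the section which was just parsed */
        return 0;
    }

    static void register_mysection(void)
    {
        cfg_register_section("mysection", parse_mysection, post_mysection);
    }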
http_proxy is special, because it creates its connection and conn_stream
earlier. So in assign_server(), check that the connection associated with
the conn_stream has a destination address set, and in connect_server(),
use the connection and the conn_stream already attached to the
stream_interface, instead of looking for a connection in the session, and
creating a new conn_stream.
Instead of parsing all the connections owned by the session each time we
choose a server, even when prefer-last-server is not set, only do it when
prefer-last-server is used, and check that the server is usable before
checking its connections.
When a transaction ends, if we want to do keepalive, and the connection we
used didn't have an owner, attach the connection to the session, so that we
don't have to destroy it, and we can reuse it later.
Instead of just storing the last connection in the session, store all of
the connections, for at most MAX_SRV_LIST (currently 5) targets.
That way we can do keepalive on more than 1 outgoing connection when the
client uses HTTP/2.
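In rough terms, the session keeps a small list of per-target entries, each
holding the connections that may be reused for that target. A hedged sketch of
the idea (names approximate):

    /* One entry per server (target) the session talked to, up to
     * MAX_SRV_LIST targets, each entry carrying its reusable connections.
     */
    #define MAX_SRV_LIST 5

    struct sess_srv_list {
        void *target;            /* the server this entry refers to */
        struct list conn_list;   /* connections usable for this target */
        struct list srv_list;    /* linkage in the session's server list */
    };

    /* In struct session:
     *     struct list srv_list;   head of the sess_srv_list entries
     */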
When creating a new outgoing H2 connection, put it in the idle list so that
it's immediately available for others to use, if http-reuse always is used.
While it is true the SSL code will do the right thing if the SSL handshake
is not done, we have other types of handshake to deal with (proxy protocol,
netscaler, ...). For those we definitely don't want to try to send data
before the handshake is done. All handshakes but SSL will go through the
mux_pt, so in mux_pt_snd_buf(), don't try to send while a handshake is
pending.
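The check itself can be as simple as the following sketch, where
CO_FL_HANDSHAKE is the mask of the pending-handshake connection flags (not the
literal mux_pt code, but close in spirit):

    /* Hedged sketch of the early return in mux_pt_snd_buf(): while any
     * handshake (proxy protocol, netscaler, SSL, ...) is still pending on
     * the connection, report that nothing could be sent so the caller
     * subscribes and retries later.
     */
    static size_t mux_pt_snd_buf(struct conn_stream *cs, struct buffer *buf,
                                 size_t count, int flags)
    {
        if (cs->conn->flags & CO_FL_HANDSHAKE)
            return 0;

        /* otherwise hand the data over to the transport layer */
        return cs->conn->xprt->snd_buf(cs->conn, buf, count, flags);
    }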
In session_free(), make sure the outgoing connection is no longer in the idle
list and no longer has an owner, so that it will properly be destroyed and
nobody will be able to access it.
We can reach sess_update_st_con_tcp() while we still have a connection
attached, so take that into account, and free the connection, instead of
assuming it's always a conn_stream.
When freeing the session, we may fail to free the outgoing connection because
it still has streams attached. So remove ourselves from the session list, so
that the connection doesn't try to access it later.
Use the more portable #!/bin/sh shebang.
Support filenames with spaces.
Set the HAPROXY_PROGRAM environment variable value to ${PWD}/haproxy.
exit(1) if we could not create the top-level temporary working directory
or create its sub-directory with the mktemp utility.
As required by POSIX, use six characters for the mktemp template.
In h2_recv(), return 1 if there's an error on the connection, not just if
there's a read0 pending, so that h2_process() can be called and act as a
janitor.
In si_cs_recv(), when there's an error on the connection or the conn_stream,
don't give up if CS_FL_RCV_MORE is set on the conn_stream, as it means there's
still data available.
In si_cs_send(), don't give up and subscribe if the connection is still
waiting for an SSL handshake. We would never be woken up once the handshake is
done if we're using HTTP/2. Instead, directly try to send data. When using
the mux_pt, if the handshake is not done yet, snd_buf() will return 0 and
we will subscribe anyway.
When we're deferring the mux choice until the ALPN is negotiated, we
attach the connection to the stream_interface until it's done, so that we
can destroy it if something goes wrong and the stream is destroyed.
Before calling si_attach_cs() to attach the conn_stream once we have it,
call si_detach_endpoint(), or si_attach_cs() would destroy the connection.
When we defer the mux choice until the ALPN is negotiated, don't forget
to wake the stream once it's done, or it will never have the opportunity
to send data.
In ssl_sock_parse_clienthello(), the code considers that the SSL session id
size is '1', and then considers that the SSL cipher suites are available
right after the session id size information.
This actually works in a single case, when the client does not send a
session id.
This patch fixes this issue by introducing a proper way to parse the
session id and move the cursor forward by the session id length when
required.
This needs to be backported to 1.8.
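For reference, the relevant part of a TLS ClientHello is laid out as
<session_id_len: 1 byte><session_id: session_id_len bytes>
<cipher_suites_len: 2 bytes><cipher_suites>, so the cursor must be advanced by
the announced session id length before the cipher suites can be read. A
minimal, generic sketch (not the exact haproxy code):

    #include <stdint.h>

    /* Hedged sketch: <msg> points right after the 32-byte client random,
     * <end> is the end of the handshake record.
     */
    static int parse_session_id_and_ciphers(const uint8_t *msg, const uint8_t *end,
                                            uint16_t *cipher_suites_len)
    {
        uint8_t sess_id_len;

        if (msg + 1 > end)
            return -1;
        sess_id_len = *msg;          /* 0..32 per the TLS specification */
        msg += 1 + sess_id_len;      /* skip the length byte and the id itself */

        if (msg + 2 > end)
            return -1;
        *cipher_suites_len = (msg[0] << 8) | msg[1];  /* big-endian 16-bit length */
        return 0;
    }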
In the mux_pt, when we're attaching a new conn_stream, don't forget to
unsubscribe from the connection. Failure to do so may lead to the mux_pt
freeing the connection while the conn_stream can still want to access it.
The client makes the same HTTP request four times.
The varnishtest HTTP server serves the first client request and quits.
So, the last three requests are handled by the haproxy cache.
Some tests require a minimal haproxy version or compilation options to be
able to run successfully. This script allows 'requirements' to be added to
tests so that they will automatically be skipped if a requirement is not met.
The script supports several parameters to slightly modify its behavior
including the directories to search for tests.
Also, some features are not available on certain OSes; these can also be
'excluded'. This should allow the complete set of test cases to be run on any
OS against any haproxy release without 'expected failures'.
The test .vtc files will need to be modified to include their 'requirements'
by including text options as shown below:
#EXCLUDE_TARGETS=dos,freebsd,windows
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9
When excluding an OS by its TARGET, please add a comment explaining why the
test cannot succeed on that TARGET.
In h2_process_demux(), if we're demuxing multiple frames, and the previous
frame led to a stream getting closed, don't bogusly consider that an error,
and destroy the next stream, as there are valid cases where the stream could
be closed.
The CLOEXEC flag was set using F_SETFL, which can't work.
To set the CLOEXEC flag, F_SETFD should be used; the problem is that this
needs an extra call to fcntl() and it's on the path of every accept.
This flag was only needed in the case of the master, so the patch was
reverted and the flag is now set only in this case.
The bug was introduced by 0b3e849 ("MEDIUM: listeners: set O_CLOEXEC on
the accepted FDs").
No backport needed.
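For reference, F_SETFL manipulates file status flags (O_NONBLOCK, ...), where
O_CLOEXEC is silently ignored, while close-on-exec is a file descriptor flag
which must be set with F_SETFD:

    #include <fcntl.h>

    /* Correct way to mark an already-open file descriptor close-on-exec:
     * FD_CLOEXEC is a file descriptor flag (F_GETFD/F_SETFD), not a file
     * status flag, so F_SETFL cannot set it.
     */
    static int set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);

        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }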
If the master was reloaded and there was an established connection to a
server, the FD resulting from the accept was leaking.
There was no CLOEXEC flag set on the FD of the socketpair created during
a connect call. This is specific to the socketpair in the master process
but it should be applied to every protocol in case we use them in the
master at some point.
No backport needed.
The previous fix da95fd90 ("BUILD/MINOR: ssl: fix build with non-alpn/
non-npn libssl") does fix the build with old OpenSSL releases, but I
overlooked that the ctx is only freed when NPN is supported.
Fix this by moving the #endif to the proper place (this was broken in
c7566001 ("MINOR: server: Add "alpn" and "npn" keywords")).
Having a thread_local for the pool cache is messy as we need to
initialize all elements upon startup, but we can't until the threads
are created, and once created it's too late. For this reason, the
allocation code used to check for the pool's initialization, and
it was the release code which used to detect the first call and to
initialize the cache on the fly, which is not exactly optimal.
Now that we have initcalls, let's turn this into a per-thread array.
This array is initialized very early in the boot process (STG_PREPARE)
so that pools are always safe to use. This makes it possible to remove
the tests from the alloc/free calls.
Doing just this has removed 2.5 kB of code on all cumulated pool_alloc()
and pool_free() paths.
signal_init(), init_log(), init_stream(), and init_task() all used to
only preset some values and lists. This needs to be done very early to
provide a reliable interface to all other users. The calls used to be
explicit in haproxy.c:init(). Now they're placed in initcalls at the
STG_PREPARE stage. The functions are not exported anymore.
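For illustration, registering such an init function with an initcall looks
roughly like this (a sketch; init_dummy and dummy_list are made-up names):

    /* Hedged sketch: a function which only presets lists/values can be
     * registered to run automatically at the STG_PREPARE stage instead of
     * being called explicitly from haproxy.c:init().
     */
    static struct list dummy_list;      /* example of state to preset early */

    static void init_dummy(void)
    {
        LIST_INIT(&dummy_list);
    }

    INITCALL0(STG_PREPARE, init_dummy);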
Instead of exporting a number of pools and having to manually delete
them in deinit() or to have dedicated destructors to remove them, let's
simply kill all pools on deinit().
For this a new function pool_destroy_all() was introduced. As its name
implies, it destroys and frees all pools (provided they don't have any
user anymore of course).
This made it possible to remove 4 implicit destructors, 2 explicit ones,
and 11 individual calls to pool_destroy(). In addition it properly removes
the mux_pt_ctx pool which was not cleared on exit (no backport needed
here since it's 1.9 only). The sig_handler pool doesn't need to be
exported anymore and is now static.
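Conceptually, pool_destroy_all() just walks the global pools list and
destroys each registered pool, along the lines of this simplified sketch:

    /* Hedged sketch: destroy every registered pool at deinit time instead
     * of requiring each subsystem to export and destroy its own pools.
     */
    void pool_destroy_all(void)
    {
        struct pool_head *entry, *back;

        list_for_each_entry_safe(entry, back, &pools, list)
            pool_destroy(entry);
    }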