Now, only one capture is mandatory in the path-info regex: the one matching the
script-name. The path-info capture is optional. Of course, it must be defined to
fill the PATH_INFO parameter, but it is not mandatory. This way, it is possible
to get the script-name part from the path while excluding the path-info.
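For example, a minimal fcgi-app sketch for a typical PHP setup (the regex below
is only illustrative, not taken from this patch):

  fcgi-app php-fpm
      docroot /var/www/html
      # two captures: SCRIPT_NAME from the first, PATH_INFO from the second
      path-info ^(/.+\.php)(/.*)?$
      # with this patch, the path-info capture may simply be dropped when
      # PATH_INFO is not needed:
      # path-info ^(/.+\.php)$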
This patch is small enough to be backported to 2.1.
If a regex to match the PATH_INFO parameter is configured, it systematically
fails if a newline or a null character is present in the URL-decoded path. So,
as soon as there is at least a "%0a" or a "%00" in the request path, we always
fail to get the PATH_INFO parameter and the whole decoded path is used for the
SCRIPT_NAME parameter.
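For example (with an illustrative regex, not taken from this patch), with a
regex like "^(/.+\.php)(/.*)?$", a request to "/index.php/foo%0abar" decodes to
"/index.php/foo\nbar": the regex never matches, so SCRIPT_NAME receives the
whole decoded path and PATH_INFO stays empty.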
This is probably not the expected behavior. Because these characters are, most
of the time, not expected at all in a path, an error is now triggered when one
of them is found in the URL-decoded path, before trying to execute the
path_info regex. However, this test is not performed if no regex is configured.
Note that in reality, the newline character is only a problem when HAProxy is
compiled with the pcre or pcre2 library, and conversely the null character is
only a problem for the libc's regex library. But both are always excluded to
avoid any inconsistency depending on compile options.
An alternative, not implemented yet, would be to replace these characters with
another one. If someone complains about this behavior, it will be re-evaluated.
This patch must be backported to all versions supporting the FastCGI
applications, so to 2.1 for now.
When calling the mux "destroy" method, the argument should be the mux
context, not the connection. In a few instances in the mux code, the
connection was used (mainly when the session wouldn't handle the idle
connection and the server pool was full), and that could lead to random
segfaults.
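A minimal C sketch of the difference (simplified, using the generic mux API):

  /* buggy form: the connection is passed where the mux expects its own context */
  conn->mux->destroy(conn);

  /* correct form: the "destroy" method takes the mux context */
  conn->mux->destroy(conn->ctx);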
This should be backported to 2.1, 2.0, and 1.9
we cannot return right after socket opening as we need to move back to
the default namespace first
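a rough C sketch of the intended flow (function and variable names here are
assumptions modeled on the namespace code, not a verbatim copy):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <unistd.h>
  #include <sys/socket.h>

  /* ns_fd: target namespace, default_ns_fd: namespace to restore on return */
  static int socket_in_ns(int ns_fd, int default_ns_fd, int domain, int type, int proto)
  {
      int sock;

      if (setns(ns_fd, CLONE_NEWNET) == -1)
          return -1;

      sock = socket(domain, type, proto);

      /* even if socket() failed, move back to the default namespace
       * before returning
       */
      if (setns(default_ns_fd, CLONE_NEWNET) == -1) {
          if (sock >= 0)
              close(sock);
          return -1;
      }
      return sock;
  }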
this should fix github issue #500
this might be backported to all versions >= 1.6
Fixes: b3e54fe387 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
I managed to mess up the file's permissions while using a temporary
one during the last release, and to backport the non-exec version everywhere.
This can be backported as far as 1.7 now.
when `getsockopt` previously failed, we were trying to set defaultmss
with a -2 value.
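a simplified C fragment of the idea (illustrative only; the real code sits in
the TCP bind path):

  int defaultmss;
  socklen_t len = sizeof(defaultmss);

  /* only reuse the retrieved value if getsockopt() actually succeeded */
  if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, &len) == 0 &&
      defaultmss > 0)
      setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss));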
this is a follow-up to github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
As haproxy won't build on AIX 7.2 using the old "aix52" TARGET, a new
TARGET was introduced which adds two special CFLAGS to prevent the
loading of AIX's xmem.h and var.h. This is done by defining the
corresponding include guards _H_XMEM and _H_VAR. Without excluding
those header files the build fails because of redefinition errors:
1)
CC src/mux_fcgi.o
In file included from /usr/include/sys/uio.h:90,
from /opt/freeware/lib/gcc/powerpc-ibm-aix7.1.0.0/8.3.0/include-fixed/sys/socket.h:104,
from include/common/compat.h:32,
from include/common/cfgparse.h:25,
from src/mux_fcgi.c:13:
src/mux_fcgi.c:204:13: error: expected ':', ',', ';', '}' or '__attribute__' before '.' token
struct ist rem_addr;
^~~~~~~~
2)
CC src/cfgparse-listen.o
In file included from include/types/arg.h:31,
from include/types/acl.h:29,
from include/types/proxy.h:41,
from include/proto/log.h:34,
from include/common/cfgparse.h:30,
from src/mux_h2.c:13:
include/types/vars.h:30:8: error: redefinition of 'struct var'
struct var {
^~~
Furthermore, to enable multithreading via USE_THREAD, the atomic
library was added to the LDFLAGS. Finally, two new CPUs were added
to simplify the usage of power8 and power9 optimizations.
This TARGET was only tested on GCC 8.3 and may or may not work on
IBM's native C-compiler (XLC).
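For reference, a build invocation might look like this (the TARGET and CPU
names are assumptions following the Makefile conventions introduced here):

  make TARGET=aix72-gcc CPU=power8 CC=gcc USE_THREAD=1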
Should be backported to 2.1.
we were trying to close the file descriptor even when the `socket` call was
failing.
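a hedged sketch of the corrected error path (do_bind_steps() is a hypothetical
placeholder for the rest of the binding sequence):

  fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
  if (fd < 0)
      return -1;        /* nothing was opened, nothing to close */

  if (do_bind_steps(fd) < 0) {
      close(fd);        /* only close what was actually opened */
      return -1;
  }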
this should fix github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When initializing a listener, let's make sure the bind_thread mask is
always limited to all_threads_mask when inserting the FD. This will
avoid seeing listening FDs with bits corresponding to threads that are
not active (e.g. when using "bind ... process 1/even"). The side effect
is very limited: all that was identified is that atomic operations are
used in fd_update_events() when not necessary. It's more a matter of
long-term correctness in practice.
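A one-line sketch of the idea (argument and field names are assumptions; the
real call sits in the listener initialization path):

  /* never let the FD be owned by threads that do not exist */
  fd_insert(fd, listener, listener_iocb, bind_conf->bind_thread & all_threads_mask);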
This fix might be backported as far as 1.8 (then proto_sockpair must
be dropped).
In bug #495 we found that it is possible to resume a listener on an
inexistent thread. This happens when a bind's thread_mask contains bits
out of the active threads mask, such as when using "1/odd" or "1/even".
The thread_mask was used as-is to pick a thread number to re-enable the
listener, and given that the highest number is used, 1/odd or 1/even can
produce quite high thread numbers and crash the process by queuing some
entries into non-existent lists.
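For instance, a configuration sketch that can expose the problem when only one
thread is active (illustrative):

  global
      nbthread 1

  frontend fe
      bind :8080 process 1/even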
This bug results from an incomplete fix in commit 413e926ba ("BUG/MAJOR:
listener: fix thread safety in resume_listener()"), though it will only trigger
if some bind lines are explicitly bound to thread numbers higher than the
thread count. The fix must be backported to all branches having the fix
above (as far as 1.8, though the code is different there, see the commit
message in 1.8 for changes).
There are a few other places where bind_thread is used without
enforcing all_threads_mask, namely when doing fd_insert() while creating
listeners. It seems harmless but would probably deserve another fix.
While looking for other occurrences of do { continue; } while (0) I
found these few leftovers in mini-clist where an outer loop was made
around "do { } while (0)" then another loop was placed inside just to
handle the continue. Let's clean this up by just removing the outer
one. Most of the patch is only the inner part of the loop that is
reindented. It was verified that the resulting code is the same.
Issue #490 reports that there are a few bogus constructs of the famous
"do { if (cond) continue; } while (0)" in the connection code, that are
used to retry on I/O failures caused by receipt of a signal. Let's turn
them into the more correct "while (1) { if (cond) continue; break; }"
instead. This may or may not be backported, it shouldn't have any
visible effect.
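To make the difference concrete, a small C illustration of both forms (not
copied from the patch):

  /* bogus: "continue" jumps to the while(0) test, so no retry ever happens */
  do {
      ret = send(fd, buf, len, MSG_DONTWAIT);
      if (ret < 0 && errno == EINTR)
          continue;
  } while (0);

  /* correct: really retries until the call is no longer interrupted */
  while (1) {
      ret = send(fd, buf, len, MSG_DONTWAIT);
      if (ret < 0 && errno == EINTR)
          continue;
      break;
  }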
"snap" images are updated frequently, while regular images are updated quarterly.
so, mixing "snap" and regular images lead to package naming mismatch, which will occur every
quarter. we cannot use 11.3 release image, it is broken, so we have to use 11.3 "snap" image.
Thus let us use all "snap" images. 13-snap is first introduced with this commit.
We do have some checks for the UNIX socket path length to validate the
full pathname of a unix socket, but the pathname extension is only taken
into account when using a bind_prefix. The second check only matches
against MAXPATHLEN. So this means that path names between 98 and 108
characters might successfully parse but fail to bind. Let's adjust the
check in the address parser and refine the error checking at the bind() step.
This addresses bug #493.
In commit 477902b ("MEDIUM: connections: Get ride of the xprt_done
callback.") we added an unconditional call to h2_wake_some_streams()
in h2_wake(), though we must not do it if the connection is destroyed
or we end up with a use-after-free. In this case it's already done in
h2_process() before destroying the connection anyway.
Let's just add this test for now. A cleaner approach might consist in
doing it in the h2_process() function itself when a connection status
change is detected.
No backport is needed, this is purely 2.2.
This patch provides a schematic of the new architecture based on the
struct cert_key_and_chain which appeared with haproxy 2.1.
Could be backported to 2.1
A bug was introduced during TCP rules refactoring by the commit ac98d81f4
("MINOR: http-rule/tcp-rules: Make track-sc* custom actions"). There is no
stream when L4/L5 TCP rules are evaluated. For these rulesets, in track-sc*
actions, we must take care to rely on the session instead of the stream.
Because of this bug, any evaluation of L4/L5 TCP rules using a track-sc* action
leads to a crash of HAProxy.
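A minimal configuration sketch of an affected setup (illustrative only):

  frontend fe
      bind :8080
      stick-table type ip size 100k expire 30s store conn_cur
      # L4 ruleset: evaluated before any stream exists
      tcp-request connection track-sc0 src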
No backport needed, except if the above commit is backported.
The code which is supposed to apply the bind_conf configuration on the
SSL_CTX was not called correctly. Indeed it was called with the previous
SSL_CTX so the new ones were left with default settings. For example the
ciphers were not changed.
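For example (an illustrative bind line), the "ciphers" keyword below is one of
the bind_conf settings that ended up not being applied to the new SSL_CTX:

  bind :443 ssl crt /etc/haproxy/site.pem ciphers ECDHE-ECDSA-AES128-GCM-SHA256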
This patch fixes #429.
Must be backported to 2.1.
This patch fixes memory leaks and a null pointer dereference found by coverity
on the error path when an HTTP return action is parsed. See issue #491.
No need to backport this patch except if the HTTP return action is backported too.
In action_http_set_status(), when a rewrite error occurred, the stream error
flag must be set before returning the error.
No need to backport this patch except if commit 333bf8c33 ("MINOR: http-rules:
Set SF_ERR_PRXCOND termination flag when a header rewrite fails") is
backported. This bug was reported in issue #491.
The condition was inverted. When the branch was the master, it was
harmless because it caused an extra "checkout master", but when it
was not the master, the commit could be applied to the wrong branch
and it could even possibly not match the name to stop on.
I'm fed up with having to scroll my terminals trying to look for the
mail send command printed 30 minutes before the release, let's have
it copied into the e-mail template itself, and replace the old headers
that used to be duplicated there and that are not needed anymore.
Released version 2.2-dev2 with the following main changes :
- BUILD: CI: temporarily mark openssl-1.0.2 as allowed failure
- MEDIUM: cli: Allow multiple filter entries for "show table"
- BUG/MEDIUM: netscaler: Don't forget to allocate storage for conn->src/dst.
- BUG/MINOR: ssl: ssl_sock_load_pem_into_ckch is not consistent
- BUILD: stick-table: fix build errors introduced by last stick-table change
- BUG/MINOR: cli: Missing arg offset for filter data values.
- MEDIUM: streams: Always create a conn_stream in connect_server().
- MEDIUM: connections: Get ride of the xprt_done callback.
- CLEANUP: changelog: remove the duplicate entry for 2.2-dev1
- BUILD: CI: move cygwin builds to Github Actions
- MINOR: cli: Report location of errors or any extra data for "show table"
- BUG/MINOR: ssl/cli: free the previous ckch content once a PEM is loaded
- CLEANUP: backend: remove useless test for inexistent connection
- CLEANUP: backend: shut another false null-deref in back_handle_st_con()
- CLEANUP: stats: shut up a wrong null-deref warning from gcc 9.2
- BUG/MINOR: ssl: increment issuer refcount if in chain
- BUG/MINOR: ssl: memory leak w/ the ocsp_issuer
- BUG/MINOR: ssl: typo in previous patch
- BUG/MEDIUM: connections: Set CO_FL_CONNECTED in conn_complete_session().
- BUG/MINOR: ssl/cli: ocsp_issuer must be set w/ "set ssl cert"
- MEDIUM: connection: remove CO_FL_CONNECTED and only rely on CO_FL_WAIT_*
- BUG/MEDIUM: 0rtt: Only consider the SSL handshake.
- MINOR: stream-int: always report received shutdowns
- MINOR: connection: remove CO_FL_SSL_WAIT_HS from CO_FL_HANDSHAKE
- MEDIUM: connection: use CO_FL_WAIT_XPRT more consistently than L4/L6/HANDSHAKE
- MINOR: connection: remove checks for CO_FL_HANDSHAKE before I/O
- MINOR: connection: do not check for CO_FL_SOCK_RD_SH too early
- MINOR: connection: don't check for CO_FL_SOCK_WR_SH too early in handshakes
- MINOR: raw-sock: always check for CO_FL_SOCK_WR_SH before sending
- MINOR: connection: remove some unneeded checks for CO_FL_SOCK_WR_SH
- BUG/MINOR: stktable: report the current proxy name in error messages
- BUG/MEDIUM: mux-h2: make sure we don't emit TE headers with anything but "trailers"
- MINOR: lua: Add hlua_prepend_path function
- MINOR: lua: Add lua-prepend-path configuration option
- MINOR: lua: Add HLUA_PREPEND_C?PATH build option
- BUILD: cfgparse: silence a bogus gcc warning on 32-bit machines
- BUG/MINOR: http-ana: Increment the backend counters on the backend
- BUG/MINOR: stream: Be sure to have a listener to increment its counters
- BUG/MEDIUM: streams: Move the conn_stream allocation outside #IF USE_OPENSSL.
- REGTESTS: make the set_ssl_cert test require version 2.2
- BUG/MINOR: ssl: Possible memleak when allowing the 0RTT data buffer.
- MINOR: ssl: Remove dead code.
- BUG/MEDIUM: ssl: Don't forget to free ctx->ssl on failure.
- BUG/MEDIUM: stream: Don't install the mux in back_handle_st_con().
- MEDIUM: streams: Don't close the connection in back_handle_st_con().
- MEDIUM: streams: Don't close the connection in back_handle_st_rdy().
- BUILD: CI: disable slow regtests on Travis
- BUG/MINOR: tcpchecks: fix the connect() flags regarding delayed ack
- BUG/MINOR: http-rules: Always init log-format expr for common HTTP actions
- BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2
- BUG/MINOR: dns: allow 63 char in hostname
- MINOR: proxy: clarify number of connections log when stopping
- DOC: word converter ignores delimiters at the start or end of input string
- MEDIUM: raw-sock: remove obsolete calls to fd_{cant,cond,done}_{send,recv}
- BUG/MINOR: ssl/cli: fix unused variable with openssl < 1.0.2
- MEDIUM: pipe/thread: reduce the locking overhead
- MEDIUM: pipe/thread: maintain a per-thread local cache of recently used pipes
- BUG/MEDIUM: pipe/thread: fix atomicity of pipe counters
- MINOR: tasks: move the list walking code to its own function
- MEDIUM: tasks: implement 3 different tasklet classes with their own queues
- MEDIUM: tasks: automatically requeue into the bulk queue an already running tasklet
- OPTIM: task: refine task classes default CPU bandwidth ratios
- BUG/MEDIUM: connections: Don't forget to unlock when killing a connection.
- MINOR: task: permanently flag tasklets waking themselves up
- MINOR: task: make sched->current also reflect tasklets
- MINOR: task: detect self-wakeups on tl==sched->current instead of TASK_RUNNING
- OPTIM: task: readjust CPU bandwidth distribution since last update
- MINOR: task: don't set TASK_RUNNING on tasklets
- BUG/MEDIUM: memory_pool: Update the seq number in pool_flush().
- MINOR: memory: Only init the pool spinlock once.
- BUG/MEDIUM: memory: Add a rwlock before freeing memory.
- BUG/MAJOR: memory: Don't forget to unlock the rwlock if the pool is empty.
- MINOR: ssl: ssl-load-extra-files configure loading of files
- SCRIPTS: add a new "backport" script to simplify long series of backports
- BUG/MINOR: ssl: we may only ignore the first 64 errors
- SCRIPTS: use /usr/bin/env bash instead of /bin/bash for scripts
- BUG/MINOR: ssl: clear the SSL errors on DH loading failure
- CLEANUP: hpack: remove a redundant test in the decoder
- CLEANUP: peers: Remove unused static function `free_dcache`
- CLEANUP: peers: Remove unused static function `free_dcache_tx`
- CONTRIB: debug: add missing flags SF_HTX and SF_MUX
- CONTRIB: debug: add the possibility to decode the value as certain types only
- CONTRIB: debug: support reporting multiple values at once
- BUG/MINOR: http-act: Use the good message to test strict rewritting mode
- MINOR: global: Set default tune.maxrewrite value during global structure init
- MINOR: http-rules: Set SF_ERR_PRXCOND termination flag when a header rewrite fails
- MINOR: http-htx: Emit a warning if an error file runs over the buffer's reserve
- MINOR: htx: Add a function to append an HTX message to another one
- MINOR: htx/channel: Add a function to copy an HTX message in a channel's buffer
- BUG/MINOR: http-ana: Don't overwrite outgoing data when an error is reported
- MINOR: dns: Dynamically allocate dns options to reduce the act_rule size
- MINOR: dns: Add function to release memory allocated for a do-resolve rule
- BUG/MINOR: http-ana: Reset HTX first index when HAPRoxy sends a response
- BUG/MINOR: http-ana: Set HTX_FL_PROXY_RESP flag if a server perform a redirect
- MINOR: http-rules: Add a flag on redirect rules to know the rule direction
- MINOR: http-rules: Handle the rule direction when a redirect is evaluated
- MINOR: http-ana: Rely on http_reply_and_close() to handle server error
- MINOR: http-ana: Add a function for forward internal responses
- MINOR: http-ana/http-rules: Use dedicated function to forward internal responses
- MEDIUM: http: Add a ruleset evaluated on all responses just before forwarding
- MEDIUM: http-rules: Add the return action to HTTP rules
- MEDIUM: http-rules: Support extra headers for HTTP return actions
- CLEANUP: lua: Remove consistency check for sample fetches and actions
- BUG/MINOR: http-ana: Increment failed_resp counters on invalid response
- MINOR: lua: Get the action return code on the stack when an action finishes
- MINOR: lua: Create the global 'act' object to register all action return codes
- MINOR: lua: Add act:wake_time() function to set a timeout when an action yields
- MEDIUM: lua: Add ability for actions to intercept HTTP messages
- REGTESTS: Add reg tests for the HTTP return action
- REGTESTS: Add a reg test for http-after-response rulesets
- BUILD: lua: silence a warning on systems where longjmp is not marked as noreturn
- MINOR: acl: Warn when an ACL is named 'or'
- CONTRIB: debug: also support reading values from stdin
- SCRIPTS: backport: use short revs and resolve the initial commit
- BUG/MINOR: acl: Fix type of log message when an acl is named 'or'
The patch adding this check initially only issued a warning, instead of
being fatal. It was changed before committing. However, when making this
change, the type of the log message was not changed from `ha_warning` to
`ha_alert`. This patch makes this forgotten adjustment.
see 0cf811a5f941261176b67046dbc542d0479ff4a7
No backport needed. The initial patch was backported as a warning, thus
the log message type is correct.
I find myself often getting trapped into calling "backport 2.0 HEAD" which
doesn't work because "HEAD" is passed as the argument to cherry-pick in
other repos. Let's resolve it first. And also let's shorten the commit IDs
to make the error messages more readable and to ease copy-paste.
This is convenient when processing large dumps: it allows copy-pasting
values to inspect from one window to another, or directly transferring
a "show fd"/"show stream" output through sed. In order to do this, simply
pass "-" alone instead of the value and they will all be read one line at
a time from stdin. For example, in order to quickly print the different
sets of connection flags from a "show fd" output, this is sufficient:
sed -ne 's/^.* cflg=\([^ ]*\).*/\1/p' | contrib/debug/flags conn -
Consider a configuration like this:
> acl t always_true
> acl or always_false
>
> http-response set-header Foo Bar if t or t
The 'or' within the condition will be treated as a logical disjunction
and the header will be set, despite the ACL 'or' being falsy.
This patch makes it an error to declare such an ACL that will never
work. This patch may be backported to stable releases, turning the
error into a warning only (the code was written in a way to make this
trivial). It should not break anything and might improve the users'
lifes.
If the longjmp() call is not flagged as "noreturn", for example, because the
operating system doesn't target a gcc-compatible compiler, we may get this
warning when building Lua :
src/hlua.c: In function 'hlua_panic_ljmp':
src/hlua.c:128:1: warning: no return statement in function returning non-void [-Wreturn-type]
static int hlua_panic_ljmp(lua_State *L) { longjmp(safe_ljmp_env, 1); }
^~~~~~
The function's prototype cannot be changed because it must be compatible
with Lua's callbacks. Let's simply enclose the call inside WILL_LJMP()
which we created exactly to signal a call to longjmp(). It lets the compiler
know we won't get back into the function and that the return statement is
not needed.
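So the panic handler ends up looking like this (a sketch based on the
description above):

  static int hlua_panic_ljmp(lua_State *L) { WILL_LJMP(longjmp(safe_ljmp_env, 1)); }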
A reg test has been added to ensure the evaluation of http-after-response rules
is functional for all kinds of responses (server, applet and internal
responses).
2 reg tests have been added to ensure the HTTP return action is functional. One
reg test is about returning error files. The other one is about returning
default responses and responses based on string or file payloads.
It is now possible to intercept HTTP messages from a lua action and reply to
clients. To do so, a reply object must be provided to the function
txn:done(). It may contain a status code with a reason, a header list and a
body. By default, if an empty reply object is used, an empty 200 response is
returned. If no reply is passed when txn:done() is called, the previous
behaviour is respected, the transaction is terminated and nothing is returned to
the client. The same is done for TCP streams. When txn:done() is called, the
action is terminated with the code ACT_RET_DONE on success and ACT_RET_ERR on
error, interrupting the message analysis.
The reply object may be created by hand, from Lua. Or txn:reply() may be
called. In that case, the object provides some methods to fill it:
* Reply:set_status(<status> [, <reason>]) : Set the status and optionally the
reason. If no reason is provided, the default one corresponding to the status
code is used.
* Reply:add_header(<name>, <value>) : Add a header. For a given name, the
values are stored in an ordered list.
* Reply:del_header(<name>) : Removes all occurrences of a header name.
* Reply:set_body(<body>) : Set the reply body.
Here are some examples, all doing the same:
-- ex. 1
txn:done{
status = 400,
reason = "Bad request",
headers = {
["content-type"] = { "text/html" },
["cache-control"] = { "no-cache", "no-store" },
},
body = "<html><body><h1>invalid request<h1></body></html>"
}
-- ex. 2
local reply = txn:reply{
status = 400,
reason = "Bad request",
headers = {
["content-type"] = { "text/html" },
["cache-control"] = { "no-cache", "no-store" }
},
body = "<html><body><h1>invalid request<h1></body></html>"
}
txn:done(reply)
-- ex. 3
local reply = txn:reply()
reply:set_status(400, "Bad request")
reply:add_header("content-length", "text/html")
reply:add_header("cache-control", "no-cache")
reply:add_header("cache-control", "no-store")
reply:set_body("<html><body><h1>invalid request<h1></body></html>")
txn:done(reply)
This function may be used to define a timeout when a lua action returns
act.YIELD. It is a way to force the script to be re-executed after a short time
(defined in milliseconds).
Unlike core:sleep() or core:yield(), the script is fully re-executed if it
returns act.YIELD. With the core functions to yield, the script is interrupted
and restarts from the yield point. When a script returns act.YIELD, it is
finished but the message analysis is blocked on the action, waiting for its end.
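An illustrative Lua sketch of the intended usage (following the act:wake_time()
naming above and assuming the txn:get_priv()/txn:set_priv() helpers; the action
body itself is made up):

  core.register_action("pause", { "http-req" }, function(txn)
      if not txn:get_priv() then
          txn:set_priv(true)    -- remember that we already yielded once
          act:wake_time(50)     -- re-execute the whole script in about 50ms
          return act.YIELD
      end
      return act.CONTINUE
  end)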
ACT_RET_* codes are now available from lua scripts. The global object "act" is
used to register these codes as constants. Now, lua actions can return any of
the following codes:
* act.CONTINUE for ACT_RET_CONT
* act.STOP for ACT_RET_STOP
* act.YIELD for ACT_RET_YIELD
* act.ERROR for ACT_RET_ERR
* act.DONE for ACT_RET_DONE
* act.DENY for ACT_RET_DENY
* act.ABORT for ACT_RET_ABRT
* act.INVALID for ACT_RET_INV
For instance, the following script denies all requests:
core.register_action("deny", { "http-req" }, function (txn)
return act.DENY
end)
Thus "http-request lua.deny" do exactly the same than "http-request deny".
When an action successfully finishes, the action return code (ACT_RET_*) is now
retrieved from the stack, if the first element is an integer. In addition, in
hlua_txn_done(), the value ACT_RET_DONE is pushed on the stack before
exiting. Thus, when a script uses this function, the corresponding action still
finishes with the right code. Thanks to this change, the flag HLUA_STOP is now
useless, so it has been removed.
This is a mandatory step to allow a lua action to return any action return code.
In the http_process_res_common() analyzer, when an invalid response is reported,
the failed_resp counters must be incremented.
No need to backport this patch, except if the commit b8a5371a ("MEDIUM:
http-ana: Properly handle internal processing errors") is backported too.
It is not possible anymore to alter the HTTP parser state from lua sample
fetches or lua actions. So there is no reason to still check for the parser
state consistency.
It is now possible to append extra headers to the responses generated by HTTP
return actions, as long as they are not based on an errorfile. For return
actions based on errorfiles, these extra headers are ignored. To define an extra
header, a "hdr" argument must be used with a name and a value. The value is a
log-format string. For instance:
http-request return status 200 hdr "x-src" "%[src]" hdr "x-dst" "%[dst]"
Thanks to this new action, it is now possible to return any response from
HAProxy, with any status code, based on an errorfile, a file or a string. Unlike
the other internal messages generated by HAProxy, these ones are not interpreted
as errors. And it is not necessary to use a file containing a full HTTP
response, although it is still possible. In addition, using a log-format string
or a log-format file, it is possible to have responses with dynamic
content. This action can be used on the request path or the response path. The
only constraint is that the response must be smaller than a buffer. And to avoid
any warning, the buffer space reserved for header rewriting should also be left
free.
When a response is returned with a file or a string as payload, it only contains
the content-length header and the content-type header, if applicable. Here are
examples:
http-request return content-type image/x-icon file /var/www/favicon.ico \
if { path /favicon.ico }
http-request return status 403 content-type text/plain \
lf-string "Access denied. IP %[src] is blacklisted." \
if { src -f /etc/haproxy/blacklist.lst }
This patch introduces the 'http-after-response' rules. These rules are evaluated
at the end of the response analysis, just before data forwarding, on ALL
HTTP responses, the server ones but also all responses generated by
HAProxy. Thanks to this ruleset, it is now possible for instance to add some
headers to the responses generated by the stats applet (see the configuration
sketch after the list below). The following actions are supported:
* allow
* add-header
* del-header
* replace-header
* replace-value
* set-header
* set-status
* set-var
* strict-mode
* unset-var
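A configuration sketch (illustrative), adding headers even to a response that
HAProxy generates itself, such as the stats page:

  listen stats
      mode http
      bind :8404
      stats enable
      stats uri /
      http-after-response set-header X-Frame-Options DENY
      http-after-response set-header Cache-Control "no-store"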
Call the http_forward_proxy_resp() function when an internal response is
returned. It concerns redirect, auth and error responses, but also 100-Continue
and 103-Early-Hints responses. For errors, there is a subtlety: if the forward
fails, an HTTP 500 error is generated if it is not already an internal
error. For now http_forward_proxy_resp() cannot fail. But it will become
possible once the new ruleset applied on all responses is added.
Operations performed when internal responses (redirect/deny/auth/errors) are
returned are always the same. The http_forward_proxy_resp() function is added to
group all of them under a unique function.
The http_server_error() function now relies on http_reply_and_close(). Both do
almost the same actions. In addition, http_server_error() sets the error flag and
the final state flag on the stream.