The "httpclose" option affects both frontend and backend, so it
was logical to check for its presence at both places. A request
which traverses either a frontend or a backend with this option
set will have a "Connection: close" header appended.
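For example, a minimal configuration sketch (section and server names
are purely illustrative) could look like this; the header is added
whether the option is set on the frontend, the backend, or both :

    frontend fe_web
        bind :8080
        option httpclose      # enough on its own to append "Connection: close"

    backend be_app
        option httpclose      # same effect if set here instead
        server app1 192.168.0.10:80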
The log format has been slightly updated to separately report the name
of the frontend and the name of the backend. The accept date has been
enhanced to report the millisecond. The remaining-connections counters
have also been updated and their order reversed, so as to include the
number of connections on the frontend. The new log format is now :
- $1: IP:port
- $2: accept date in this format : [dd/mm/YYYY:HH:MM:SS.ttt]
- $3: frontend name
- $4: backend name '/' server name
- $5: req time '/' queue time '/' conn time '/' header time '/' total time
- $6: HTTP status code
- $7: number of bytes returned
- $8: captures (request)
- $9: captures (response)
- $10: completion flags
- $11: remaining conns on process '/' frontend '/' backend '/' server
- $12: srv queue size '/' backend queue size
- $13..: '"' full request '"'
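A purely illustrative line in this format (every value below is made
up) would look like :

  192.168.0.50:4123 [10/07/2006:13:55:36.042] fe_web be_app/srv1 2/0/1/5/8 200 2750 - - ---- 3/2/1/0 0/0 "GET /index.html HTTP/1.0"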
The notion of capabilities has been added to the proxy so that we
know whether a proxy supports frontend, backend, or rulesets. Given
this, some parameters are optional, some are ignored with a warning
and others are forbidden. It is now possible to write valid two-level
configs without binding to dummy addresses/ports.
The maxconn argument is used only for the listeners, and the
fullconn is used only for the backends. If unset, it inherits
maxconn's value which itself can inherit the default or the
global value (we might need to change this).
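A minimal sketch (values purely illustrative) of where the two settings
are meant to be placed :

    frontend fe_web
        bind :8080
        maxconn 1000          # limits this frontend's listeners

    backend be_app
        fullconn 1000         # backend-only; defaults to the maxconn
                              # value when left unset
        server app1 192.168.0.10:80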
To solve the logging maze, it has been decided that the frontend
and nothing else will define how a session will be logged. It might
change in the future, but at least this choice allows all sorts of
fancy setups.
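As a hedged sketch (addresses purely illustrative), only the frontend's
log directives decide how the session is logged, whatever backend it
ends up using :

    frontend fe_web
        bind :8080
        log 127.0.0.1 local0
        option httplog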
It is now possible to define an errorloc in the backend as well as
in the frontend. The backend's will be used first and, if undefined,
the frontend's will be used instead. If neither is defined, the
original error messages will be used.
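A minimal sketch (URLs purely illustrative) of the lookup order
described above :

    frontend fe_web
        bind :8080
        errorloc 503 http://errors.example.net/generic-503.html

    backend be_app
        errorloc 503 http://errors.example.net/app-503.html   # used first
        server app1 192.168.0.10:80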
HTTP error messages were all specific cases handled by an IF.
Now they are all in an array so that it will be easier to add
new ones. Also, the return functions now use chunks as inputs
so that it should be easier to provide alternative return
messages if needed.
If git is found during the build process, then it will be used
to set the version, the commit number and the commit date. This
way, it will no longer be necessary to update the code to change
the version. The version is the last tag, and the commit number
is the number of commits since the last tag.
Sin Yu's patch permitting the proxy to be changed based on a regex
was merged with a few changes :
- req_cap/rsp_cap are not reassigned to the new proxy, they stay
attached to the frontend
- the actions have been renamed "reqsetbe" and "reqisetbe" for
"set BackEnd".
- the buffer is not reset after the switch, instead, the headers are
parsed again by the backend
- in Sin's patch, it was theoretically possible to switch multiple times,
but the switching track was lost, making it impossible to apply
server responses in the reverse order. Now switching is limited to
1 action (separation between frontend and backend) but the filters
remain.
Now it will be extremely easy to add other switching conditions, such
as host matching, URI matching, etc...
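A minimal sketch (regex and names purely illustrative) of such a
backend switch, using the case-insensitive variant :

    frontend fe_web
        bind :8080
        reqisetbe ^Host:\ static\.example\.com be_static

    backend be_static
        server stat1 192.168.0.20:80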
There is still hard work to be done on the logs and stats.
The logs have become a real mess. It is now very hard to tell which
frontend/backend will impose its configuration for the logs. This
needs a complete rework but at least it should work.
The nbconn attribute in the proxies was not relevant anymore because
a frontend A may use backend B and both of them must account for their
respective connections. For this reason, there now are two separate
counters for frontend and backend connections.
The stats page has been updated to reflect the backend, but a separate
line entry for the frontend with error counts would be good.
Note that as of now, beconn may be higher than maxconn, because maxconn
applies to the frontend, while beconn may be increased due to sessions
passed from another frontend.
There was a confusion about the way to find filters and backend
parameters from sessions. The chaining has been changed between
the session and the proxy.
Now, a session knows only two proxies : one frontend (->fe) and
one backend (->be). Each proxy has a link to the proxy providing
filters and to the proxy providing backend parameters (both self
by default).
The captures (cookies and headers) have been attached to the
frontend's filters for now.
The uri_auth and the statistics are attached to the backend's
filters so that the uri can depend on a hostname for instance.
A proxy will be able to borrow parameters from another one.
In particular, the filters will be inheritable from another
proxy, and the backend parameters too.
The uri_auth check is now in a separate function which is called
after every backend switch, so that it will be possible to have
a uri_auth for the frontend and another one for the backend.
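A minimal sketch (uri and credentials purely illustrative) showing the
uri_auth declared on the backend side :

    backend be_app
        stats uri /admin/stats
        stats auth admin:secret
        server app1 192.168.0.10:80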
Some while() constructs are not very efficient with gcc, yet they are
used to scan all the text in the start line and the headers. Replacing
them with more efficient (but ugly) loops provides a global gain of
about 2%, which is not bad at all !
The filters are now iterated for FE, FI, BE.
Some grey areas remain :
- uri_auth has been propagated to the backend, but in fact it
should be checked at every level (fe, fi, be), depending
where it is declared, and before the filters.
- the HTTP method and URI should be stored and propagated everywhere
they are used. For this, we would need to first apply filters to
be aware of filter changes which affect them.
- there seems to be no need anymore for hdr_idx[0] being empty.
It may contain the start line, which will slightly improve
performance and make the code easier to read.
The req_cap, hdr_state, hdr_idx, auth_hdr and req_line have been moved
to a dedicated hreq structure in the session. This makes it easier
to add HTTP-specific fields such as SOR (start of request) and EOH
(end of headers).
It also made it possible to fix two bugs introduced by last commit :
- end of headers not correctly detected
- hdr_idx not freed upon one specific error during session creation
When the backend side will be reworked, it should rely on a similar
structure.
The code is working again, but not as clean as it could be.
Many blocks should still move to dedicated functions. req->h
must be removed everywhere and updated every time it is needed.
A few functions or macros should take care of the headers
during header insertion/deletion/change.
The new parser uses an FSM to strictly follow RFC2616.
Headers are indexed and parsed only once they're all available.
That way, complex regexes make more sense.
HTTP processing is now performed in several phases by calling
multiple functions, making the code cleaner and easier to read.
Note that req[i]pass does not work anymore because it would
require that we mark a header to be ignored. What is really
needed is to have the ability to add an exception to a matching
(match xx except yy).
Several bugs have been fixed in appsession during the conversion
to the new FSM (method length and recovery on malloc errors).
The code does build and work with the debug examples, but is
not usable yet to connect to anything as it does not forward
the requests yet.
This structure will consume 4 bytes per header to keep track of
headers within a request or a response without having to parse
the whole request for each regex. As it's not possible to allocate
only 4 bytes, we define a max number of HTTP headers. We set it
to (BUFSIZE+79)/80 so that 8kB buffers can contain 100 headers
(like Apache), resulting in 400 bytes dedicated to indexation,
or about 400/(2*8kB) ~= 2.4% of the memory usage.
This patch was added in 1.2.9 but was then accidentally reverted by
a manipulation error when merging the next patch (enforce max number
of conns). It's now merged again.
The references to the proxy from the session have been turned into
Frontend (fe), Filters (fi) and Backend (be). This should ease the
migration to the L7 switching features. Next step will be to kill
the struct proxy and have 3 independent structs instead, each
referenced from entities called listener, frontend, filters and
backend.
SO_REUSEPORT does not exist on Linux but the checks are available in
the code. With a little patch, it's possible to implement the feature,
but the value of SO_REUSEPORT will still have to be known from userland.
This patch adds a workaround to this problem by figuring out the value
from the one used by SO_REUSEADDR.
Using the cttproxy kernel patch, it's possible to bind to any source
address. It is highly recommended to use the 03-natdel patch with the
other ones.
A new keyword appears as a complement to the "source" keyword : "usesrc".
The source address is mandatory and must be valid on the interface which
will see the packets. The "usesrc" option supports "client" (for full
client_ip:client_port spoofing), "client_ip" (for client_ip spoofing)
and any 'IP[:port]' combination to pretend to be another machine.
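A minimal sketch (addresses purely illustrative, and assuming the
cttproxy patches are applied to the kernel) :

    backend be_app
        source 192.168.0.1 usesrc client    # full client spoofing
        server app1 192.168.0.10:80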
Right now, the source binding is missing from server health-checks if
set to another address. It must be implemented (think restricted firewalls).
The doc is still missing too.
Released 1.3.3 with the following changes :
- fix broken redispatch option in case the connection has already
been marked "in progress" (ie: nearly always).
- support regparm on x86 to speed up some often called functions
- removed a few useless calls to gettimeofday() in log functions.
- lots of 'const char*' cleanups
- turn every FD_* into functions which are faster on recent CPUs
- builds again on OpenBSD and Solaris
Linux and BSD know about u_int32_t, while Solaris knows about uint32_t.
This is getting boring and unsigned int perfectly fits the goal for the
moment. Further investigation will be performed anyway.