[DOC] add a few old and uncommitted docs
These docs were still lying in my directory uncommitted. They're not very important but can be useful for developers who seek info about internals.
parent ad14f753ea
commit 8263b91a53
46
doc/close-options.txt
Normal file
@@ -0,0 +1,46 @@
2011/04/20 - List of keep-alive / close options with associated behaviours.

PK="http-pretend-keepalive", HC="httpclose", SC="http-server-close",
FC="forceclose".

0 = option not set
1 = option is set
* = option doesn't matter

Options can be split between frontend and backend, so some of them might only
have a meaning when a frontend is associated with a backend. Some forms are
not the normal ones but provide a behaviour compatible with another normal
form. Those are considered alternate forms and are marked "(alt)".

FC SC HC PK   Behaviour
 0  0  0  X   tunnel mode
 0  0  1  0   passive close, only set headers then tunnel
 0  0  1  1   forced close with keep-alive announce (alt)
 0  1  0  0   server close
 0  1  0  1   server close with keep-alive announce
 0  1  1  0   forced close (alt)
 0  1  1  1   forced close with keep-alive announce (alt)
 1  *  *  0   forced close
 1  *  *  1   forced close with keep-alive announce

At this point this results in 4 distinct effective modes for a request being
processed :
  - tunnel mode   : Connection header is left untouched and body is ignored
  - passive close : Connection header is changed and body is ignored
  - server close  : Connection header set, body scanned, client-side keep-alive
                    is made possible regardless of server-side capabilities
  - forced close  : Connection header set, body scanned, connection closed.
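
As a minimal illustration (a purely hypothetical excerpt, not taken from any
existing setup), the "server close" mode above is what a configuration such as
the following would select :

    defaults
        mode http
        option http-server-close   # server close : keep client-side keep-alive,
                                   # close the server-side connection after the
                                   # response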

The "close" modes may be combined with a fake keep-alive announce to the server
in order to work around buggy servers that disable chunked encoding and
content-length announces when the client does not ask for keep-alive.

Note: "http-pretend-keepalive" alone has no effect. However, if it is set in a
      backend while a frontend is in "httpclose" mode, then the combination of
      both will result in a forced close with keep-alive announces for requests
      passing through both.
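
As a hedged sketch of that combination (section and server names below are
purely illustrative), this is the kind of configuration the note refers to :

    frontend fe_web
        bind :80
        option httpclose                 # the frontend asks for a close
        default_backend be_app

    backend be_app
        option http-pretend-keepalive    # but keep-alive is still announced
        server app1 192.0.2.10:80        # to the server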

It is also worth noting that "option httpclose" alone has become useless since
1.4, because "option forceclose" does the right thing, while the former only
pretends to do the right thing. Both options might get merged in the future.
25
doc/cookie-options.txt
Normal file
@@ -0,0 +1,25 @@
2011/04/13 : List of possible cookie settings with associated behaviours.

PSV="preserve", PFX="prefix", INS="insert", REW="rewrite", IND="indirect"
0 = option not set
1 = option is set
* = option doesn't matter

PSV PFX INS REW IND  Behaviour
 0   0   0   0   0   passive mode
 0   0   0   0   1   passive + indirect : remove response if not needed
 0   0   0   1   0   always rewrite response
 0   0   1   0   0   always insert or replace response
 0   0   1   0   1   insert + indirect : remove req and also resp if not needed
 *   *   1   1   *   [ forbidden ]
 0   1   0   0   0   prefix
 0   1   0   0   1   !! prefix on request, remove response cookie if not needed
 *   1   *   1   *   [ forbidden ]
 *   1   1   *   *   [ forbidden ]
 *   *   *   1   1   [ forbidden ]
 1   *   0   *   0   [ forbidden ]
 1   0   0   0   1   passive mode (alternate form)
 1   0   1   0   0   insert only, and preserve server response cookie if any
 1   0   1   0   1   conditional insert only for new requests
 1   1   0   0   1   prefix on requests only (passive prefix)
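
As a hedged illustration (the cookie name and backend contents are
hypothetical), the "insert + indirect" row above corresponds to a backend
configured like this :

    backend be_app
        cookie SERVERID insert indirect   # INS=1, IND=1 : insert the cookie and
                                          # strip it from requests/responses
                                          # when it is not needed
        server app1 192.0.2.10:80 cookie s1
        server app2 192.0.2.11:80 cookie s2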
90
doc/design-thoughts/rate-shaping.txt
Normal file
@@ -0,0 +1,90 @@
2010/01/24 - Design of multi-criteria request rate shaping.

We want to be able to rate-shape traffic on multiple criteria. For instance, we
may want to support shaping of requests per Host header, as well as per source
address.

In order to achieve this, we will use checkpoints, one per criterion. Each of
these checkpoints will consist of a test, a rate counter and a queue.

A request reaches the checkpoint and checks the counter. If the counter is
below the limit, it is updated and the request continues. If the limit is
reached, the request attaches itself to the queue and sleeps. The sleep time
is computed from the queue status, and updates the queue status.

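As a purely hypothetical sketch of such a checkpoint (none of these names come
from the haproxy sources, and a 1-second period is hard-coded for simplicity),
the test described above could look like this :

  struct session;                        /* opaque here */

  /* one request sleeping at a checkpoint */
  struct wait_elem {
      struct session   *sess;            /* the sleeping session */
      struct wait_elem *next;
  };

  /* a checkpoint : a limit, a rate counter and a queue */
  struct checkpoint {
      unsigned int      limit;           /* max requests per period, 0 = count only */
      long              period_start;    /* start of the current period (seconds) */
      unsigned int      count;           /* requests seen in the current period */
      struct wait_elem *queue_head;      /* requests sleeping on this checkpoint */
      struct wait_elem *queue_tail;
  };

  /* Returns 1 if the request may pass (counter updated), or 0 if it must
   * attach itself to the queue and sleep until woken up.
   */
  static int checkpoint_check(struct checkpoint *cp, struct wait_elem *e, long now)
  {
      if (now != cp->period_start) {     /* new period : reset the counter */
          cp->period_start = now;
          cp->count = 0;
      }

      if (!cp->limit || cp->count < cp->limit) {
          cp->count++;
          return 1;                      /* below the limit : go on */
      }

      e->next = NULL;                    /* limit reached : enqueue and sleep */
      if (cp->queue_tail)
          cp->queue_tail->next = e;
      else
          cp->queue_head = e;
      cp->queue_tail = e;
      return 0;
  }
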
A task is dedicated to each queue. Its sole purpose is to be woken up when the
next task may wake up, to check the frequency counter, wake as many requests as
possible and update the counter. All the woken up requests are detached from
the queue. Maybe the task dedicated to the queue can be avoided and replaced
with the queued tasks' own sleep counters, though this looks tricky. Or maybe
it's just the first request in the queue that should be responsible for waking
up the other tasks, without forgetting to pass this responsibility on to the
next task if it leaves the queue.

The woken up request then goes on evaluating other criteria and possibly sleeps
again on another one. In the end, the task will have waited the amount of time
required to pass all checkpoints, and all checkpoints will be able to maintain
a permanent load of exactly their limit if enough streams flow through them.

Since a request can only sleep in one queue at a time, it makes sense to use a
linked list element in each session to attach it to any queue. It could very
well be shared with the pendconn hooks which could then be part of the session.

This mechanism could be used to rate-shape sessions and requests per backend
and per server.

When rate-shaping on dynamic criteria, such as the source IP address, we have
to first extract the data pattern, then look it up in a table very similar to
the stickiness tables, but with a frequency counter. At the checkpoint, the
pattern is looked up, the entry created or refreshed, and the frequency counter
updated and checked. Then the request either goes on or sleeps as described
above; if it sleeps, it is still in the checkpoint's queue, but with a date
computed from the criterion's status.

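Continuing the hypothetical sketch above (again, none of these names exist in
haproxy, and a real implementation would use a hash or tree rather than a flat
list), the per-pattern lookup could look like this :

  #include <stdlib.h>
  #include <string.h>

  /* one dynamic criterion, e.g. a source address, with its own checkpoint */
  struct pattern_entry {
      char                  key[64];     /* extracted pattern (truncated) */
      struct checkpoint     cp;          /* per-pattern counter and queue */
      struct pattern_entry *next;
  };

  /* Look the extracted pattern up in the table, creating the entry on first
   * use, then run the usual checkpoint test on it.
   */
  static int checkpoint_check_pattern(struct pattern_entry **table,
                                      const char *key, struct wait_elem *e,
                                      long now)
  {
      struct pattern_entry *ent;

      for (ent = *table; ent; ent = ent->next)
          if (strncmp(ent->key, key, sizeof(ent->key)) == 0)
              break;

      if (!ent) {
          ent = calloc(1, sizeof(*ent)); /* first use : create the entry */
          if (!ent)
              return 1;                  /* out of memory : let the request pass */
          strncpy(ent->key, key, sizeof(ent->key) - 1);
          ent->next = *table;
          *table = ent;
      }
      return checkpoint_check(&ent->cp, e, now);
  }
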
This means that we need 3 distinct features :

  - optional pattern extraction
  - per-pattern or per-queue frequency counter
  - time-ordered queue with a task

Based on past experiences with frequency counters, it does not appear very easy
to exactly compute sleep delays in advance for multiple requests. So most
likely we'll have to run per-criterion queues too, with only the head of the
queue holding a wake-up timeout.

This finally leads us to the following :

  - optional pattern extraction
  - per-pattern or per-queue frequency counter
  - per-frequency-counter queue
  - head of the queue serves as a global queue timer.

This brings us to a very flexible architecture :
  - 1 list of rule-based checkpoints per frontend
  - 1 list of rule-based checkpoints per backend
  - 1 list of rule-based checkpoints per server

Each of these lists has a lot of rules conditioned by ACLs, just like the
use-backend rules, except that all rules are evaluated in turn.

Since we might sometimes just want to enable that without setting any limit and
just for enabling control in ACLs (or logging ?), we should probably try to
find a flexible way of declaring just a counter without a queue.

These checkpoints could be of two types :
  - rate-limit (described here)
  - concurrency-limit (very similar, with the counter and no timer). This
    feature would require keeping track of all accounted criteria in a
    request so that they can be released upon request completion.

It should be possible to define a max number of requests in the queue, above
which a 503 is returned. The same applies to the max delay in the queue. We
could have it per-task (currently it's the connection timeout) and abort tasks
with a 503 when the delay is exceeded.

Per-server connection concurrency could be converted to use this mechanism,
which is very similar.

The construct should be flexible enough so that the counters may be checked
from ACLs. That would allow rejecting connections or switching to an alternate
backend when some limits are reached.
96
doc/internals/entities.txt
Normal file
@@ -0,0 +1,96 @@
2011/02/25 - Description of the different entities in haproxy - w@1wt.eu


1) Definitions
--------------

Listener
--------

A listener is the entity which is part of a frontend and which accepts
connections. There are as many listeners as there are ip:port pairs.
There is at least one listener instantiated for each "bind" entry, and
port ranges will lead to as many listeners as there are ports in the
range. A listener just has a listening file descriptor ready to accept
incoming connections and to dispatch them to upper layers.


Initiator
---------

An initiator is instantiated for each incoming connection on a listener. It may
also be instantiated by a task pretending to be a client. An initiator calls
the next stage's accept() callback to present it with the parameters of the
incoming connection.


Session
-------

A session is the only entity located between an initiator and a connector.
This is the last stage which offers an accept() callback, and all of its
processing will continue with the next stage's connect() callback. It holds
the buffers needed to forward the protocol data between each side. This entity
sees the native protocol, and is able to call analysers on these buffers. As it
is used in both directions, it always has two buffers.

When transformations are required, some of them may be done on the initiator
side and other ones on the connector side. If additional buffers are needed for
such transforms, those buffers cannot replace the session's buffers, but they
may complete them.

A session only needs to be instantiated when forwarding of data is required
between two sides. Accepting and filtering on layer 4 information only does not
require a session.

For instance, let's consider the case of a proxy which receives and decodes
HTTPS traffic, processes it as HTTP and recodes it as HTTPS before forwarding
it. We'd have 3 layers of buffers, where the middle ones are used for
forwarding of the protocol data (HTTP here) :

          <-- ssl dec -->   <-forwarding->  <-- ssl enc -->

            ,->[||||]--.     ,->[||||]--.     ,->[||||]--.
    client               (|) (|)          (|) (|)           server
            ^--[||||]<-'     ^--[||||]<-'     ^--[||||]<-'

               HTTPS             HTTP             HTTPS

The session handling code is only responsible for monitoring the forwarding
buffers here. It may declare the end of the session once those buffers are
closed and no analyser wants to re-open them. The session is also the entity
which applies the load balancing algorithm and decides which server to use.

The other sides are responsible for propagating the state up to the session,
which takes the decisions.


Connector
---------

A connector is the entity which makes it possible to instantiate a connection
to a known destination. It presents a connect() callback, and as such appears
on the right side of diagrams.


Connection
----------

A connection is the entity instantiated by a connector. It may be composed of
multiple stages linked together. Generally it is the part of the stream
interface holding a file descriptor, but it can also be a processing block or a
transformation block terminated by a connection. A connection presents a
server-side interface.


2) Sequencing
-------------

Upon startup, listeners are instantiated by the configuration. When an incoming
connection reaches a listening file descriptor, its read() callback calls the
corresponding listener's accept() function, which instantiates an initiator and
in turn recursively calls the upper layers' accept() callbacks until
accept_session() is called. accept_session() instantiates a new session which
starts protocol analysis via process_session(). When all protocol analysis is
done, process_session() calls the connect() callback of the connector in order
to get a connection.
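
As a purely conceptual sketch of that accept() chain (the types and functions
below are hypothetical and intentionally do not match the real haproxy
sources), the recursion could be pictured like this :

  /* a generic stage exposing the two callbacks described above */
  struct stage {
      /* called by the lower layer to present an incoming connection */
      int (*accept)(struct stage *self, int fd);
      /* called by the session to reach the next stage towards the server */
      int (*connect)(struct stage *self);
      struct stage *upper;               /* next stage towards the session */
  };

  /* What a listener's read() callback conceptually triggers : hand the
   * accepted file descriptor to the initiator, which passes it on to the
   * upper layers' accept() callbacks until the session is instantiated.
   */
  static int propagate_accept(struct stage *initiator, int fd)
  {
      struct stage *s;

      for (s = initiator; s; s = s->upper)
          if (s->accept && s->accept(s, fd) < 0)
              return -1;                 /* refused at this stage */

      return 0;                          /* accept_session() level reached */
  }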