/*
 * Backend variables and functions.
 *
 * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 *
 */

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>
#include <string.h>
#include <ctype.h>
#include <sys/types.h>

#include <common/buffer.h>
#include <common/compat.h>
#include <common/config.h>
#include <common/debug.h>
#include <common/hash.h>
#include <common/ticks.h>
#include <common/time.h>
#include <common/namespace.h>

#include <types/global.h>

#include <proto/acl.h>
#include <proto/arg.h>
#include <proto/backend.h>
#include <proto/channel.h>
#include <proto/frontend.h>
#include <proto/lb_chash.h>
#include <proto/lb_fas.h>
#include <proto/lb_fwlc.h>
#include <proto/lb_fwrr.h>
#include <proto/lb_map.h>
#include <proto/log.h>
#include <proto/obj_type.h>
#include <proto/payload.h>
#include <proto/protocol.h>
#include <proto/proto_http.h>
#include <proto/proto_tcp.h>
#include <proto/proxy.h>
#include <proto/queue.h>
#include <proto/sample.h>
#include <proto/server.h>
#include <proto/stream.h>
#include <proto/raw_sock.h>
#include <proto/stream_interface.h>
#include <proto/task.h>

#ifdef USE_OPENSSL
#include <proto/ssl_sock.h>
#endif /* USE_OPENSSL */

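/* Returns the number of seconds elapsed since the last session was seen on
 * backend <be>, or -1 if no session has ever been seen on it.
 */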
int be_lastsession(const struct proxy *be)
{
	if (be->be_counters.last_sess)
		return now.tv_sec - be->be_counters.last_sess;

	return -1;
}

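/* Note: the hashing method is selected per backend with the "hash-type"
 * directive ("hash-type map-based" or "hash-type consistent"); the hash
 * function configured there is what gen_hash() dispatches on below.
 */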
/* helper function to invoke the correct hash method */
static unsigned int gen_hash(const struct proxy* px, const char* key, unsigned long len)
{
	unsigned int hash;

	switch (px->lbprm.algo & BE_LB_HASH_FUNC) {
	case BE_LB_HFCN_DJB2:
		hash = hash_djb2(key, len);
		break;
	case BE_LB_HFCN_WT6:
		hash = hash_wt6(key, len);
		break;
	case BE_LB_HFCN_CRC32:
		hash = hash_crc32(key, len);
		break;
	case BE_LB_HFCN_SDBM:
		/* this is the default hash function */
	default:
		hash = hash_sdbm(key, len);
		break;
	}

	return hash;
}

/*
 * This function recounts the number of usable active and backup servers for
 * proxy <p>. These numbers are returned into the p->srv_act and p->srv_bck.
 * This function also recomputes the total active and backup weights. However,
 * it does not update tot_weight nor tot_used. Use update_backend_weight() for
 * this.
 */
void recount_servers(struct proxy *px)
{
	struct server *srv;

	px->srv_act = px->srv_bck = 0;
	px->lbprm.tot_wact = px->lbprm.tot_wbck = 0;
	px->lbprm.fbck = NULL;
	for (srv = px->srv; srv != NULL; srv = srv->next) {
		if (!srv_is_usable(srv))
			continue;

		if (srv->flags & SRV_F_BACKUP) {
			if (!px->srv_bck &&
			    !(px->options & PR_O_USE_ALL_BK))
				px->lbprm.fbck = srv;
			px->srv_bck++;
			srv->cumulative_weight = px->lbprm.tot_wbck;
			px->lbprm.tot_wbck += srv->eweight;
		} else {
			px->srv_act++;
			srv->cumulative_weight = px->lbprm.tot_wact;
			px->lbprm.tot_wact += srv->eweight;
		}
	}
}

/* This function simply updates the backend's tot_weight and tot_used values
 * after server weights have been updated. It is designed to be used after
 * recount_servers() or equivalent.
 */
void update_backend_weight(struct proxy *px)
{
	if (px->srv_act) {
		px->lbprm.tot_weight = px->lbprm.tot_wact;
		px->lbprm.tot_used = px->srv_act;
	}
	else if (px->lbprm.fbck) {
		/* use only the first backup server */
		px->lbprm.tot_weight = px->lbprm.fbck->eweight;
		px->lbprm.tot_used = 1;
	}
	else {
		px->lbprm.tot_weight = px->lbprm.tot_wbck;
		px->lbprm.tot_used = px->srv_bck;
	}
}

/*
 * This function tries to find a running server for the proxy <px> following
 * the source hash method. Depending on the number of active/backup servers,
 * it will either look for active servers, or for backup servers.
 * If any server is found, it will be returned. If no valid server is found,
 * NULL is returned.
 */
struct server *get_server_sh(struct proxy *px, const char *addr, int len)
{
	unsigned int h, l;

	if (px->lbprm.tot_weight == 0)
		return NULL;

	l = h = 0;

	/* note: we won't hash if there's only one server left */
	if (px->lbprm.tot_used == 1)
		goto hash_done;

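	/* fold the source address into the hash, one 32-bit word at a time */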
	while ((l + sizeof (int)) <= len) {
		h ^= ntohl(*(unsigned int *)(&addr[l]));
		l += sizeof (int);
	}
	if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
		h = full_hash(h);
 hash_done:
	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
		return chash_get_server_hash(px, h);
	else
		return map_get_server_hash(px, h);
}

/*
 * This function tries to find a running server for the proxy <px> following
 * the URI hash method. In order to optimize cache hits, the hash computation
 * ends at the question mark. Depending on the number of active/backup servers,
 * it will either look for active servers, or for backup servers.
 * If any server is found, it will be returned. If no valid server is found,
 * NULL is returned.
 *
 * This code was contributed by Guillaume Dallaire, who also selected this hash
 * algorithm out of tens of them because it gave him the best results.
 *
 */
struct server *get_server_uh(struct proxy *px, char *uri, int uri_len)
{
	unsigned int hash = 0;
	int c;
	int slashes = 0;
	const char *start, *end;

	if (px->lbprm.tot_weight == 0)
		return NULL;

	/* note: we won't hash if there's only one server left */
	if (px->lbprm.tot_used == 1)
		goto hash_done;

	if (px->uri_len_limit)
		uri_len = MIN(uri_len, px->uri_len_limit);

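	/* scan the URI up to the '?' (unless uri_whole is set) or up to the
	 * configured directory depth, and hash only that part
	 */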
	start = end = uri;
	while (uri_len--) {
		c = *end;
		if (c == '/') {
			slashes++;
			if (slashes == px->uri_dirs_depth1) /* depth+1 */
				break;
		}
		else if (c == '?' && !px->uri_whole)
			break;
		end++;
	}

	hash = gen_hash(px, start, (end - start));

	if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
		hash = full_hash(hash);
 hash_done:
	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
		return chash_get_server_hash(px, hash);
	else
		return map_get_server_hash(px, hash);
}

/*
 * This function tries to find a running server for the proxy <px> following
 * the URL parameter hash method. It looks for a specific parameter in the
 * URL and hashes it to compute the server ID. This is useful to optimize
 * performance by avoiding bounces between servers in contexts where sessions
 * are shared but cookies are not usable. If the parameter is not found, NULL
 * is returned. If any server is found, it will be returned. If no valid server
 * is found, NULL is returned.
 */
struct server *get_server_ph(struct proxy *px, const char *uri, int uri_len)
{
	unsigned int hash = 0;
	const char *start, *end;
	const char *p;
	const char *params;
	int plen;

	/* when tot_weight is 0 then so is srv_count */
	if (px->lbprm.tot_weight == 0)
		return NULL;

	if ((p = memchr(uri, '?', uri_len)) == NULL)
		return NULL;

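	/* skip the question mark so <p> points to the query string */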
	p++;

	uri_len -= (p - uri);
	plen = px->url_param_len;
	params = p;

	while (uri_len > plen) {
		/* Look for the parameter name followed by an equal symbol */
		if (params[plen] == '=') {
			if (memcmp(params, px->url_param_name, plen) == 0) {
				/* OK, we have the parameter here at <params>, and
				 * the value after the equal sign, at <p>
				 * skip the equal symbol
				 */
				p += plen + 1;
				start = end = p;
				uri_len -= plen + 1;

				while (uri_len && *end != '&') {
					uri_len--;
					end++;
				}
				hash = gen_hash(px, start, (end - start));

				if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
					hash = full_hash(hash);

				if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
					return chash_get_server_hash(px, hash);
				else
					return map_get_server_hash(px, hash);
			}
		}
		/* skip to next parameter */
		p = memchr(params, '&', uri_len);
		if (!p)
			return NULL;
		p++;
		uri_len -= (p - params);
		params = p;
	}
	return NULL;
}

/*
 * this does the same as the previous server_ph, but checks the body contents
 */
struct server *get_server_ph_post(struct stream *s)
{
	unsigned int hash = 0;
	struct http_txn *txn = s->txn;
	struct channel *req = &s->req;
	struct http_msg *msg = &txn->req;
	struct proxy *px = s->be;
	unsigned int plen = px->url_param_len;
	unsigned long len = http_body_bytes(msg);
	const char *params = b_ptr(req->buf, -http_data_rewind(msg));
	const char *p = params;
	const char *start, *end;

	if (len == 0)
		return NULL;

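	/* don't read past the end of the request buffer */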
	if (len > req->buf->data + req->buf->size - p)
		len = req->buf->data + req->buf->size - p;

	if (px->lbprm.tot_weight == 0)
		return NULL;

	while (len > plen) {
		/* Look for the parameter name followed by an equal symbol */
		if (params[plen] == '=') {
			if (memcmp(params, px->url_param_name, plen) == 0) {
				/* OK, we have the parameter here at <params>, and
				 * the value after the equal sign, at <p>
				 * skip the equal symbol
				 */
				p += plen + 1;
				start = end = p;
				len -= plen + 1;

				while (len && *end != '&') {
					if (unlikely(!HTTP_IS_TOKEN(*p))) {
						/* if in a POST, body must be URI encoded or it's not a URI.
						 * Do not interpret any possible binary data as a parameter.
						 */
						if (likely(HTTP_IS_LWS(*p))) /* eol, uncertain uri len */
							break;
						return NULL; /* oh, no; this is not uri-encoded.
							      * This body does not contain parameters.
							      */
					}
					len--;
					end++;
					/* should we break if vlen exceeds limit? */
				}
				hash = gen_hash(px, start, (end - start));

				if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
					hash = full_hash(hash);

				if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
					return chash_get_server_hash(px, hash);
				else
					return map_get_server_hash(px, hash);
			}
		}
		/* skip to next parameter */
		p = memchr(params, '&', len);
		if (!p)
			return NULL;
		p++;
		len -= (p - params);
		params = p;
	}
	return NULL;
}

/*
 * This function tries to find a running server for the proxy <px> following
 * the Header parameter hash method. It looks for a specific header in the
 * request and hashes its value to compute the server ID. This is useful to
 * optimize performance by avoiding bounces between servers in contexts where
 * sessions are shared but cookies are not usable. If the header is not found,
 * NULL is returned. If any server is found, it will be returned. If no valid
 * server is found, NULL is returned.
 */
struct server *get_server_hh(struct stream *s)
{
	unsigned int hash = 0;
	struct http_txn *txn = s->txn;
	struct proxy *px = s->be;
	unsigned int plen = px->hh_len;
	unsigned long len;
	struct hdr_ctx ctx;
	const char *p;
	const char *start, *end;

	/* tot_weight appears to mean srv_count */
	if (px->lbprm.tot_weight == 0)
		return NULL;

	ctx.idx = 0;

	/* if the message is chunked, we skip the chunk size, but use the value as len */
	http_find_header2(px->hh_name, plen, b_ptr(s->req.buf, -http_hdr_rewind(&txn->req)), &txn->hdr_idx, &ctx);

	/* if the header is not found or empty, let's fallback to round robin */
	if (!ctx.idx || !ctx.vlen)
		return NULL;

	/* note: we won't hash if there's only one server left */
	if (px->lbprm.tot_used == 1)
		goto hash_done;

	/* Found the hh_name header in the request;
	 * we will compute the hash based on its value at ctx.val.
	 */
	len = ctx.vlen;
	p = (char *)ctx.line + ctx.val;
	if (!px->hh_match_domain) {
		hash = gen_hash(px, p, len);
	} else {
		int dohash = 0;
		p += len;
		/* special computation, use only the main domain name, not tld/host:
		 * going back from the end of the string, start hashing at the first
		 * dot and stop at the next one.
		 * This is designed to work with the 'Host' header, and requires
		 * a special option to activate this.
		 */
		end = p;
		while (len) {
			if (dohash) {
				/* Rewind the pointer until the previous char
				 * is a dot, this will allow to set the start
				 * position of the domain. */
				if (*(p - 1) == '.')
					break;
			}
			else if (*p == '.') {
				/* The pointer has been rewound to the dot before
				 * the tld; we memorize the end of the domain and
				 * can enter the domain processing. */
				end = p;
				dohash = 1;
			}
			p--;
			len--;
		}
		start = p;
		hash = gen_hash(px, start, (end - start));
	}
	if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
		hash = full_hash(hash);
 hash_done:
	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
		return chash_get_server_hash(px, hash);
	else
		return map_get_server_hash(px, hash);
}

/* RDP Cookie HASH. */
struct server *get_server_rch(struct stream *s)
{
	unsigned int hash = 0;
	struct proxy *px = s->be;
	unsigned long len;
	int ret;
	struct sample smp;
	int rewind;

	/* tot_weight appears to mean srv_count */
	if (px->lbprm.tot_weight == 0)
		return NULL;

	memset(&smp, 0, sizeof(smp));

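	/* The request data may already be scheduled for forwarding, so rewind
	 * the buffer by the amount of output data to see the message start,
	 * run the fetch, then advance it back by the same amount.
	 */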
	b_rew(s->req.buf, rewind = s->req.buf->o);

	ret = fetch_rdp_cookie_name(s, &smp, px->hh_name, px->hh_len);
	len = smp.data.u.str.len;

	b_adv(s->req.buf, rewind);

	if (ret == 0 || (smp.flags & SMP_F_MAY_CHANGE) || len == 0)
		return NULL;

	/* note: we won't hash if there's only one server left */
	if (px->lbprm.tot_used == 1)
		goto hash_done;

	/* We found the RDP cookie <hh_name>;
	 * we will compute the hash based on its value.
	 */
	hash = gen_hash(px, smp.data.u.str.str, len);

	if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
		hash = full_hash(hash);
 hash_done:
	if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
		return chash_get_server_hash(px, hash);
	else
		return map_get_server_hash(px, hash);
}

/*
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
* This function applies the load-balancing algorithm to the stream, as
|
|
|
|
* defined by the backend it is assigned to. The stream is then marked as
|
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magics and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 13:04:11 +00:00
|
|
|
* 'assigned'.
|
|
|
|
*
|
2015-04-02 23:14:29 +00:00
|
|
|
* This function MAY NOT be called with SF_ASSIGNED already set. If the stream
|
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magics and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 13:04:11 +00:00
|
|
|
* had a server previously assigned, it is rebalanced, trying to avoid the same
|
2011-03-10 15:55:02 +00:00
|
|
|
* server, which should still be present in target_srv(&s->target) before the call.
|
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magics and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 13:04:11 +00:00
|
|
|
* The function tries to keep the original connection slot if it reconnects to
|
|
|
|
* the same server, otherwise it releases it and tries to offer it.
|
|
|
|
*
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
* It is illegal to call this function with a stream in a queue.
|
2006-06-26 00:48:02 +00:00
|
|
|
*
|
|
|
|
* It may return :
|
2011-03-10 10:38:29 +00:00
|
|
|
* SRV_STATUS_OK if everything is OK. ->srv and ->target are assigned.
|
REORG/MAJOR: session: rename the "session" entity to "stream"
2015-04-02 22:22:06 +00:00
|
|
|
* SRV_STATUS_NOSRV if no server is available. Stream is not ASSIGNED
|
|
|
|
* SRV_STATUS_FULL if all servers are saturated. Stream is not ASSIGNED
|
2006-06-26 00:48:02 +00:00
|
|
|
* SRV_STATUS_INTERNAL for other unrecoverable errors.
|
|
|
|
*
|
2015-04-02 23:14:29 +00:00
|
|
|
* Upon successful return, the stream flag SF_ASSIGNED is set to indicate that
|
2011-03-10 15:55:02 +00:00
|
|
|
* it does not need to be called anymore. This means that target_srv(&s->target)
|
|
|
|
* can be trusted in balance and direct modes.
|
2006-06-26 00:48:02 +00:00
|
|
|
*
|
|
|
|
*/
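For illustration, here is a minimal, self-contained sketch of how a caller might branch on these status codes. The enum values and the pick_server_stub() helper are placeholders invented for the example, not HAProxy definitions; the real assignment logic lives in assign_server() below.

#include <stdio.h>

/* Simplified stand-ins for the real definitions; these values and the helper
 * below are illustrative only and do not come from HAProxy's headers. */
enum srv_status {
	SRV_STATUS_OK,       /* a target was assigned */
	SRV_STATUS_NOSRV,    /* no server at all is available */
	SRV_STATUS_FULL,     /* all servers are saturated */
	SRV_STATUS_INTERNAL, /* unrecoverable internal error */
};

/* Hypothetical assignment step standing in for assign_server(). */
static enum srv_status pick_server_stub(int usable_servers, int saturated)
{
	if (usable_servers <= 0)
		return SRV_STATUS_NOSRV;
	if (saturated)
		return SRV_STATUS_FULL;
	return SRV_STATUS_OK;
}

int main(void)
{
	switch (pick_server_stub(2, 0)) {
	case SRV_STATUS_OK:
		puts("server assigned; the ASSIGNED flag would now be set");
		break;
	case SRV_STATUS_NOSRV:
		puts("no server available: fail or redispatch");
		break;
	case SRV_STATUS_FULL:
		puts("all servers saturated: queue the request");
		break;
	default:
		puts("internal error");
		break;
	}
	return 0;
}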
|
|
|
|
|
REORG/MAJOR: session: rename the "session" entity to "stream"
2015-04-02 22:22:06 +00:00
|
|
|
int assign_server(struct stream *s)
|
2006-06-26 00:48:02 +00:00
|
|
|
{
|
2013-10-01 08:45:07 +00:00
|
|
|
struct connection *conn;
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
struct server *conn_slot;
|
2011-03-10 15:55:02 +00:00
|
|
|
struct server *srv, *prev_srv;
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
int err;
|
[MEDIUM]: Prevent redispatcher from selecting the same server, version #3
When haproxy decides that a session needs to be redispatched it chooses a server,
but there is no guarantee for it to be a different one. So, it often
happens that the selected server is exactly the same as it was previously, so
a client ends up with a 503 error anyway, especially when one server has
a much bigger weight than the others.
Changes from the previous version:
- drop stupid and unnecessary SN_DIRECT changes
- assign_server(): use srvtoavoid to keep the old server and clear s->srv
so SRV_STATUS_NOSRV guarantees that t->srv == NULL (again)
and get_server_rr_with_conns has chances to work (previously
we were passing a NULL here)
- srv_redispatch_connect(): remove t->srv->cum_sess and t->srv->failed_conns
incrementing as t->srv was guaranteed to be NULL
- add avoididx to get_server_rr_with_conns. I hope I correctly understand this code.
- fix http_flush_cookie_flags() and move it to assign_server_and_queue()
directly. The code here was supposed to set CK_DOWN and clear CK_VALID,
but: (TX_CK_VALID | TX_CK_DOWN) == TX_CK_VALID == TX_CK_MASK so:
if ((txn->flags & TX_CK_MASK) == TX_CK_VALID)
txn->flags ^= (TX_CK_VALID | TX_CK_DOWN);
was really a:
if ((txn->flags & TX_CK_MASK) == TX_CK_VALID)
txn->flags &= TX_CK_VALID
Now haproxy logs "--DI" after redispatching connection.
- defer srv->redispatches++ and s->be->redispatches++ so they
are incremented only if a connection was actually redispatched,
not merely supposed to be.
- don't increment lbconn if the redispatcher selected the same server
- don't count unsuccessfully redispatched connections as redispatched
connections
- don't count redispatched connections as errors, so:
- the number of connections effectively served by a server is:
srv->cum_sess - srv->failed_conns - srv->retries - srv->redispatches
and
SUM(servers->failed_conns) == be->failed_conns
- requires the "Don't increment server connections too much + fix retries" patch
- needs a little more testing and probably some discussion, so reverting to the RFC state
Tests #1:
retries 4
redispatch
i) 1 server(s): b (wght=1, down)
b) sessions=5, lbtot=1, err_conn=1, retr=4, redis=0
-> request failed
ii) server(s): b (wght=1, down), u (wght=1, down)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=1, retr=0, redis=0
-> request FAILED
iii) 2 server(s): b (wght=1, down), u (wght=1, up)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=0, retr=0, redis=0
-> request OK
iv) 2 server(s): b (wght=100, down), u (wght=1, up)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=0, retr=0, redis=0
-> request OK
v) 1 server(s): b (down for first 4 SYNS)
b) sessions=5, lbtot=1, err_conn=0, retr=4, redis=0
-> request OK
Tests #2:
retries 4
i) 1 server(s): b (down)
b) sessions=5, lbtot=1, err_conn=1, retr=4, redis=0
-> request FAILED
2008-02-22 02:50:19 +00:00
|
|
|
|
2011-08-12 23:03:52 +00:00
|
|
|
DPRINTF(stderr,"assign_server : s=%p\n",s);
|
2006-06-26 00:48:02 +00:00
|
|
|
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
err = SRV_STATUS_INTERNAL;
|
2015-04-02 23:14:29 +00:00
|
|
|
if (unlikely(s->pend_pos || s->flags & SF_ASSIGNED))
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
goto out_err;
|
[MEDIUM]: Prevent redispatcher from selecting the same server, version #3
2008-02-22 02:50:19 +00:00
|
|
|
|
2012-11-11 23:42:33 +00:00
|
|
|
prev_srv = objt_server(s->target);
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
conn_slot = s->srv_conn;
|
2006-06-26 00:48:02 +00:00
|
|
|
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
/* We have to release any connection slot before applying any LB algo,
|
|
|
|
* otherwise we may erroneously end up with no available slot.
|
|
|
|
*/
|
|
|
|
if (conn_slot)
|
|
|
|
sess_change_server(s, NULL);
|
|
|
|
|
2012-11-11 23:42:33 +00:00
|
|
|
/* We will now try to find the right server and store it into <objt_server(s->target)>.
|
|
|
|
* Note that <objt_server(s->target)> may be NULL in case of dispatch or proxy mode,
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
* as well as if no server is available (check error code).
|
|
|
|
*/
|
2007-11-01 20:08:19 +00:00
|
|
|
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = NULL;
|
2012-11-11 23:42:33 +00:00
|
|
|
s->target = NULL;
|
2014-11-28 13:42:25 +00:00
|
|
|
conn = objt_conn(s->si[1].end);
|
2013-12-15 17:58:25 +00:00
|
|
|
|
2013-12-23 14:11:25 +00:00
|
|
|
if (conn &&
|
2014-02-19 17:40:43 +00:00
|
|
|
(conn->flags & CO_FL_CONNECTED) &&
|
2013-12-15 17:58:25 +00:00
|
|
|
objt_server(conn->target) && __objt_server(conn->target)->proxy == s->be &&
|
2015-04-03 21:46:31 +00:00
|
|
|
((s->txn && s->txn->flags & TX_PREFER_LAST) ||
|
2014-04-25 11:58:37 +00:00
|
|
|
((s->be->options & PR_O_PREF_LAST) &&
|
|
|
|
(!s->be->max_ka_queue ||
|
|
|
|
server_has_room(__objt_server(conn->target)) ||
|
|
|
|
(__objt_server(conn->target)->nbpend + 1) < s->be->max_ka_queue))) &&
|
2014-05-13 16:51:40 +00:00
|
|
|
srv_is_usable(__objt_server(conn->target))) {
|
REORG/MAJOR: session: rename the "session" entity to "stream"
2015-04-02 22:22:06 +00:00
|
|
|
/* This stream was relying on a server in a previous request
|
2014-12-10 02:21:30 +00:00
|
|
|
* and the proxy has "option prefer-last-server" set, so
|
2013-12-15 17:58:25 +00:00
|
|
|
* let's try to reuse the same server.
|
|
|
|
*/
|
|
|
|
srv = __objt_server(conn->target);
|
|
|
|
s->target = &srv->obj_type;
|
|
|
|
}
|
|
|
|
else if (s->be->lbprm.algo & BE_LB_KIND) {
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
/* we must check if we have at least one server available */
|
|
|
|
if (!s->be->lbprm.tot_weight) {
|
|
|
|
err = SRV_STATUS_NOSRV;
|
|
|
|
goto out;
|
|
|
|
}
|
2006-06-26 00:48:02 +00:00
|
|
|
|
2009-10-03 10:36:05 +00:00
|
|
|
/* First check whether we need to fetch some data or simply call
|
|
|
|
* the LB lookup function. Only the hashing functions will need
|
|
|
|
* some input data in fact, and will support multiple algorithms.
|
|
|
|
*/
|
|
|
|
switch (s->be->lbprm.algo & BE_LB_LKUP) {
|
|
|
|
case BE_LB_LKUP_RRTREE:
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = fwrr_get_next_server(s->be, prev_srv);
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
break;
|
2009-10-03 10:36:05 +00:00
|
|
|
|
2012-02-13 16:12:08 +00:00
|
|
|
case BE_LB_LKUP_FSTREE:
|
|
|
|
srv = fas_get_next_server(s->be, prev_srv);
|
|
|
|
break;
|
|
|
|
|
2009-10-03 10:36:05 +00:00
|
|
|
case BE_LB_LKUP_LCTREE:
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = fwlc_get_next_server(s->be, prev_srv);
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
break;
|
2009-10-03 10:36:05 +00:00
|
|
|
|
[MEDIUM] backend: implement consistent hashing variation
Consistent hashing provides some interesting advantages over common
hashing. It avoids full redistribution in case of a server failure,
or when expanding the farm. This has a cost however: the hashing is
far from perfect, as we associate a server to a request by
searching the server with the closest key in a tree. Since servers
appear multiple times based on their weights, it is recommended to
use weights larger than approximately 10-20 in order to smooth
the distribution a bit.
In some cases, playing with weights will be the only solution to
make a server appear more often and increase chances of being picked,
so stats are very important with consistent hashing.
In order to indicate the type of hashing, use :
hash-type map-based (default, old one)
hash-type consistent (new one)
Consistent hashing can make sense in a cache farm, in order not
to redistribute everyone when a cache changes state. It could also
probably be used for long sessions such as terminal sessions, though
that has not been attempted yet.
More details on this method of hashing here :
http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
2009-10-01 05:52:15 +00:00
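As a rough illustration of the idea described in this message, the standalone sketch below builds a ring of weighted points and picks the point with the closest key at or above the request's hash, wrapping around at the end of the ring. It is not HAProxy's chash implementation; the hash function, server count and weights are arbitrary choices for the example.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* One point on the hash ring: a key and the server it maps to. */
struct ring_point {
	uint32_t key;
	int server_id;
};

/* Tiny FNV-1a hash; any reasonably uniform hash would do here. */
static uint32_t hash32(const char *s)
{
	uint32_t h = 2166136261u;
	while (*s) {
		h ^= (unsigned char)*s++;
		h *= 16777619u;
	}
	return h;
}

static int cmp_point(const void *a, const void *b)
{
	uint32_t ka = ((const struct ring_point *)a)->key;
	uint32_t kb = ((const struct ring_point *)b)->key;
	return (ka > kb) - (ka < kb);
}

/* Pick the first point whose key is >= h, wrapping around the ring. */
static int ring_lookup(const struct ring_point *ring, int n, uint32_t h)
{
	for (int i = 0; i < n; i++)
		if (ring[i].key >= h)
			return ring[i].server_id;
	return ring[0].server_id;
}

int main(void)
{
	const int weights[3] = { 10, 10, 20 };   /* per-server weights */
	struct ring_point ring[64];
	int n = 0;

	/* Each server appears <weight> times on the ring, as the message
	 * above explains, so heavier servers catch more keys. */
	for (int srv = 0; srv < 3; srv++) {
		for (int w = 0; w < weights[srv]; w++) {
			char name[32];
			snprintf(name, sizeof(name), "srv%d-%d", srv, w);
			ring[n].key = hash32(name);
			ring[n].server_id = srv;
			n++;
		}
	}
	qsort(ring, n, sizeof(*ring), cmp_point);

	/* The same key always lands on the same server, and removing one
	 * server only remaps the keys that pointed to it. */
	const char *keys[] = { "10.0.0.1", "10.0.0.2", "/img/logo.png" };
	for (int i = 0; i < 3; i++)
		printf("%-14s -> server %d\n", keys[i],
		       ring_lookup(ring, n, hash32(keys[i])));
	return 0;
}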
|
|
|
case BE_LB_LKUP_CHTREE:
|
2009-10-03 10:36:05 +00:00
|
|
|
case BE_LB_LKUP_MAP:
|
2009-10-03 10:56:50 +00:00
|
|
|
if ((s->be->lbprm.algo & BE_LB_KIND) == BE_LB_KIND_RR) {
|
[MEDIUM] backend: implement consistent hashing variation
2009-10-01 05:52:15 +00:00
|
|
|
if (s->be->lbprm.algo & BE_LB_LKUP_CHTREE)
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = chash_get_next_server(s->be, prev_srv);
|
[MEDIUM] backend: implement consistent hashing variation
2009-10-01 05:52:15 +00:00
|
|
|
else
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = map_get_server_rr(s->be, prev_srv);
|
2009-10-03 10:56:50 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
else if ((s->be->lbprm.algo & BE_LB_KIND) != BE_LB_KIND_HI) {
|
2009-10-03 10:36:05 +00:00
|
|
|
/* unknown balancing algorithm */
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
err = SRV_STATUS_INTERNAL;
|
|
|
|
goto out;
|
|
|
|
}
|
2009-10-03 10:36:05 +00:00
|
|
|
|
|
|
|
switch (s->be->lbprm.algo & BE_LB_PARM) {
|
|
|
|
case BE_LB_HASH_SRC:
|
2015-04-04 00:10:38 +00:00
|
|
|
conn = objt_conn(strm_orig(s));
|
2013-10-01 08:45:07 +00:00
|
|
|
if (conn && conn->addr.from.ss_family == AF_INET) {
|
2012-03-31 17:53:37 +00:00
|
|
|
srv = get_server_sh(s->be,
|
2013-10-01 08:45:07 +00:00
|
|
|
(void *)&((struct sockaddr_in *)&conn->addr.from)->sin_addr,
|
2012-03-31 17:53:37 +00:00
|
|
|
4);
|
|
|
|
}
|
2013-10-01 08:45:07 +00:00
|
|
|
else if (conn && conn->addr.from.ss_family == AF_INET6) {
|
2012-03-31 17:53:37 +00:00
|
|
|
srv = get_server_sh(s->be,
|
2013-10-01 08:45:07 +00:00
|
|
|
(void *)&((struct sockaddr_in6 *)&conn->addr.from)->sin6_addr,
|
2012-03-31 17:53:37 +00:00
|
|
|
16);
|
|
|
|
}
|
2009-10-03 10:36:05 +00:00
|
|
|
else {
|
|
|
|
/* unknown IP family */
|
|
|
|
err = SRV_STATUS_INTERNAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
|
|
|
case BE_LB_HASH_URI:
|
|
|
|
/* URI hashing */
|
2015-04-03 21:46:31 +00:00
|
|
|
if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
|
2010-03-24 13:54:30 +00:00
|
|
|
break;
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = get_server_uh(s->be,
|
2015-04-03 21:46:31 +00:00
|
|
|
b_ptr(s->req.buf, -http_uri_rewind(&s->txn->req)),
|
|
|
|
s->txn->req.sl.rq.u_l);
|
2009-10-03 10:36:05 +00:00
|
|
|
break;
|
2008-04-14 18:47:37 +00:00
|
|
|
|
2009-10-03 10:36:05 +00:00
|
|
|
case BE_LB_HASH_PRM:
|
|
|
|
/* URL Parameter hashing */
|
2015-04-03 21:46:31 +00:00
|
|
|
if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
|
2010-03-24 13:54:30 +00:00
|
|
|
break;
|
2011-03-01 19:35:49 +00:00
|
|
|
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = get_server_ph(s->be,
|
2015-04-03 21:46:31 +00:00
|
|
|
b_ptr(s->req.buf, -http_uri_rewind(&s->txn->req)),
|
|
|
|
s->txn->req.sl.rq.u_l);
|
2011-03-01 19:35:49 +00:00
|
|
|
|
2015-04-03 21:46:31 +00:00
|
|
|
if (!srv && s->txn->meth == HTTP_METH_POST)
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = get_server_ph_post(s);
|
2009-10-03 10:36:05 +00:00
|
|
|
break;
|
2009-03-25 12:02:10 +00:00
|
|
|
|
2009-10-03 10:36:05 +00:00
|
|
|
case BE_LB_HASH_HDR:
|
|
|
|
/* Header Parameter hashing */
|
2015-04-03 21:46:31 +00:00
|
|
|
if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
|
2010-03-24 13:54:30 +00:00
|
|
|
break;
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = get_server_hh(s);
|
2009-10-03 10:36:05 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
case BE_LB_HASH_RDP:
|
|
|
|
/* RDP Cookie hashing */
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = get_server_rch(s);
|
2009-10-03 10:36:05 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
/* unknown balancing algorithm */
|
|
|
|
err = SRV_STATUS_INTERNAL;
|
|
|
|
goto out;
|
2009-06-30 15:56:00 +00:00
|
|
|
}
|
|
|
|
|
2009-10-03 10:36:05 +00:00
|
|
|
/* If the hashing parameter was not found, let's fall
|
|
|
|
* back to round robin on the map.
|
|
|
|
*/
|
2011-03-10 15:55:02 +00:00
|
|
|
if (!srv) {
|
[MEDIUM] backend: implement consistent hashing variation
2009-10-01 05:52:15 +00:00
|
|
|
if (s->be->lbprm.algo & BE_LB_LKUP_CHTREE)
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = chash_get_next_server(s->be, prev_srv);
|
[MEDIUM] backend: implement consistent hashing variation
2009-10-01 05:52:15 +00:00
|
|
|
else
|
2011-03-10 15:55:02 +00:00
|
|
|
srv = map_get_server_rr(s->be, prev_srv);
|
[MEDIUM] backend: implement consistent hashing variation
2009-10-01 05:52:15 +00:00
|
|
|
}
|
2009-10-03 10:36:05 +00:00
|
|
|
|
|
|
|
/* end of map-based LB */
|
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magics and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 13:04:11 +00:00
|
|
|
break;
|
2009-10-03 10:36:05 +00:00
|
|
|
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
default:
|
|
|
|
/* unknown balancing algorithm */
|
|
|
|
err = SRV_STATUS_INTERNAL;
|
|
|
|
goto out;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
|
2009-10-03 10:36:05 +00:00
|
|
|
|
2011-03-10 15:55:02 +00:00
|
|
|
if (!srv) {
|
2009-10-03 10:36:05 +00:00
|
|
|
err = SRV_STATUS_FULL;
|
|
|
|
goto out;
|
|
|
|
}
|
2011-03-10 15:55:02 +00:00
|
|
|
else if (srv != prev_srv) {
|
2011-03-10 22:25:56 +00:00
|
|
|
s->be->be_counters.cum_lbconn++;
|
2011-03-10 15:55:02 +00:00
|
|
|
srv->counters.cum_lbconn++;
|
2007-11-29 14:43:32 +00:00
|
|
|
}
|
2012-11-11 23:42:33 +00:00
|
|
|
s->target = &srv->obj_type;
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
}
|
2011-08-06 15:05:02 +00:00
|
|
|
else if (s->be->options & (PR_O_DISPATCH | PR_O_TRANSP)) {
|
2012-11-11 23:42:33 +00:00
|
|
|
s->target = &s->be->obj_type;
|
2011-03-10 10:38:29 +00:00
|
|
|
}
|
2011-03-10 21:26:24 +00:00
|
|
|
else if ((s->be->options & PR_O_HTTP_PROXY) &&
|
2014-11-28 13:42:25 +00:00
|
|
|
(conn = objt_conn(s->si[1].end)) &&
|
2013-10-01 08:45:07 +00:00
|
|
|
is_addr(&conn->addr.to)) {
|
2011-03-10 10:38:29 +00:00
|
|
|
/* in proxy mode, we need a valid destination address */
|
2012-11-11 23:42:33 +00:00
|
|
|
s->target = &s->be->obj_type;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
|
2011-03-10 10:38:29 +00:00
|
|
|
else {
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
err = SRV_STATUS_NOSRV;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2015-04-02 23:14:29 +00:00
|
|
|
s->flags |= SF_ASSIGNED;
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
err = SRV_STATUS_OK;
|
|
|
|
out:
|
|
|
|
|
|
|
|
/* Either we take back our connection slot, or we offer it to someone
|
|
|
|
* else if we don't need it anymore.
|
|
|
|
*/
|
|
|
|
if (conn_slot) {
|
2011-03-10 15:55:02 +00:00
|
|
|
if (conn_slot == srv) {
|
|
|
|
sess_change_server(s, srv);
|
[BUG] fix the dequeuing logic to ensure that all requests get served
2008-06-20 13:04:11 +00:00
|
|
|
} else {
|
|
|
|
if (may_dequeue_tasks(conn_slot, s->be))
|
|
|
|
process_srv_queue(conn_slot);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
out_err:
|
|
|
|
return err;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
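The hash branches above all follow the same pattern: compute a hash over some request property (source address, URI, URL parameter, header or RDP cookie) and map the result onto a server in proportion to the weights. The standalone sketch below shows only that reduce-modulo-total-weight idea with made-up server names and weights; it is a simplification, not the project's get_server_sh()/map_get_server_hash() code, which precomputes an actual map array.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct fake_srv {
	const char *name;
	int weight;
};

/* Reduce the hash modulo the total weight, then walk the weighted list:
 * each server owns <weight> slots out of <total>. */
static int pick_by_hash(const struct fake_srv *srv, int nbsrv, uint32_t h)
{
	int total = 0, i, slot;

	for (i = 0; i < nbsrv; i++)
		total += srv[i].weight;
	if (!total)
		return -1;

	slot = h % total;
	for (i = 0; i < nbsrv; i++) {
		if (slot < srv[i].weight)
			return i;
		slot -= srv[i].weight;
	}
	return -1; /* not reached */
}

/* Hash the raw bytes of an input (an IPv4 address, a URI, ...). */
static uint32_t hash_bytes(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint32_t h = 0;
	while (len--)
		h = h * 31 + *p++;
	return h;
}

int main(void)
{
	struct fake_srv farm[] = { { "s1", 1 }, { "s2", 2 }, { "s3", 1 } };
	unsigned char src_ip[4] = { 192, 168, 0, 42 };
	const char *uri = "/static/app.js";

	printf("source-ip hash -> %s\n",
	       farm[pick_by_hash(farm, 3, hash_bytes(src_ip, 4))].name);
	printf("uri hash       -> %s\n",
	       farm[pick_by_hash(farm, 3, hash_bytes(uri, strlen(uri)))].name);
	return 0;
}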
|
|
|
|
|
|
|
|
/*
|
2015-04-02 23:14:29 +00:00
|
|
|
* This function assigns a server address to a stream, and sets SF_ADDR_SET.
|
2006-06-26 00:48:02 +00:00
|
|
|
* The address is taken from the currently assigned server, or from the
|
|
|
|
* dispatch or transparent address.
|
|
|
|
*
|
|
|
|
* It may return :
|
|
|
|
* SRV_STATUS_OK if everything is OK.
|
|
|
|
* SRV_STATUS_INTERNAL for other unrecoverable errors.
|
|
|
|
*
|
2015-04-02 23:14:29 +00:00
|
|
|
* Upon successful return, the stream flag SF_ADDR_SET is set. This flag is
|
2006-06-26 00:48:02 +00:00
|
|
|
* not cleared, so it's up to the caller to clear it if required.
|
|
|
|
*
|
2013-10-11 17:34:20 +00:00
|
|
|
* The caller is responsible for having already assigned a connection
|
|
|
|
* to si->end.
|
2013-10-01 08:45:07 +00:00
|
|
|
*
|
2006-06-26 00:48:02 +00:00
|
|
|
*/
|
REORG/MAJOR: session: rename the "session" entity to "stream"
2015-04-02 22:22:06 +00:00
|
|
|
int assign_server_address(struct stream *s)
|
2006-06-26 00:48:02 +00:00
|
|
|
{
|
2015-04-04 00:10:38 +00:00
|
|
|
struct connection *cli_conn = objt_conn(strm_orig(s));
|
2014-11-28 13:42:25 +00:00
|
|
|
struct connection *srv_conn = objt_conn(s->si[1].end);
|
2013-10-01 08:45:07 +00:00
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
#ifdef DEBUG_FULL
|
|
|
|
fprintf(stderr,"assign_server_address : s=%p\n",s);
|
|
|
|
#endif
|
|
|
|
|
2015-04-02 23:14:29 +00:00
|
|
|
if ((s->flags & SF_DIRECT) || (s->be->lbprm.algo & BE_LB_KIND)) {
|
REORG/MAJOR: session: rename the "session" entity to "stream"
2015-04-02 22:22:06 +00:00
|
|
|
/* A server is necessarily known for this stream */
|
2015-04-02 23:14:29 +00:00
|
|
|
if (!(s->flags & SF_ASSIGNED))
|
2006-06-26 00:48:02 +00:00
|
|
|
return SRV_STATUS_INTERNAL;
|
|
|
|
|
2013-10-01 08:45:07 +00:00
|
|
|
srv_conn->addr.to = objt_server(s->target)->addr;
|
2006-06-26 00:48:02 +00:00
|
|
|
|
2013-10-01 08:45:07 +00:00
|
|
|
if (!is_addr(&srv_conn->addr.to) && cli_conn) {
|
2010-07-13 12:49:50 +00:00
|
|
|
/* if the server has no address, we use the same address
|
|
|
|
* the client asked for, which is handy for remapping ports
|
2014-05-09 20:56:10 +00:00
|
|
|
* locally on multiple addresses at once. Nothing is done
|
|
|
|
* for AF_UNIX addresses.
|
2010-07-13 12:49:50 +00:00
|
|
|
*/
|
2013-10-01 08:45:07 +00:00
|
|
|
conn_get_to_addr(cli_conn);
|
2010-07-13 12:49:50 +00:00
|
|
|
|
2013-10-01 08:45:07 +00:00
|
|
|
if (cli_conn->addr.to.ss_family == AF_INET) {
|
|
|
|
((struct sockaddr_in *)&srv_conn->addr.to)->sin_addr = ((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr;
|
|
|
|
} else if (cli_conn->addr.to.ss_family == AF_INET6) {
|
|
|
|
((struct sockaddr_in6 *)&srv_conn->addr.to)->sin6_addr = ((struct sockaddr_in6 *)&cli_conn->addr.to)->sin6_addr;
|
2010-10-22 14:36:33 +00:00
|
|
|
}
|
2010-07-13 12:49:50 +00:00
|
|
|
}
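/* Note: only the address bytes are borrowed from the client's destination;
 * the port copied above from the server's configured address is kept. E.g.
 * with a server declared as 0.0.0.0:80 and a client that connected to
 * 192.0.2.10:8080, the outgoing address becomes 192.0.2.10:80 (before any
 * port mapping below).
 */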
|
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
/* if this server remaps proxied ports, we'll use
|
|
|
|
* the port the client connected to with an offset. */
|
2014-05-13 13:54:22 +00:00
|
|
|
if ((objt_server(s->target)->flags & SRV_F_MAPPORTS) && cli_conn) {
|
2011-03-10 21:26:24 +00:00
|
|
|
int base_port;
|
|
|
|
|
2013-10-01 08:45:07 +00:00
|
|
|
conn_get_to_addr(cli_conn);
|
2011-03-10 21:26:24 +00:00
|
|
|
|
|
|
|
/* First, retrieve the port from the incoming connection */
|
2013-10-01 08:45:07 +00:00
|
|
|
base_port = get_host_port(&cli_conn->addr.to);
|
2011-03-10 21:26:24 +00:00
|
|
|
|
|
|
|
/* Second, assign the outgoing connection's port */
|
2013-10-01 08:45:07 +00:00
|
|
|
base_port += get_host_port(&srv_conn->addr.to);
|
|
|
|
set_host_port(&srv_conn->addr.to, base_port);
|
2006-06-26 00:48:02 +00:00
|
|
|
}
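/* Worked example of the mapping above: if the client connected to port 80
 * and the server's address field carries a relative port of 8000 (a server
 * flagged SRV_F_MAPPORTS), then base_port = 80 + 8000 and the outgoing
 * connection targets port 8080.
 */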
|
|
|
|
}
|
2011-08-06 15:05:02 +00:00
|
|
|
else if (s->be->options & PR_O_DISPATCH) {
|
2006-06-26 00:48:02 +00:00
|
|
|
/* connect to the defined dispatch addr */
|
2013-10-01 08:45:07 +00:00
|
|
|
srv_conn->addr.to = s->be->dispatch_addr;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
|
2013-10-01 08:45:07 +00:00
|
|
|
else if ((s->be->options & PR_O_TRANSP) && cli_conn) {
|
2006-06-26 00:48:02 +00:00
|
|
|
/* in transparent mode, use the original dest addr if no dispatch specified */
|
2013-10-01 08:45:07 +00:00
|
|
|
conn_get_to_addr(cli_conn);
|
2006-06-26 00:48:02 +00:00
|
|
|
|
2013-10-01 08:45:07 +00:00
|
|
|
if (cli_conn->addr.to.ss_family == AF_INET || cli_conn->addr.to.ss_family == AF_INET6)
|
|
|
|
srv_conn->addr.to = cli_conn->addr.to;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
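/* Unlike the address-only copy in the first branch, transparent mode reuses
 * the client's original destination verbatim (address and port), so the
 * backend connection goes exactly where the client was heading. Retrieving
 * that destination is delegated to conn_get_to_addr() above.
 */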
|
2007-11-29 14:43:32 +00:00
|
|
|
else if (s->be->options & PR_O_HTTP_PROXY) {
|
|
|
|
/* If the HTTP PROXY option is set, the server is already assigned
|
|
|
|
* during incoming client request parsing. */
|
|
|
|
}
|
2007-01-20 10:07:46 +00:00
|
|
|
else {
|
|
|
|
/* no server and no LB algorithm ! */
|
|
|
|
return SRV_STATUS_INTERNAL;
|
|
|
|
}
|
2006-06-26 00:48:02 +00:00
|
|
|
|
2014-11-17 14:11:45 +00:00
|
|
|
/* Copy network namespace from client connection */
|
2014-12-19 12:37:11 +00:00
|
|
|
srv_conn->proxy_netns = cli_conn ? cli_conn->proxy_netns : NULL;
|
2014-11-17 14:11:45 +00:00
|
|
|
|
2015-04-02 23:14:29 +00:00
|
|
|
s->flags |= SF_ADDR_SET;
|
2006-06-26 00:48:02 +00:00
|
|
|
return SRV_STATUS_OK;
|
|
|
|
}
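The two address manipulations above (borrowing only the address when the server has none, and adding a port offset for mapped ports) can be reproduced outside HAProxy with plain BSD socket types. The sketch below is illustrative only: the helper names are invented for the example, and HAProxy's own helpers (conn_get_to_addr(), get_host_port(), set_host_port()) are replaced by direct sockaddr manipulation.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

/* Copy only the address part of <from> into <to>, keeping <to>'s port
 * (what the AF_INET/AF_INET6 branch above does on srv_conn->addr.to).
 */
static void copy_addr_keep_port(struct sockaddr_storage *to,
                                const struct sockaddr_storage *from)
{
	if (from->ss_family == AF_INET)
		((struct sockaddr_in *)to)->sin_addr =
			((const struct sockaddr_in *)from)->sin_addr;
	else if (from->ss_family == AF_INET6)
		((struct sockaddr_in6 *)to)->sin6_addr =
			((const struct sockaddr_in6 *)from)->sin6_addr;
}

/* Add <delta> to the port already stored in <ss> (the MAPPORTS arithmetic). */
static void add_to_port(struct sockaddr_storage *ss, int delta)
{
	if (ss->ss_family == AF_INET) {
		struct sockaddr_in *in = (struct sockaddr_in *)ss;
		in->sin_port = htons(ntohs(in->sin_port) + delta);
	}
	else if (ss->ss_family == AF_INET6) {
		struct sockaddr_in6 *in6 = (struct sockaddr_in6 *)ss;
		in6->sin6_port = htons(ntohs(in6->sin6_port) + delta);
	}
}

int main(void)
{
	struct sockaddr_storage cli_to, srv_to;
	struct sockaddr_in *cli = (struct sockaddr_in *)&cli_to;
	struct sockaddr_in *srv = (struct sockaddr_in *)&srv_to;
	char buf[INET_ADDRSTRLEN];

	memset(&cli_to, 0, sizeof(cli_to));
	memset(&srv_to, 0, sizeof(srv_to));

	/* the client connected to 192.0.2.10:80 */
	cli->sin_family = AF_INET;
	inet_pton(AF_INET, "192.0.2.10", &cli->sin_addr);
	cli->sin_port = htons(80);

	/* the server entry has no usable address and carries port 8000 */
	srv->sin_family = AF_INET;
	srv->sin_port = htons(8000);

	copy_addr_keep_port(&srv_to, &cli_to);      /* -> 192.0.2.10:8000 */
	add_to_port(&srv_to, ntohs(cli->sin_port)); /* -> 192.0.2.10:8080 */

	inet_ntop(AF_INET, &srv->sin_addr, buf, sizeof(buf));
	printf("outgoing destination: %s:%u\n", buf,
	       (unsigned)ntohs(srv->sin_port));
	return 0;
}

Compiled and run, it prints "outgoing destination: 192.0.2.10:8080".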
|
|
|
|
|
2015-04-02 22:22:06 +00:00
|
|
|
/* This function assigns a server to stream <s> if required, and can add the
|
2006-06-26 00:48:02 +00:00
|
|
|
* connection either to the assigned server's queue or to the proxy's queue.
|
2015-04-02 22:22:06 +00:00
|
|
|
* If ->srv_conn is set, the stream is first released from the server.
|
2015-04-02 23:14:29 +00:00
|
|
|
* It may also be called with SF_DIRECT and/or SF_ASSIGNED though. It will
|
[BUG] fix the dequeuing logic to ensure that all requests get served
The dequeuing logic was completely wrong. First, a task was assigned
to all servers to process the queue, but this task was never scheduled
and was only woken up on session free. Second, there was no reservation
of server entries when a task was assigned a server. This means that
as long as the task was not connected to the server, its presence was
not accounted for. This was causing trouble when detecting whether or
not a server had reached maxconn. Third, during a redispatch, a session
could lose its place at the server's and get blocked because another
session at the same moment would have stolen the entry. Fourth, the
redispatch option did not work when maxqueue was reached for a server,
and it was not possible to do so without indefinitely hanging a session.
The root cause of all those problems was the lack of pre-reservation of
connections at the server's, and the lack of tracking of servers during
a redispatch. Everything relied on combinations of flags which could
appear similarly in quite distinct situations.
This patch is a major rework but there was no other solution, as the
internal logic was deeply flawed. The resulting code is cleaner, more
understandable, uses less magics and is overall more robust.
As an added bonus, "option redispatch" now works when maxqueue has
been reached on a server.
2008-06-20 13:04:11 +00:00
|
|
|
* be called before any connection and after any retry or redispatch occurs.
|
|
|
|
*
|
2015-04-02 22:22:06 +00:00
|
|
|
* It is not allowed to call this function with a stream in a queue.
|
2006-06-26 00:48:02 +00:00
|
|
|
*
|
|
|
|
* Returns :
|
|
|
|
*
|
|
|
|
* SRV_STATUS_OK if everything is OK.
|
2012-11-11 23:42:33 +00:00
|
|
|
* SRV_STATUS_NOSRV if no server is available. objt_server(s->target) = NULL.
|
2006-06-26 00:48:02 +00:00
|
|
|
* SRV_STATUS_QUEUED if the connection has been queued.
|
|
|
|
* SRV_STATUS_FULL if the server(s) is/are saturated and the
|
2011-03-10 15:55:02 +00:00
|
|
|
* connection could not be queued at the server's,
|
2008-06-20 13:04:11 +00:00
|
|
|
* which may be NULL if we queue on the backend.
|
2006-06-26 00:48:02 +00:00
|
|
|
* SRV_STATUS_INTERNAL for other unrecoverable errors.
|
|
|
|
*
|
|
|
|
*/
|
2015-04-02 22:22:06 +00:00
|
|
|
int assign_server_and_queue(struct stream *s)
|
2006-06-26 00:48:02 +00:00
|
|
|
{
|
|
|
|
struct pendconn *p;
|
2011-03-10 15:55:02 +00:00
|
|
|
struct server *srv;
|
2006-06-26 00:48:02 +00:00
|
|
|
int err;
|
|
|
|
|
|
|
|
if (s->pend_pos)
|
|
|
|
return SRV_STATUS_INTERNAL;
|
|
|
|
|
2008-06-20 13:04:11 +00:00
|
|
|
err = SRV_STATUS_OK;
|
2015-04-02 23:14:29 +00:00
|
|
|
if (!(s->flags & SF_ASSIGNED)) {
|
2012-11-11 23:42:33 +00:00
|
|
|
struct server *prev_srv = objt_server(s->target);
|
2011-03-10 10:42:13 +00:00
|
|
|
|
2008-06-20 13:04:11 +00:00
|
|
|
err = assign_server(s);
|
2011-03-10 10:42:13 +00:00
|
|
|
if (prev_srv) {
|
2015-04-02 22:22:06 +00:00
|
|
|
/* This stream was previously assigned to a server. We have to
|
|
|
|
* update the stream's and the server's stats :
|
2008-06-20 13:04:11 +00:00
|
|
|
* - if the server changed :
|
|
|
|
* - set TX_CK_DOWN if txn.flags was TX_CK_VALID
|
2015-04-02 23:14:29 +00:00
|
|
|
* - set SF_REDISP if it was successfully redispatched
|
2008-06-20 13:04:11 +00:00
|
|
|
* - increment srv->redispatches and be->redispatches
|
|
|
|
* - if the server remained the same : update retries.
|
[MEDIUM]: Prevent redispatcher from selecting the same server, version #3
When haproxy decides that a session needs to be redispatched, it chooses a server,
but there is no guarantee that it will be a different one. So it often
happens that the selected server is exactly the same as the previous one, so
a client ends up with a 503 error anyway, especially when one server has
a much bigger weight than the others.
Changes from the previous version:
- drop stupid and unnecessary SN_DIRECT changes
- assign_server(): use srvtoavoid to keep the old server and clear s->srv
so SRV_STATUS_NOSRV guarantees that t->srv == NULL (again)
and get_server_rr_with_conns has chances to work (previously
we were passing a NULL here)
- srv_redispatch_connect(): remove t->srv->cum_sess and t->srv->failed_conns
incrementing as t->srv was guaranteed to be NULL
- add avoididx to get_server_rr_with_conns. I hope I correctly understand this code.
- fix http_flush_cookie_flags() and move it to assign_server_and_queue()
directly. The code here was supposed to set CK_DOWN and clear CK_VALID,
but: (TX_CK_VALID | TX_CK_DOWN) == TX_CK_VALID == TX_CK_MASK so:
if ((txn->flags & TX_CK_MASK) == TX_CK_VALID)
txn->flags ^= (TX_CK_VALID | TX_CK_DOWN);
was really a:
if ((txn->flags & TX_CK_MASK) == TX_CK_VALID)
txn->flags &= TX_CK_VALID
Now haproxy logs "--DI" after redispatching connection.
- defer srv->redispatches++ and s->be->redispatches++ so they
are called only if a connection was redispatched, not merely
supposed to be.
- don't increment lbconn if the redispatcher selected the same server
- don't count unsuccessfully redispatched connections as redispatched
connections
- don't count redispatched connections as errors, so:
- the number of connections effectively served by a server is:
srv->cum_sess - srv->failed_conns - srv->retries - srv->redispatches
and
SUM(servers->failed_conns) == be->failed_conns
- requires the "Don't increment server connections too much + fix retries" patch
- needs a little more testing and probably some discussion, so reverting to the RFC state
Tests #1:
retries 4
redispatch
i) 1 server(s): b (wght=1, down)
b) sessions=5, lbtot=1, err_conn=1, retr=4, redis=0
-> request failed
ii) server(s): b (wght=1, down), u (wght=1, down)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=1, retr=0, redis=0
-> request FAILED
iii) 2 server(s): b (wght=1, down), u (wght=1, up)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=0, retr=0, redis=0
-> request OK
iv) 2 server(s): b (wght=100, down), u (wght=1, up)
b) sessions=4, lbtot=1, err_conn=0, retr=3, redis=1
u) sessions=1, lbtot=1, err_conn=0, retr=0, redis=0
-> request OK
v) 1 server(s): b (down for first 4 SYNS)
b) sessions=5, lbtot=1, err_conn=0, retr=4, redis=0
-> request OK
Tests #2:
retries 4
i) 1 server(s): b (down)
b) sessions=5, lbtot=1, err_conn=1, retr=4, redis=0
-> request FAILED
2008-02-22 02:50:19 +00:00
|
|
|
*/
|
|
|
|
|
2012-11-11 23:42:33 +00:00
|
|
|
if (prev_srv != objt_server(s->target)) {
|
2015-04-03 21:46:31 +00:00
|
|
|
if (s->txn && (s->txn->flags & TX_CK_MASK) == TX_CK_VALID) {
|
|
|
|
s->txn->flags &= ~TX_CK_MASK;
|
|
|
|
s->txn->flags |= TX_CK_DOWN;
|
2008-06-20 13:04:11 +00:00
|
|
|
}
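/* The clear-then-set sequence above is the safe way to change a multi-bit
 * state field: the TX_CK_* values share the TX_CK_MASK field, so the old
 * state must be wiped before the new one is ORed in. Toggling with XOR only
 * works for one specific previous value; e.g. with hypothetical values
 * VALID=0x1, DOWN=0x2, MASK=0x3, "flags ^= (VALID|DOWN)" turns VALID into
 * DOWN but also turns DOWN back into VALID, while "flags &= ~MASK;
 * flags |= DOWN" always lands on DOWN (see the redispatch commit message
 * quoted above for the historical pitfall).
 */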
|
2015-04-02 23:14:29 +00:00
|
|
|
s->flags |= SF_REDISP;
|
2011-03-10 10:42:13 +00:00
|
|
|
prev_srv->counters.redispatches++;
|
2011-03-10 22:25:56 +00:00
|
|
|
s->be->be_counters.redispatches++;
|
2008-06-20 13:04:11 +00:00
|
|
|
} else {
|
2011-03-10 10:42:13 +00:00
|
|
|
prev_srv->counters.retries++;
|
2011-03-10 22:25:56 +00:00
|
|
|
s->be->be_counters.retries++;
|
2008-02-22 02:50:19 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
switch (err) {
|
|
|
|
case SRV_STATUS_OK:
|
2015-04-02 23:14:29 +00:00
|
|
|
/* we have SF_ASSIGNED set */
|
2012-11-11 23:42:33 +00:00
|
|
|
srv = objt_server(s->target);
|
2011-03-10 15:55:02 +00:00
|
|
|
if (!srv)
|
2008-06-20 13:04:11 +00:00
|
|
|
return SRV_STATUS_OK; /* dispatch or proxy mode */
|
|
|
|
|
|
|
|
/* If we already have a connection slot, no need to check any queue */
|
2011-03-10 15:55:02 +00:00
|
|
|
if (s->srv_conn == srv)
|
2008-06-20 13:04:11 +00:00
|
|
|
return SRV_STATUS_OK;
|
|
|
|
|
2015-04-02 22:22:06 +00:00
|
|
|
/* OK, this stream already has an assigned server, but no
|
2008-06-20 13:04:11 +00:00
|
|
|
* connection slot yet. Either it is a redispatch, or it was
|
|
|
|
* assigned from persistence information (direct mode).
|
|
|
|
*/
|
2015-04-02 23:14:29 +00:00
|
|
|
if ((s->flags & SF_REDIRECTABLE) && srv->rdr_len) {
|
2008-06-20 13:04:11 +00:00
|
|
|
/* server scheduled for redirection, and already assigned. We
|
|
|
|
* don't want to go further nor check the queue.
|
2008-02-14 19:25:24 +00:00
|
|
|
*/
|
2011-03-10 15:55:02 +00:00
|
|
|
sess_change_server(s, srv); /* not really needed in fact */
|
2008-02-14 19:25:24 +00:00
|
|
|
return SRV_STATUS_OK;
|
|
|
|
}
|
|
|
|
|
2015-04-02 22:22:06 +00:00
|
|
|
/* We might have to queue this stream if the assigned server is full.
|
2008-06-20 13:04:11 +00:00
|
|
|
* We know we have to queue it into the server's queue, so if a maxqueue
|
|
|
|
* is set on the server, we must also check that the server's queue is
|
|
|
|
* not full, in which case we have to return FULL.
|
|
|
|
*/
|
2011-03-10 15:55:02 +00:00
|
|
|
if (srv->maxconn &&
|
|
|
|
(srv->nbpend || srv->served >= srv_dynamic_maxconn(srv))) {
|
2008-06-20 13:04:11 +00:00
|
|
|
|
2011-03-10 15:55:02 +00:00
|
|
|
if (srv->maxqueue > 0 && srv->nbpend >= srv->maxqueue)
|
2008-06-20 13:04:11 +00:00
|
|
|
return SRV_STATUS_FULL;
|
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
p = pendconn_add(s);
|
|
|
|
if (p)
|
|
|
|
return SRV_STATUS_QUEUED;
|
|
|
|
else
|
2008-06-20 13:04:11 +00:00
|
|
|
return SRV_STATUS_INTERNAL;
|
2006-06-26 00:48:02 +00:00
|
|
|
}
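/* Worked example of the check above, taking srv_dynamic_maxconn() as equal
 * to maxconn for simplicity: with maxconn=100, maxqueue=5, served=100 and
 * nbpend=3, the stream is queued on the server (SRV_STATUS_QUEUED); with
 * nbpend already at 5, SRV_STATUS_FULL is returned instead; with served=42
 * and nbpend=0 the test is skipped and the slot is reserved below with
 * sess_change_server().
 */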
|
2008-06-20 13:04:11 +00:00
|
|
|
|
|
|
|
/* OK, we can use this server. Let's reserve our place */
|
2011-03-10 15:55:02 +00:00
|
|
|
sess_change_server(s, srv);
|
2006-06-26 00:48:02 +00:00
|
|
|
return SRV_STATUS_OK;
|
|
|
|
|
|
|
|
case SRV_STATUS_FULL:
|
2015-04-02 22:22:06 +00:00
|
|
|
/* queue this stream into the proxy's queue */
|
2006-06-26 00:48:02 +00:00
|
|
|
p = pendconn_add(s);
|
|
|
|
if (p)
|
|
|
|
return SRV_STATUS_QUEUED;
|
|
|
|
else
|
2008-06-20 13:04:11 +00:00
|
|
|
return SRV_STATUS_INTERNAL;
|
2006-06-26 00:48:02 +00:00
|
|
|
|
|
|
|
case SRV_STATUS_NOSRV:
|
2008-06-20 13:04:11 +00:00
|
|
|
return err;
|
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
case SRV_STATUS_INTERNAL:
|
|
|
|
return err;
|
2008-06-20 13:04:11 +00:00
|
|
|
|
2006-06-26 00:48:02 +00:00
|
|
|
default:
|
|
|
|
return SRV_STATUS_INTERNAL;
|
|
|
|
}
|
|
|
|
}
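The server-side admission logic in the SRV_STATUS_OK branch above can be condensed into a tiny standalone model. The sketch below is illustrative only: the struct, enum and function names are invented for the example, and srv_dynamic_maxconn() is simplified to a plain comparison against maxconn.

#include <stdio.h>

/* Simplified model of the queueing decision above (illustrative names only,
 * not HAProxy's).
 */
enum admit { ADMIT_CONNECT, ADMIT_QUEUE_SRV, ADMIT_FULL };

struct fake_srv {
	int maxconn;   /* 0 = unlimited */
	int maxqueue;  /* 0 = unlimited queue */
	int served;    /* connections currently in use */
	int nbpend;    /* streams already queued on this server */
};

static enum admit admit_stream(const struct fake_srv *srv)
{
	if (!srv->maxconn || (!srv->nbpend && srv->served < srv->maxconn))
		return ADMIT_CONNECT;     /* free slot: reserve it and connect */
	if (srv->maxqueue > 0 && srv->nbpend >= srv->maxqueue)
		return ADMIT_FULL;        /* per-server queue is saturated */
	return ADMIT_QUEUE_SRV;           /* wait in the server's queue */
}

int main(void)
{
	const struct fake_srv tests[] = {
		{ .maxconn = 100, .maxqueue = 5, .served = 42,  .nbpend = 0 },
		{ .maxconn = 100, .maxqueue = 5, .served = 100, .nbpend = 3 },
		{ .maxconn = 100, .maxqueue = 5, .served = 100, .nbpend = 5 },
	};
	const char *verdict[] = { "connect", "queue on server", "full" };
	unsigned i;

	for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		printf("case %u -> %s\n", i, verdict[admit_stream(&tests[i])]);
	return 0;
}

Running it prints "connect", "queue on server" and "full" for the three cases, matching the three code paths above.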
|
|
|
|
|
2010-03-29 17:36:59 +00:00
|
|
|
/* If an explicit source binding is specified on the server and/or backend, and
|
|
|
|
* this source makes use of the transparent proxy, then it is extracted now and
|
2015-04-02 22:22:06 +00:00
|
|
|
* assigned to the stream's pending connection. This function assumes that an
|
2014-11-28 13:42:25 +00:00
|
|
|
* outgoing connection has already been assigned to s->si[1].end.
|
2010-03-29 17:36:59 +00:00
|
|
|
*/
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
static void assign_tproxy_address(struct stream *s)
|
2010-03-29 17:36:59 +00:00
|
|
|
{
|
2015-08-20 17:35:14 +00:00
|
|
|
#if defined(CONFIG_HAP_TRANSPARENT)
|
2012-11-11 23:42:33 +00:00
|
|
|
struct server *srv = objt_server(s->target);
|
2012-12-08 22:13:33 +00:00
|
|
|
struct conn_src *src;
|
2013-10-01 08:45:07 +00:00
|
|
|
struct connection *cli_conn;
|
2014-11-28 13:42:25 +00:00
|
|
|
struct connection *srv_conn = objt_conn(s->si[1].end);
|
2011-03-10 15:55:02 +00:00
|
|
|
|
2012-12-08 22:13:33 +00:00
|
|
|
if (srv && srv->conn_src.opts & CO_SRC_BIND)
|
|
|
|
src = &srv->conn_src;
|
|
|
|
else if (s->be->conn_src.opts & CO_SRC_BIND)
|
|
|
|
src = &s->be->conn_src;
|
|
|
|
else
|
|
|
|
return;
|
|
|
|
|
|
|
|
switch (src->opts & CO_SRC_TPROXY_MASK) {
|
|
|
|
case CO_SRC_TPROXY_ADDR:
|
2013-10-01 08:45:07 +00:00
|
|
|
srv_conn->addr.from = src->tproxy_addr;
|
2012-12-08 22:13:33 +00:00
|
|
|
break;
|
|
|
|
case CO_SRC_TPROXY_CLI:
|
|
|
|
case CO_SRC_TPROXY_CIP:
|
|
|
|
/* FIXME: what can we do if the client connects in IPv6 or unix socket ? */
|
2015-04-04 00:10:38 +00:00
|
|
|
cli_conn = objt_conn(strm_orig(s));
|
2013-10-01 08:45:07 +00:00
|
|
|
if (cli_conn)
|
|
|
|
srv_conn->addr.from = cli_conn->addr.from;
|
|
|
|
else
|
|
|
|
memset(&srv_conn->addr.from, 0, sizeof(srv_conn->addr.from));
|
2012-12-08 22:13:33 +00:00
|
|
|
break;
|
|
|
|
case CO_SRC_TPROXY_DYN:
|
2015-04-03 21:46:31 +00:00
|
|
|
if (src->bind_hdr_occ && s->txn) {
|
2012-12-08 22:13:33 +00:00
|
|
|
char *vptr;
|
|
|
|
int vlen;
|
|
|
|
int rewind;
|
|
|
|
|
|
|
|
/* bind to the IP in a header */
|
2013-10-01 08:45:07 +00:00
|
|
|
((struct sockaddr_in *)&srv_conn->addr.from)->sin_family = AF_INET;
|
|
|
|
((struct sockaddr_in *)&srv_conn->addr.from)->sin_port = 0;
|
|
|
|
((struct sockaddr_in *)&srv_conn->addr.from)->sin_addr.s_addr = 0;
|
2012-12-08 22:13:33 +00:00
|
|
|
|
2015-04-03 21:46:31 +00:00
|
|
|
b_rew(s->req.buf, rewind = http_hdr_rewind(&s->txn->req));
|
|
|
|
if (http_get_hdr(&s->txn->req, src->bind_hdr_name, src->bind_hdr_len,
|
|
|
|
&s->txn->hdr_idx, src->bind_hdr_occ, NULL, &vptr, &vlen)) {
|
2013-10-01 08:45:07 +00:00
|
|
|
((struct sockaddr_in *)&srv_conn->addr.from)->sin_addr.s_addr =
|
2012-12-08 22:13:33 +00:00
|
|
|
htonl(inetaddr_host_lim(vptr, vptr + vlen));
|
2009-09-07 09:51:47 +00:00
|
|
|
}
|
2014-11-27 19:45:39 +00:00
|
|
|
b_adv(s->req.buf, rewind);
|
2010-03-29 17:36:59 +00:00
|
|
|
}
|
2012-12-08 22:13:33 +00:00
|
|
|
break;
|
|
|
|
default:
|
2013-10-01 08:45:07 +00:00
|
|
|
memset(&srv_conn->addr.from, 0, sizeof(srv_conn->addr.from));
|
2010-03-29 17:36:59 +00:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
|
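/* Illustrative note (assumed configuration, not taken from this file): the
 * CO_SRC_TPROXY_* cases above roughly correspond to the "usesrc" argument of
 * the "source" keyword, which may appear on the backend or on a server line,
 * e.g. (example addresses and header name only) :
 *
 *   source 0.0.0.0 usesrc 192.168.0.10          # CO_SRC_TPROXY_ADDR
 *   source 0.0.0.0 usesrc client                # CO_SRC_TPROXY_CLI
 *   source 0.0.0.0 usesrc clientip              # CO_SRC_TPROXY_CIP
 *   source 0.0.0.0 usesrc hdr_ip(x-fwd-for,-1)  # CO_SRC_TPROXY_DYN
 */
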
/*
 * This function initiates a connection to the server assigned to this stream
 * (s->target, s->si[1].addr.to). It will assign a server if none
 * is assigned yet.
 * It can return one of :
 *  - SF_ERR_NONE if everything's OK
 *  - SF_ERR_SRVTO if there are no more servers
 *  - SF_ERR_SRVCL if the connection was refused by the server
 *  - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
 *  - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
 *  - SF_ERR_INTERNAL for any other purely internal errors
 * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
 * The server-facing stream interface is expected to hold a pre-allocated connection
 * in s->si[1].conn.
 */
int connect_server(struct stream *s)
{
	struct connection *cli_conn;
	struct connection *srv_conn;
	struct connection *old_conn;
	struct server *srv;
	int reuse = 0;
	int err;

	srv = objt_server(s->target);
	srv_conn = objt_conn(s->si[1].end);
	if (srv_conn)
		reuse = s->target == srv_conn->target;

	if (srv && !reuse) {
		old_conn = srv_conn;
		if (old_conn) {
			srv_conn = NULL;
			old_conn->owner = NULL;
			si_detach_endpoint(&s->si[1]);
			/* note: if the connection was in a server's idle
			 * queue, it doesn't get dequeued.
			 */
		}

		/* Below we pick connections from the safe or idle lists based
		 * on the strategy, the fact that this is a first or second
		 * (retryable) request, with the indicated priority (1 or 2) :
		 *
		 *          SAFE                  AGGR                  ALWS
		 *
		 *      +-----+-----+        +-----+-----+        +-----+-----+
		 *   req| 1st | 2nd |     req| 1st | 2nd |     req| 1st | 2nd |
		 *  ----+-----+-----+    ----+-----+-----+    ----+-----+-----+
		 *  safe|  -  |  2  |    safe|  1  |  2  |    safe|  1  |  2  |
		 *  ----+-----+-----+    ----+-----+-----+    ----+-----+-----+
		 *  idle|  -  |  1  |    idle|  -  |  1  |    idle|  2  |  1  |
		 *  ----+-----+-----+    ----+-----+-----+    ----+-----+-----+
		 */
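
		/* Illustrative note (assumed configuration, not taken from this
		 * file): the three strategies in the table above are selected
		 * by the "http-reuse" backend keyword, e.g. :
		 *
		 *   backend app                 # example backend name
		 *       http-reuse safe         # or: never | aggressive | always
		 *
		 * With "never" (PR_O_REUSE_NEVR), connections are kept private
		 * to their stream and the sharing logic below is not used.
		 */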
		if (!LIST_ISEMPTY(&srv->idle_conns) &&
		    ((s->be->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR &&
		     s->txn && (s->txn->flags & TX_NOT_FIRST))) {
			srv_conn = LIST_ELEM(srv->idle_conns.n, struct connection *, list);
		}
		else if (!LIST_ISEMPTY(&srv->safe_conns) &&
			 ((s->txn && (s->txn->flags & TX_NOT_FIRST)) ||
			  (s->be->options & PR_O_REUSE_MASK) >= PR_O_REUSE_AGGR)) {
			srv_conn = LIST_ELEM(srv->safe_conns.n, struct connection *, list);
		}
		else if (!LIST_ISEMPTY(&srv->idle_conns) &&
			 (s->be->options & PR_O_REUSE_MASK) == PR_O_REUSE_ALWS) {
			srv_conn = LIST_ELEM(srv->idle_conns.n, struct connection *, list);
		}

		/* If we've picked a connection from the pool, we now have to
		 * detach it. We may have to get rid of the previous idle
		 * connection we had, so for this we try to swap it with the
		 * other owner's. That way it may remain alive for others to
		 * pick.
		 */
		if (srv_conn) {
			LIST_DEL(&srv_conn->list);
			LIST_INIT(&srv_conn->list);

			if (srv_conn->owner) {
				si_detach_endpoint(srv_conn->owner);
				if (old_conn && !(old_conn->flags & CO_FL_PRIVATE)) {
					si_attach_conn(srv_conn->owner, old_conn);
					si_idle_conn(srv_conn->owner, NULL);
				}
			}
			si_attach_conn(&s->si[1], srv_conn);
			reuse = 1;
		}

		/* we may have to release our connection if we couldn't swap it */
		if (old_conn && !old_conn->owner) {
			LIST_DEL(&old_conn->list);
			conn_force_close(old_conn);
			conn_free(old_conn);
		}
	}

	if (reuse) {
		/* Disable connection reuse if a dynamic source is used.
		 * As long as we don't share connections between servers,
		 * we don't need to disable connection reuse on non-idempotent
		 * requests nor when the PROXY protocol is used.
		 */
		if (srv && srv->conn_src.opts & CO_SRC_BIND) {
			if ((srv->conn_src.opts & CO_SRC_TPROXY_MASK) == CO_SRC_TPROXY_DYN)
				reuse = 0;
		}
		else if (s->be->conn_src.opts & CO_SRC_BIND) {
			if ((s->be->conn_src.opts & CO_SRC_TPROXY_MASK) == CO_SRC_TPROXY_DYN)
				reuse = 0;
		}
	}

	if (!reuse)
		srv_conn = si_alloc_conn(&s->si[1]);
	else {
		/* reusing our connection, take it out of the idle list */
		LIST_DEL(&srv_conn->list);
		LIST_INIT(&srv_conn->list);
	}

	if (!srv_conn)
		return SF_ERR_RESOURCE;

	if (!(s->flags & SF_ADDR_SET)) {
		err = assign_server_address(s);
		if (err != SRV_STATUS_OK)
			return SF_ERR_INTERNAL;
	}

	if (!conn_xprt_ready(srv_conn)) {
		/* the target was only on the stream, assign it to the SI now */
		srv_conn->target = s->target;

		/* set the correct protocol on the output stream interface */
		if (srv) {
			conn_prepare(srv_conn, protocol_by_family(srv_conn->addr.to.ss_family), srv->xprt);
		}
		else if (obj_type(s->target) == OBJ_TYPE_PROXY) {
			/* proxies exclusively run on raw_sock right now */
			conn_prepare(srv_conn, protocol_by_family(srv_conn->addr.to.ss_family), &raw_sock);
			if (!objt_conn(s->si[1].end) || !objt_conn(s->si[1].end)->ctrl)
				return SF_ERR_INTERNAL;
		}
		else
			return SF_ERR_INTERNAL;  /* how did we get there ? */

		/* process the case where the server requires the PROXY protocol to be sent */
		srv_conn->send_proxy_ofs = 0;
		if (srv && srv->pp_opts) {
			srv_conn->flags |= CO_FL_PRIVATE;
			srv_conn->send_proxy_ofs = 1; /* must compute size */
			cli_conn = objt_conn(strm_orig(s));
			if (cli_conn)
				conn_get_to_addr(cli_conn);
		}
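
		/* Illustrative note (assumed configuration, not taken from this
		 * file): srv->pp_opts is set when the server line enables the
		 * PROXY protocol, e.g. (example address and name only) :
		 *
		 *   server app1 192.168.1.10:80 send-proxy       # version 1
		 *   server app1 192.168.1.10:80 send-proxy-v2    # binary version 2
		 *
		 * Such a connection carries client-specific data, so it is
		 * flagged CO_FL_PRIVATE and excluded from the swapping logic
		 * above.
		 */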

		si_attach_conn(&s->si[1], srv_conn);

		assign_tproxy_address(s);
	}
	else {
		/* the connection is being reused, just re-attach it */
		si_attach_conn(&s->si[1], srv_conn);
		s->flags |= SF_SRV_REUSED;
	}

	/* flag for logging source ip/port */
	if (strm_fe(s)->options2 & PR_O2_SRC_ADDR)
		s->si[1].flags |= SI_FL_SRC_ADDR;

	/* disable lingering */
	if (s->be->options & PR_O_TCP_NOLING)
		s->si[1].flags |= SI_FL_NOLINGER;

	err = si_connect(&s->si[1]);

	if (err != SF_ERR_NONE)
		return err;

	/* set connect timeout */
	s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.connect);

	if (srv) {
		s->flags |= SF_CURR_SESS;
		srv->cur_sess++;
		if (srv->cur_sess > srv->counters.cur_sess_max)
			srv->counters.cur_sess_max = srv->cur_sess;
		if (s->be->lbprm.server_take_conn)
			s->be->lbprm.server_take_conn(srv);

#ifdef USE_OPENSSL
		if (srv->ssl_ctx.sni) {
			struct sample *smp;
			int rewind;

			/* Tricky case : we have already scheduled the pending
			 * HTTP request or TCP data for leaving. So in HTTP we
			 * rewind exactly the headers, otherwise we rewind the
			 * output data.
			 */
			rewind = s->txn ? http_hdr_rewind(&s->txn->req) : s->req.buf->o;
			b_rew(s->req.buf, rewind);

			smp = sample_fetch_as_type(s->be, s->sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL, srv->ssl_ctx.sni, SMP_T_STR);

			/* restore the pointers */
			b_adv(s->req.buf, rewind);

			if (smp_make_safe(smp)) {
				ssl_sock_set_servername(srv_conn, smp->data.u.str.str);
				srv_conn->flags |= CO_FL_PRIVATE;
			}
		}
#endif /* USE_OPENSSL */
	}

	return SF_ERR_NONE;  /* connection is OK */
}

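/* Illustrative note (assumed configuration, not taken from this file):
 * srv->ssl_ctx.sni holds the sample expression given to the "sni" server
 * keyword, e.g. (example names only) :
 *
 *   server app1 192.168.1.10:443 ssl sni req.hdr(host)
 *
 * Because the SNI value is specific to one stream, the resulting connection is
 * also marked CO_FL_PRIVATE above so it will not be handed over to another
 * stream.
 */
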
/* This function performs the "redispatch" part of a connection attempt. It
 * will assign a server if required, queue the connection if required, and
 * handle errors that might arise at this level. It can change the server
 * state. It will return 1 if it encounters an error, switches the server
 * state, or has to queue a connection. Otherwise, it will return 0 indicating
 * that the connection is ready to use.
 */
int srv_redispatch_connect(struct stream *s)
{
	struct server *srv;
	int conn_err;

	/* We know that we don't have any connection pending, so we will
	 * try to get a new one, and wait in this state if it's queued
	 */
 redispatch:
	conn_err = assign_server_and_queue(s);
	srv = objt_server(s->target);

	switch (conn_err) {
	case SRV_STATUS_OK:
		break;

	case SRV_STATUS_FULL:
		/* The server has reached its maxqueue limit. Either PR_O_REDISP is set
		 * and we can redispatch to another server, or it is not and we return
		 * 503. This only makes sense in DIRECT mode however, because normal LB
		 * algorithms would never select such a server, and hash algorithms
		 * would bring us on the same server again. Note that s->target is set
		 * in this case.
		 */
		if (((s->flags & (SF_DIRECT|SF_FORCE_PRST)) == SF_DIRECT) &&
		    (s->be->options & PR_O_REDISP)) {
			s->flags &= ~(SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET);
			goto redispatch;
		}

		if (!s->si[1].err_type) {
			s->si[1].err_type = SI_ET_QUEUE_ERR;
		}

		srv->counters.failed_conns++;
		s->be->be_counters.failed_conns++;
		return 1;

	case SRV_STATUS_NOSRV:
		/* note: it is guaranteed that srv == NULL here */
		if (!s->si[1].err_type) {
			s->si[1].err_type = SI_ET_CONN_ERR;
		}

		s->be->be_counters.failed_conns++;
		return 1;

	case SRV_STATUS_QUEUED:
		s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.queue);
		s->si[1].state = SI_ST_QUE;
		/* do nothing else and do not wake any other stream up */
		return 1;

	case SRV_STATUS_INTERNAL:
	default:
		if (!s->si[1].err_type) {
			s->si[1].err_type = SI_ET_CONN_OTHER;
		}

		if (srv)
			srv_inc_sess_ctr(srv);
		if (srv)
			srv_set_sess_last(srv);
		if (srv)
			srv->counters.failed_conns++;
		s->be->be_counters.failed_conns++;

		/* release other streams waiting for this server */
		if (may_dequeue_tasks(srv, s->be))
			process_srv_queue(srv);
		return 1;
	}
	/* if we get here, it's because we got SRV_STATUS_OK, which also
	 * means that the connection has not been queued.
	 */
	return 0;
}

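/* Illustrative note (assumed configuration, not taken from this file): the
 * SRV_STATUS_FULL branch above is exercised by a setup such as (example names
 * and values only) :
 *
 *   backend app
 *       option redispatch                    # sets PR_O_REDISP
 *       cookie SRV insert indirect           # persistent requests become "direct"
 *       server app1 192.168.1.10:80 cookie a maxconn 100 maxqueue 50
 *
 * When a persistent request targets a server whose queue is already full, it
 * is either redispatched to another server (with "option redispatch") or the
 * stream ends up with a 503, as the comment in the code explains.
 */
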
/* sends a log message when a backend goes down, and also sets last
 * change date.
 */
void set_backend_down(struct proxy *be)
{
	be->last_change = now.tv_sec;
	be->down_trans++;

	if (!(global.mode & MODE_STARTING)) {
		Alert("%s '%s' has no server available!\n", proxy_type_str(be), be->id);
		send_log(be, LOG_EMERG, "%s %s has no server available!\n", proxy_type_str(be), be->id);
	}
}

REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
/* Apply RDP cookie persistence to the current stream. For this, the function
|
2010-05-24 18:27:29 +00:00
|
|
|
* tries to extract an RDP cookie from the request buffer, and look for the
|
|
|
|
* matching server in the list. If the server is found, it is assigned to the
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
* stream. This always returns 1, and the analyser removes itself from the
|
2010-05-24 18:27:29 +00:00
|
|
|
* list. Nothing is performed if a server was already assigned.
|
|
|
|
*/
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
|
|
|
int tcp_persist_rdp_cookie(struct stream *s, struct channel *req, int an_bit)
|
2010-05-24 18:27:29 +00:00
|
|
|
{
|
|
|
|
struct proxy *px = s->be;
|
|
|
|
int ret;
|
2012-04-23 14:16:37 +00:00
|
|
|
struct sample smp;
|
2010-05-24 18:27:29 +00:00
|
|
|
struct server *srv = px->srv;
|
|
|
|
struct sockaddr_in addr;
|
|
|
|
char *p;
|
|
|
|
|
REORG/MAJOR: session: rename the "session" entity to "stream"
With HTTP/2, we'll have to support multiplexed streams. A stream is in
fact the largest part of what we currently call a session, it has buffers,
logs, etc.
In order to catch any error, this commit removes any reference to the
struct session and tries to rename most "session" occurrences in function
names to "stream" and "sess" to "strm" when that's related to a session.
The files stream.{c,h} were added and session.{c,h} removed.
The session will be reintroduced later and a few parts of the stream
will progressively be moved overthere. It will more or less contain
only what we need in an embryonic session.
Sample fetch functions and converters will have to change a bit so
that they'll use an L5 (session) instead of what's currently called
"L4" which is in fact L6 for now.
Once all changes are completed, we should see approximately this :
L7 - http_txn
L6 - stream
L5 - session
L4 - connection | applet
There will be at most one http_txn per stream, and a same session will
possibly be referenced by multiple streams. A connection will point to
a session and to a stream. The session will hold all the information
we need to keep even when we don't yet have a stream.
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.
Stream flags "SN_*" still need to be renamed, it doesn't seem like
any of them will need to move to the session.
2015-04-02 22:22:06 +00:00
        DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
                now_ms, __FUNCTION__,
                s,
                req,
                req->rex, req->wex,
                req->flags,
                req->buf->i,
                req->analysers);

        if (s->flags & SF_ASSIGNED)
                goto no_cookie;

        memset(&smp, 0, sizeof(smp));

        ret = fetch_rdp_cookie_name(s, &smp, s->be->rdp_cookie_name, s->be->rdp_cookie_len);
        if (ret == 0 || (smp.flags & SMP_F_MAY_CHANGE) || smp.data.u.str.len == 0)
                goto no_cookie;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;

        /* The cookie value returned by the ACL-style fetch ends with <cr><lf>,
         * so strtoul() naturally stops parsing there.
         */
        addr.sin_addr.s_addr = strtoul(smp.data.u.str.str, &p, 10);
        if (*p != '.')
                goto no_cookie;
        p++;
        addr.sin_port = (unsigned short)strtoul(p, &p, 10);
        if (*p != '.')
                goto no_cookie;

        s->target = NULL;
        while (srv) {
                if (srv->addr.ss_family == AF_INET &&
                    memcmp(&addr, &(srv->addr), sizeof(addr)) == 0) {
                        if ((srv->state != SRV_ST_STOPPED) || (px->options & PR_O_PERSIST)) {
                                /* we found the server and it is usable */
                                s->flags |= SF_DIRECT | SF_ASSIGNED;
                                s->target = &srv->obj_type;
                                break;
                        }
                }
                srv = srv->next;
        }

 no_cookie:
        req->analysers &= ~an_bit;
        req->analyse_exp = TICK_ETERNITY;
        return 1;
}
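For reference, the analyser above expects the cookie payload to be two decimal integers separated and terminated by dots, i.e. "<address>.<port>.". A standalone sketch of that parsing step follows; the literal value is invented for illustration and is not taken from the source:

#include <stdio.h>
#include <stdlib.h>

/* Illustration only: split a "<address>.<port>." token the way the
 * analyser above does, using strtoul() and checking the dots.
 */
int main(void)
{
        const char *cookie = "167880896.15629.";  /* made-up payload */
        char *p;
        unsigned long ip_bits, port_bits;

        ip_bits = strtoul(cookie, &p, 10);
        if (*p != '.')
                return 1;
        p++;
        port_bits = strtoul(p, &p, 10);
        if (*p != '.')
                return 1;

        printf("addr=%lu port=%lu\n", ip_bits, port_bits);
        return 0;
}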
[MEDIUM] stats: report server and backend cumulated downtime
Hello,
This patch implements new statistics for SLA calculation by adding a new
field 'Dwntime' with the total down time since restart (both HTTP/CSV) and
extending the status field (HTTP) or inserting a new one (CSV) with the time
showing how long each server/backend has been in its current state. Additionally,
down transitions are also calculated and displayed for backends, so it is
possible to know how many times the selected backend was down, generating the "No
server is available to handle this request." error.
The new information is presented in two different ways:
- for HTTP: a "human readable form", one of "100000d 23h", "23h 59m" or
"59m 59s"
- for CSV: seconds
I believe that a resolution of seconds is enough.
As there are more columns in the status page I decided to shrink some
names to make more space:
- Weight -> Wght
- Check -> Chk
- Down -> Dwn
While making the described changes I also made some improvements and fixed some
small bugs:
- don't increment s->health above 's->rise + s->fall - 1'. Previously it
was incremented and then (re)set to 's->rise + s->fall - 1'.
- do not set a server down if it is already down
- do not set a server up if it is already up
- fix colspan in multiple places (mostly introduced by my previous patch)
- add the missing "status" header to CSV
- fix the order of retries/redispatches in server (CSV)
- s/Tthen/Then/
- s/server/backend/ in DATA_ST_PX_BE (dumpstats.c)
Changes from the previous version:
- deal with negative time intervals
- don't rely on s->state (SRV_RUNNING)
- slightly reworked human_time + compacted format (no spaces). If needed it
can be used in the future for other purposes by optionally making "cnt"
an argument
- leave set_server_down mostly unchanged
- only slightly reworked "process_chk: 9"
- additional fields in CSV are appended to the right
- fix the "SEC" macro
- named arguments (human_time, be_downtime, srv_downtime)
I hope it is OK. If only cosmetic changes are needed please feel free
to correct them; however, if some bigger changes are required I would
like to discuss them first, or at least to know what exactly was changed,
especially since I already put this patch into my production server. :)
Thank you,
Best regards,
Krzysztof Oledzki
2007-10-22 14:21:10 +00:00
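The "human readable form" mentioned above renders a duration with its two most significant units. A minimal sketch of such a formatter (purely illustrative; this is not the actual human_time() helper referenced in the message):

#include <stdio.h>

/* Illustration only: render a duration in seconds using its two most
 * significant units, e.g. "100000d 23h", "23h 59m" or "59m 59s".
 */
static void fmt_downtime(unsigned long sec, char *buf, size_t len)
{
        if (sec >= 86400)
                snprintf(buf, len, "%lud %luh", sec / 86400, (sec % 86400) / 3600);
        else if (sec >= 3600)
                snprintf(buf, len, "%luh %lum", sec / 3600, (sec % 3600) / 60);
        else
                snprintf(buf, len, "%lum %lus", sec / 60, sec % 60);
}

int main(void)
{
        char buf[32];

        fmt_downtime(86399, buf, sizeof(buf));  /* prints "23h 59m" */
        printf("%s\n", buf);
        return 0;
}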
int be_downtime(struct proxy *px) {
        if (px->lbprm.tot_weight && px->last_change < now.tv_sec)  // ignore negative time
                return px->down_time;

        return now.tv_sec - px->last_change + px->down_time;
}
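As a worked example of the computation above (numbers are invented): with an accumulated px->down_time of 300 seconds and px->last_change set 120 seconds ago, a backend that is currently down (tot_weight == 0) reports 120 + 300 = 420 seconds, while a backend that is up again simply reports the accumulated 300.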

/*
 * This function returns a string containing the balancing
 * mode of the proxy in a format suitable for stats.
 */
const char *backend_lb_algo_str(int algo) {

        if (algo == BE_LB_ALGO_RR)
                return "roundrobin";
        else if (algo == BE_LB_ALGO_SRR)
                return "static-rr";
        else if (algo == BE_LB_ALGO_FAS)
                return "first";
        else if (algo == BE_LB_ALGO_LC)
                return "leastconn";
        else if (algo == BE_LB_ALGO_SH)
                return "source";
        else if (algo == BE_LB_ALGO_UH)
                return "uri";
        else if (algo == BE_LB_ALGO_PH)
                return "url_param";
        else if (algo == BE_LB_ALGO_HH)
                return "hdr";
        else if (algo == BE_LB_ALGO_RCH)
                return "rdp-cookie";
        else if (algo == BE_LB_ALGO_NONE)
                return "none";
        else
                return "unknown";
}
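A sketch of a typical call site for the helper above; the masking with BE_LB_ALGO mirrors how backend_parse_balance() below manipulates the algorithm bits, and the dump_px_algo() wrapper itself is purely hypothetical:

/* Hypothetical caller: print the configured balancing mode of a proxy.
 * Relies on the declarations already included at the top of this file.
 */
static void dump_px_algo(const struct proxy *px)
{
        printf("balance %s\n", backend_lb_algo_str(px->lbprm.algo & BE_LB_ALGO));
}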

/* This function parses a "balance" statement in a backend section describing
 * <curproxy>. It returns -1 if there is any error, otherwise zero. If it
 * returns -1, it will write an error message into the <err> buffer, which is
 * automatically allocated and must initially be passed as NULL. The trailing
 * '\n' will not be written. The function must be called with <args> pointing
 * to the first word after "balance".
 */
int backend_parse_balance(const char **args, char **err, struct proxy *curproxy)
{
        if (!*(args[0])) {
                /* if no option is set, use round-robin by default */
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_RR;
                return 0;
        }

        if (!strcmp(args[0], "roundrobin")) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_RR;
        }
        else if (!strcmp(args[0], "static-rr")) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_SRR;
        }
        else if (!strcmp(args[0], "first")) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_FAS;
        }
        else if (!strcmp(args[0], "leastconn")) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_LC;
        }
        else if (!strcmp(args[0], "source")) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_SH;
        }
        else if (!strcmp(args[0], "uri")) {
                int arg = 1;

                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_UH;

                curproxy->uri_whole = 0;

                while (*args[arg]) {
                        if (!strcmp(args[arg], "len")) {
                                if (!*args[arg+1] || (atoi(args[arg+1]) <= 0)) {
                                        memprintf(err, "%s : '%s' expects a positive integer (got '%s').", args[0], args[arg], args[arg+1]);
                                        return -1;
                                }
                                curproxy->uri_len_limit = atoi(args[arg+1]);
                                arg += 2;
                        }
                        else if (!strcmp(args[arg], "depth")) {
                                if (!*args[arg+1] || (atoi(args[arg+1]) <= 0)) {
                                        memprintf(err, "%s : '%s' expects a positive integer (got '%s').", args[0], args[arg], args[arg+1]);
                                        return -1;
                                }
                                /* hint: we store the position of the ending '/' (depth+1) so
                                 * that we avoid a comparison while computing the hash.
                                 */
                                curproxy->uri_dirs_depth1 = atoi(args[arg+1]) + 1;
                                arg += 2;
                        }
                        else if (!strcmp(args[arg], "whole")) {
                                curproxy->uri_whole = 1;
                                arg += 1;
                        }
                        else {
                                memprintf(err, "%s only accepts parameters 'len', 'depth', and 'whole' (got '%s').", args[0], args[arg]);
                                return -1;
                        }
                }
        }
        else if (!strcmp(args[0], "url_param")) {
                if (!*args[1]) {
                        memprintf(err, "%s requires a URL parameter name.", args[0]);
                        return -1;
                }
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_PH;

                free(curproxy->url_param_name);
                curproxy->url_param_name = strdup(args[1]);
                curproxy->url_param_len = strlen(args[1]);
                if (*args[2]) {
                        if (strcmp(args[2], "check_post")) {
                                memprintf(err, "%s only accepts 'check_post' modifier (got '%s').", args[0], args[2]);
                                return -1;
                        }
                }
        }
        else if (!strncmp(args[0], "hdr(", 4)) {
                const char *beg, *end;

                beg = args[0] + 4;
                end = strchr(beg, ')');

                if (!end || end == beg) {
                        memprintf(err, "hdr requires an http header field name.");
                        return -1;
                }

                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_HH;

                free(curproxy->hh_name);
                curproxy->hh_len = end - beg;
                curproxy->hh_name = my_strndup(beg, end - beg);
                curproxy->hh_match_domain = 0;

                if (*args[1]) {
                        if (strcmp(args[1], "use_domain_only")) {
                                memprintf(err, "%s only accepts 'use_domain_only' modifier (got '%s').", args[0], args[1]);
                                return -1;
                        }
                        curproxy->hh_match_domain = 1;
                }
        }
        else if (!strncmp(args[0], "rdp-cookie", 10)) {
                curproxy->lbprm.algo &= ~BE_LB_ALGO;
                curproxy->lbprm.algo |= BE_LB_ALGO_RCH;

                if ( *(args[0] + 10 ) == '(' ) { /* cookie name */
                        const char *beg, *end;

                        beg = args[0] + 11;
                        end = strchr(beg, ')');

                        if (!end || end == beg) {
                                memprintf(err, "rdp-cookie : missing cookie name.");
                                return -1;
                        }

                        free(curproxy->hh_name);
                        curproxy->hh_name = my_strndup(beg, end - beg);
                        curproxy->hh_len = end - beg;
                }
                else if ( *(args[0] + 10 ) == '\0' ) { /* default cookie name 'mstshash' */
                        free(curproxy->hh_name);
                        curproxy->hh_name = strdup("mstshash");
                        curproxy->hh_len = strlen(curproxy->hh_name);
                }
                else { /* syntax */
                        memprintf(err, "rdp-cookie : missing cookie name.");
                        return -1;
                }
        }
        else {
                memprintf(err, "only supports 'roundrobin', 'static-rr', 'leastconn', 'first', 'source', 'uri', 'url_param', 'hdr(name)' and 'rdp-cookie(name)' options.");
                return -1;
        }
        return 0;
}
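A minimal sketch of the calling convention documented in the header comment above: <err> must start out NULL, is allocated by memprintf() on failure, and is the caller's to free. The apply_balance() wrapper and the args array are invented for illustration; in the real configuration parser the tokens come from the section parser:

/* Hypothetical caller, illustrating the <err> contract only. */
static int apply_balance(struct proxy *curproxy)
{
        const char *args[] = { "uri", "depth", "3", "" };  /* "balance uri depth 3" */
        char *err = NULL;

        if (backend_parse_balance(args, &err, curproxy) < 0) {
                fprintf(stderr, "balance: %s\n", err ? err : "unknown error");
                free(err);
                return -1;
        }
        return 0;
}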

/************************************************************************/
/* All supported sample and ACL keywords must be declared here.         */
/************************************************************************/

/* set temp integer to the number of enabled servers on the proxy.
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_nbsrv(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        struct proxy *px;

        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        px = args->data.prx;

        if (px->srv_act)
                smp->data.u.sint = px->srv_act;
        else if (px->lbprm.fbck)
                smp->data.u.sint = 1;
        else
                smp->data.u.sint = px->srv_bck;

        return 1;
}

/* report in smp->data a success or failure depending on the designated
 * server's state. There is no match function involved since there's no pattern.
 * Accepts exactly 1 argument. Argument is a server, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_srv_is_up(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        struct server *srv = args->data.srv;

        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_BOOL;
        if (!(srv->admin & SRV_ADMF_MAINT) &&
            (!(srv->check.state & CHK_ST_CONFIGURED) || (srv->state != SRV_ST_STOPPED)))
                smp->data.u.sint = 1;
        else
                smp->data.u.sint = 0;
        return 1;
}

/* set temp integer to the number of connection slots still available on the
 * backend, i.e. the sum over all usable servers of (maxconn - cur_sess) and
 * (maxqueue - nbpend).
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_connslots(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        struct server *iterator;

        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = 0;

        for (iterator = args->data.prx->srv; iterator; iterator = iterator->next) {
                if (iterator->state == SRV_ST_STOPPED)
                        continue;

                if (iterator->maxconn == 0 || iterator->maxqueue == 0) {
                        /* configuration is stupid */
                        smp->data.u.sint = -1;  /* FIXME: stupid value! */
                        return 1;
                }

                smp->data.u.sint += (iterator->maxconn - iterator->cur_sess)
                                    + (iterator->maxqueue - iterator->nbpend);
        }

        return 1;
}

/* set temp integer to the id of the backend */
static int
smp_fetch_be_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        if (!smp->strm)
                return 0;

        smp->flags = SMP_F_VOL_TXN;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = smp->strm->be->uuid;
        return 1;
}

/* set string to the name of the backend */
static int
smp_fetch_be_name(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        if (!smp->strm)
                return 0;

        smp->data.u.str.str = (char *)smp->strm->be->id;
        if (!smp->data.u.str.str)
                return 0;

        smp->data.type = SMP_T_STR;
        smp->flags = SMP_F_CONST;
        smp->data.u.str.len = strlen(smp->data.u.str.str);

        return 1;
}

/* set temp integer to the id of the server */
static int
smp_fetch_srv_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        if (!smp->strm)
                return 0;

        if (!objt_server(smp->strm->target))
                return 0;

        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = objt_server(smp->strm->target)->puid;

        return 1;
}

/* set temp integer to the number of connections per second reaching the backend.
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_be_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = read_freq_ctr(&args->data.prx->be_sess_per_sec);
        return 1;
}

/* set temp integer to the number of concurrent connections on the backend.
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_be_conn(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = args->data.prx->beconn;
        return 1;
}

/* set temp integer to the total number of queued connections on the backend.
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_queue_size(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = args->data.prx->totpend;
        return 1;
}

/* set temp integer to the total number of queued connections on the backend divided
 * by the number of running servers and rounded up. If there is no running
 * server, we return twice the total, just as if we had half a running server.
 * This is more or less correct anyway, since we expect the last server to come
 * back soon.
 * Accepts exactly 1 argument. Argument is a backend, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_avg_queue_size(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        int nbsrv;
        struct proxy *px;

        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        px = args->data.prx;

        if (px->srv_act)
                nbsrv = px->srv_act;
        else if (px->lbprm.fbck)
                nbsrv = 1;
        else
                nbsrv = px->srv_bck;

        if (nbsrv > 0)
                smp->data.u.sint = (px->totpend + nbsrv - 1) / nbsrv;
        else
                smp->data.u.sint = px->totpend * 2;

        return 1;
}
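The expression (px->totpend + nbsrv - 1) / nbsrv above is plain integer ceiling division. A tiny standalone illustration with invented numbers:

#include <stdio.h>

int main(void)
{
        int totpend = 10;
        int nbsrv = 4;

        /* 10 queued connections over 4 running servers -> ceil(10/4) = 3 */
        printf("%d\n", (totpend + nbsrv - 1) / nbsrv);

        /* no running server: report twice the total, i.e. 20 */
        printf("%d\n", totpend * 2);
        return 0;
}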

/* set temp integer to the number of concurrent connections on the server in the backend.
 * Accepts exactly 1 argument. Argument is a server, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_srv_conn(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = args->data.srv->cur_sess;
        return 1;
}

/* set temp integer to the number of sessions per second created on the server.
 * Accepts exactly 1 argument. Argument is a server, other types will lead to
 * undefined behaviour.
 */
static int
smp_fetch_srv_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
{
        smp->flags = SMP_F_VOL_TEST;
        smp->data.type = SMP_T_SINT;
        smp->data.u.sint = read_freq_ctr(&args->data.srv->sess_per_sec);
        return 1;
}

/* Note: must not be declared <const> as its list will be overwritten.
 * Please take care of keeping this list alphabetically sorted.
 */
static struct sample_fetch_kw_list smp_kws = {ILH, {
        { "avg_queue",     smp_fetch_avg_queue_size, ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "be_conn",       smp_fetch_be_conn,        ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "be_id",         smp_fetch_be_id,          0,           NULL, SMP_T_SINT, SMP_USE_BKEND, },
        { "be_name",       smp_fetch_be_name,        0,           NULL, SMP_T_STR,  SMP_USE_BKEND, },
        { "be_sess_rate",  smp_fetch_be_sess_rate,   ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "connslots",     smp_fetch_connslots,      ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "nbsrv",         smp_fetch_nbsrv,          ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "queue",         smp_fetch_queue_size,     ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "srv_conn",      smp_fetch_srv_conn,       ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { "srv_id",        smp_fetch_srv_id,         0,           NULL, SMP_T_SINT, SMP_USE_SERVR, },
        { "srv_is_up",     smp_fetch_srv_is_up,      ARG1(1,SRV), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
        { "srv_sess_rate", smp_fetch_srv_sess_rate,  ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
        { /* END */ },
}};

/* Note: must not be declared <const> as its list will be overwritten.
 * Please take care of keeping this list alphabetically sorted.
 */
static struct acl_kw_list acl_kws = {ILH, {
        { /* END */ },
}};

__attribute__((constructor))
static void __backend_init(void)
{
        sample_register_fetches(&smp_kws);
        acl_register_keywords(&acl_kws);
}

/*
 * Local variables:
 *  c-indent-level: 8
 *  c-basic-offset: 8
 * End:
 */