Commit Graph

76 Commits

Willy Tarreau
4cae3bf631 BUG/MEDIUM: connection: don't keep more idle connections than ever needed
When using "http-reuse safe", which is the default, a new incoming connection
does not automatically reuse an existing connection for the first request, as
we don't want to risk to lose the contents if we know the client will not be
able to replay the request. A side effect to this is that when dealing with
mostly http-close traffic, the reuse rate is extremely low and we keep
accumulating server-side connections that may even never be reused. At some
point we're limited to a ratio of file descriptors, but when the system is
configured with very high FD limits, we can still reach the limit of outgoing
source ports and make the system significantly slow down trying to find an
available port for outgoing connections. A simple test on my laptop with
ulimit 100000 and with the following config results in the load immediately
dropping after a few seconds :

   listen l1
        bind :4445
        mode http
        server s1 127.0.0.1:8000

As can be seen, the load falls from 38k cps to 400 cps during the first 200ms
(in fact when the source port table is full and connect() takes ages to find
a spare port for a new connection):

   $ injectl464 -p 4 -o 1 -u 10 -G 127.0.0.1:4445/ -F -c -w 100
   hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout htime  sdht ptime
   2439  2439  39338 39338    356094  5743  5743     0     0 0.4 0.5 0.4
   7637  5198  38185 37666   1115002  5575  5499     0     0 0.7 0.5 0.7
   7719    82  25730   820   1127002  3756   120     0     0 21.8 18.8 21.8
   7797    78  19492   780   1138446  2846   114     0     0 61.4 2.5 61.4
   7877    80  15754   800   1150182  2300   117     0     0 58.6 0.5 58.6
   7920    43  13200   430   1156488  1927    63     0     0 58.9 0.3 58.9

At this point, lots of connections are indeed in use, for only 10 connections
on the frontend side:

   $ ss -ant state established | wc -l
   39022

This patch makes sure we never keep more idle connections than we've ever
had outstanding requests on a server. This way the total number of idle
connections will never exceed the sum of maximum connections. Thus highly
loaded servers will be able to get many connections and slightly loaded
servers will keep fewer. Ideally we should apply similar limits per process
and per backend, but in practice this already addresses the issues
pretty well:

   $ injectl464 -p 4 -o 1 -u 10 -G 127.0.0.1:4445/ -F -c -w 100
   hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout htime  sdht ptime
   4423  4423  40209 40209    645758  5870  5870     0     0 0.2 0.4 0.2
   8020  3597  40100 39966   1170920  5854  5835     0     0 0.2 0.4 0.2
  12037  4017  40123 40170   1757402  5858  5864     0     0 0.2 0.4 0.2
  16069  4032  40172 40320   2346074  5865  5886     0     0 0.2 0.4 0.2
  20047  3978  40013 39386   2926862  5842  5750     0     0 0.3 0.4 0.3
  24005  3958  40008 39979   3504730  5841  5837     0     0 0.2 0.4 0.2

   $ ss -ant state established | wc -l
   234

This patch must be backported to 2.0. It could be useful in 1.9 as well
even though pools and reuse are not enabled by default there.
2019-09-08 09:30:50 +02:00
Christopher Faulet
34ce7d075a BUG/MINOR: server: Be really able to keep "pool-max-conn" idle connections
The maximum number of idle connections for a server can be configured by setting
the server option "pool-max-conn". But when we try to add a connection in its
idle list, because of a wrong comparison, it may be rejected because there are
already "pool-max-conn - 1" idle connections.

This patch must be backported to 2.0 and 1.9.
2019-07-10 14:20:52 +02:00
Olivier Houchard
88698d966d MEDIUM: connections: Add a way to control the number of idling connections.
As by default we add all keepalive connections to the idle pool, if we run
into a pathological case where no client does keepalive but the server does,
and haproxy is configured to only reuse "safe" connections, we will soon find
ourselves with lots of idle connections that are unusable for new sessions,
while no file descriptors are left to create new connections.

To fix this, add 2 new global settings, "pool-low-fd-ratio" and
"pool-high-fd-ratio". pool-low-fd-ratio is the percentage of FDs we're allowed
to use (against the maximum number of FDs available to haproxy) before we stop
adding connections to the idle pool and destroy them instead. The default is
20. pool-high-fd-ratio is the percentage of FDs we're allowed to use (against
the maximum number of FDs available to haproxy) before we start killing idle
connections in the event we have to create a new outgoing connection and no
reuse is possible. The default is 25.
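
A minimal global-section sketch, assuming the keywords are used exactly as
named above (the spelling that finally ships may differ, e.g. carry a tune.
prefix); the maxconn value is illustrative:

   global
        maxconn 100000
        pool-low-fd-ratio  20   # stop adding idle connections above 20% of usable FDs
        pool-high-fd-ratio 25   # start killing idle connections above 25% of usable FDs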
2019-04-18 19:52:03 +02:00
Willy Tarreau
0e492e2ad0 BUILD: address a few cases of "static <type> inline foo()"
Older compilers don't like to see "inline" placed after the type in a
function declaration; it must be "static inline <type>" only. This
patch touches various areas. The warnings were seen with gcc-3.4.
2019-04-15 21:55:48 +02:00
Olivier Houchard
aa4d71a7fe MEDIUM: server: Use the new _HA_ATOMIC_* macros.
Use the new _HA_ATOMIC_* macros and add barriers where needed.
2019-03-11 17:02:37 +01:00
Olivier Houchard
9ea5d361ae MEDIUM: servers: Reorganize the way idle connections are cleaned.
Instead of having one task per thread and per server that cleans the
idle connections, have only one global task for all servers.
That task parses all the servers that currently have idle connections,
and removes half of them, putting them in a per-thread list of connections
to kill. For each thread that has connections to kill, wake a task
to do so, so that the cleaning is done in the context of that thread.
2019-02-26 18:17:32 +01:00
Olivier Houchard
7f1bc31fee MEDIUM: servers: Used a locked list for idle_orphan_conns.
Use the locked macros when manipulating idle_orphan_conns, so that other
threads can remove elements from it.
It will be useful later to avoid having a task per server and per thread to
clean up the orphan list.
2019-02-26 18:17:32 +01:00
Olivier Houchard
f131481a0a BUG/MEDIUM: servers: Add a per-thread counter of idle connections.
Add a per-thread counter of idle connections, and use it to determine
how many connections we should kill after the timeout, instead of using
the global counter, otherwise we're likely to kill most of the connections.

This should be backported to 1.9.
2019-02-21 19:07:45 +01:00
Olivier Houchard
e737103173 BUG/MEDIUM: servers: Use atomic operations when handling curr_idle_conns.
Use atomic operations when dealing with srv->curr_idle_conns, as it's shared
between threads, otherwise we could get inconsistencies.

This should be backported to 1.9.
2019-02-21 19:07:19 +01:00
Willy Tarreau
980855bd95 BUG/MEDIUM: server: initialize the orphaned conns lists and tasks at the end
This also depends on the nbthread count, so it must only be performed after
parsing the whole config file. As a side effect, this removes some code
duplication between servers and server-templates.

This must be backported to 1.9.
2019-02-07 15:08:13 +01:00
Willy Tarreau
00f18a36b6 BUG/MINOR: server: fix logic flaw in idle connection list management
With variable connection limits, it's not possible to accurately determine
whether the mux is still in use by checking that usage equals the maximum,
because one reflects the capacity while the other depends on the context.
This can cause some connections to be dropped before they reach their
stream ID limit.

It seems it could also cause some connections to be terminated with
streams still alive if the limit was reduced to match the newly computed
avail_streams() value, though this cannot yet happen with existing muxes.

Instead let's switch to usage reports and simply check whether connections
are both unused and available before adding them to the idle list.

This should be backported to 1.9.
2019-01-31 19:38:25 +01:00
Frédéric Lécaille
355b2033ec MINOR: cfgparse: SSL/TLS binding in "peers" sections.
Make "bind" keywork be supported in "peers" sections.
All "bind" settings are supported on this line.
Add "default-bind" option to parse the binding options excepted the bind address.
Do not parse anymore the bind address for local peers on "server" lines.
Do not use anymore list_for_each_entry() to set the "peers" section
listener parameters because there is only one listener by "peers" section.
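
A minimal sketch of the intended layout; the peer names, addresses and
certificate path are illustrative, and haproxy1 is assumed to be the local
peer:

   peers mypeers
        default-bind ssl crt /etc/haproxy/peers.pem
        bind 10.0.0.1:1024
        server haproxy1
        server haproxy2 10.0.0.2:1024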

May be backported to 1.5 and newer.
2019-01-18 14:26:21 +01:00
Olivier Houchard
a4d4fdfaa3 MEDIUM: sessions: Don't keep an infinite number of idling connections.
In sessions, don't keep an unbounded number of idle connections.
Add a new frontend parameter, "max-session-srv-conns", to set a maximum,
with a default value of 5.
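
A minimal frontend sketch, assuming the parameter is used as named above
(names and ports are illustrative):

   frontend fe_main
        bind :8080
        max-session-srv-conns 3
        default_backend bk_app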
2018-12-15 23:50:10 +01:00
William Lallemand
313bfd18c1 MINOR: server: export new_server() function
The new_server() function will be useful to create a proxy for the
master-worker.
2018-10-28 13:51:38 +01:00
Willy Tarreau
91c2826e1d CLEANUP: server: remove the update list and the update lock
These are no longer used, let's get rid of them.
2018-08-08 09:57:45 +02:00
Willy Tarreau
3ff577e165 MAJOR: server: make server state changes synchronous again
Now we try to synchronously push updates as they come using the new rdv
point, so that the call to the server update function from the main poll
loop is not needed anymore.

It further reduces the apparent latency in the health checks as the response
time almost always appears as 0 ms, resulting in a slightly higher check rate
of ~1960 conn/s. Despite this, the CPU consumption has slightly dropped again
to ~32% for the same test.

The only trick is that the checks code is built with a bit of recursion
because srv_update_status() calls server_recalc_eweight(), and the latter
needs to signal srv_update_status() in case of updates. Thus we added an
extra argument to this function to indicate whether or not it must
propagate updates (no if it comes from srv_update_status).
2018-08-08 09:57:45 +02:00
Willy Tarreau
83061a820e MAJOR: chunks: replace struct chunk with struct buffer
Now all the code used to manipulate chunks uses a struct buffer instead.
The functions are still called "chunk*", and some of them will progressively
move to the generic buffer handling code as they are cleaned up.
2018-07-19 16:23:43 +02:00
Olivier Houchard
d16bfe6c01 BUG/MINOR: dns: Fix SRV records with the new thread code.
srv_set_fqdn() may be called with the DNS lock already held, but tries to
lock it anyway. So, add a new parameter to let it know if it was already
locked or not.
2017-10-31 15:47:55 +01:00
Christopher Faulet
29f77e846b MEDIUM: threads/server: Add a lock per server and atomically update server vars
The server's lock is used, among other things, to lock access to the active
connection list of a server.
2017-10-31 13:58:31 +01:00
Christopher Faulet
67957bd59e MAJOR: dns: Refactor the DNS code
This is a huge patch with many changes, all about the DNS. Initially, the idea
was to update the DNS part to ease the threads support integration. But quickly,
I started to refactor some parts. And after several iterations, it was
impossible for me to commit the different parts atomically. So, instead of
adding tens of patches, often reworking the same parts, it was easier to merge
all my changes into a single patch. Here are all the changes made to the DNS.

First, the DNS initialization has been refactored. The DNS configuration parsing
remains untouched, in cfgparse.c. But all checks have been moved to a post-check
callback. In the function dns_finalize_config, for each resolvers section, the
nameservers configuration is tested and the task used to manage DNS resolutions
is created. The links between the backend's servers and the resolvers are also
created at this step. Here, no connections are kept alive, so there is no longer
any need to reopen them after the HAProxy fork. Connections used to send DNS
queries will be opened on demand.

Then, the way DNS requesters are linked to a DNS resolution has been
reworked. The resolution used by a requester is now referenced in the
dns_requester structure and the resolution pointers in the server and dns_srvrq
structures have been removed. The wait and curr lists of requesters for a DNS
resolution have been replaced by a single list. And finally, the way a requester
is removed from a DNS resolution has been simplified. Now everything is done in
dns_unlink_resolution.

The srv_set_fqdn function has been simplified. Now, there is only 1 way to set
the server's FQDN, whether it is done by the CLI or when an SRV record is
resolved.

The static DNS resolutions pool has been replaced by a dynamic pool. This part
was modified by Baptiste Assmann.

The way the DNS resolutions are triggered by the task or by a health-check has
been totally refactored. Now, all timeouts are respected, especially
hold.valid. The default frequency at which a resolvers section wakes up is now
configurable using the "timeout resolve" parameter.

Now, as documented, as long as invalid responses are received, we really wait
for all name servers' responses before retrying.

As far as possible, resources allocated during DNS configuration parsing are
released when HAProxy is shut down.

Beside all these changes, the code has been cleaned to ease code review and the
doc has been updated.
2017-10-31 11:36:12 +01:00
Emeric Brun
5a1335110c BUG/MEDIUM: log: check result details truncated.
Fix regression introduced by commit:
'MAJOR: servers: propagate server status changes asynchronously.'

The building of the log line was reworked to be done at the
postponed point without missing data.

[wt: this only affects 1.8-dev, no backport needed]
2017-10-19 18:51:32 +02:00
Emeric Brun
64cc49cf7e MAJOR: servers: propagate server status changes asynchronously.
In order to prepare multi-thread development, code was reworked
to propagate changes asynchronously.

Servers with pending status changes are registered in a list
and this list is processed and emptied once per 'run poll' loop.

Operational status changes are performed before administrative
status changes.

In case of multiple operational or administrative status changes
in the same 'run poll' loop iteration, those changes are
merged to reach only the final targeted status.
2017-10-13 12:00:27 +02:00
Emeric Brun
52a91d3d48 MEDIUM: check: server states and weight propagation re-work
The server state and weight were reworked to handle
"pending" values updated by checks/CLI/LUA/agent.
These values are committed in order to be propagated to the
LB stack.

In further development related to multi-threading, the commit
will be handled at a sync point.

Pending values are named using the prefix 'next_'.
Current values used by the LB stack are named 'cur_'.
2017-09-05 15:23:16 +02:00
Baptiste Assmann
747359eeca BUG/MINOR: dns: server set by SRV records stay in "no resolution" status
This patch fixes a bug where some servers managed by SRV record query
types never ever recover from a "no resolution" status.
The problem is due to a wrong function being called when breaking the
server/resolution (A/AAAA) relationship: this is performed when a server's SRV
record disappears from the SRV response.
2017-08-22 11:34:49 +02:00
Olivier Houchard
8da5f98fbe MINOR: dns: Handle SRV records.
Make it so that for each server, instead of specifying a hostname, one can use
an SRV label.
When doing so, haproxy will first resolve the SRV label, then apply the
resulting hostnames, as well as port and weight (priority is ignored right
now), to each server using the SRV label.
The label is resolved periodically, and any server disappearing from the SRV
records will be removed, and any server appearing will be added, assuming there
are free servers in haproxy.
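
A minimal sketch of the intended usage, assuming a resolvers section named
mydns is already defined (the SRV label is illustrative):

   backend bk_app
        server app1 _http._tcp.example.com resolvers mydns check
        server app2 _http._tcp.example.com resolvers mydns check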
2017-08-09 16:32:49 +02:00
Olivier Houchard
a8c6db8d2d MINOR: dns: Cache previous DNS answers.
As DNS servers may not return all IPs in one answer, we want to cache the
previous entries. Those entries are removed when considered obsolete, which
happens when the IP hasn't been returned by the DNS server for a time
defined in the "hold obsolete" parameter of the resolver section. The default
is 30s.
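
A minimal sketch showing where the parameter lives (the nameserver address is
illustrative):

   resolvers mydns
        nameserver ns1 192.168.0.53:53
        hold obsolete 30s      # drop a cached IP 30s after it stops being returned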
2017-08-09 16:32:49 +02:00
Baptiste Assmann
201c07f681 MAJOR/REORG: dns: DNS resolution task and requester queues
This patch is a major upgrade of the internal run-time DNS resolver in
HAProxy and it brings the following main changes:

1. DNS resolution task

Up to now, DNS resolution was triggered by the health check task.
From now on, the DNS resolution task is autonomous. It is started by HAProxy
right after the scheduler is available and it is woken either when a
network IO occurs for one of its nameservers or when a timeout is
reached.

From now on, this means we can enable DNS resolution for a server without
enabling health checking.

2. Introduction of a dns_requester structure

Up to now, DNS resolution was purposely made for resolving server
hostnames.
The idea is to ensure that any HAProxy internal object is able
to trigger a DNS resolution. For this purpose, 2 things had to be done:
  - clean up the DNS code from the server structure (this was already
    quite clean actually) and clean up the server's callbacks from
    manipulating too much DNS resolution
  - create an agnostic structure which allows linking a DNS resolution
    and a requester of any type (using obj_type enum)

3. Manage requesters through queues

Up to now, there was a one-to-one relationship between a resolution and its
owner (now called the requester). That's a shame, because in some cases,
multiple objects may share the same hostname and may benefit from a
resolution being performed by a third party.
This patch introduces the notion of queues, which are basically lists of
either currently running resolution or waiting ones.

The resolutions are now available as a pool, which belongs to the resolvers.
The pool has a default size of 64 resolutions per resolvers section and is
allocated at configuration parsing time.
2017-06-02 11:58:54 +02:00
Baptiste Assmann
729c901c3f MAJOR: dns: save a copy of the DNS response in struct resolution
Prior to this patch, the DNS responses were stored in a pre-allocated
memory area (allocated at HAProxy's startup).
The problem is that this memory is erased for each new DNS response
received and processed.

This patch removes the global memory allocation (which was not thread
safe by the way) and introduces storage of the DNS response in the
struct resolution.
The memory in the struct resolution is also reserved at startup and is
thread safe, since each resolution structure will have its own memory
area.

For now, we simply store the response and use it atomically per
response per server.
2017-06-02 11:30:21 +02:00
Baptiste Assmann
fb7091e213 MINOR: dns: new snr_check_ip_callback function
In the process of breaking links between dns_* functions and other
structures (mainly server and a bit of resolution), the function
dns_get_ip_from_response needs to be reworked: it can now call
"callback" functions based on the resolution's owner type to allow modifying
the way the response is processed.

For now, the main purpose of the callback function is to check that an IP
address is not already assigned to an element of the same type.

For now, only the server type has a callback.
2017-06-02 11:28:14 +02:00
Olivier Houchard
4e694049fa MINOR: server: Add dynamic session cookies.
This adds a new "dynamic" keyword for the cookie option. If set, a cookie
will be generated for each server (assuming one isn't already provided on
the "server" line), from the IP of the server, the TCP port, and a secret
key provided. To provide the secret key, a new keyword has been added,
"dynamic-cookie-key", for backends.

Example :
backend bk_web
  balance roundrobin
  dynamic-cookie-key "bla"
  cookie WEBSRV insert dynamic
  server s1 127.0.0.1:80 check
  server s2 192.168.56.1:80 check

This is a first step to be able to dynamically add and remove servers,
without modifying the configuration file, and still have all the load
balancers redirect the traffic to the right server.

Provide a way to generate session cookies, based on the IP address of the
server, the TCP port, and a secret key provided.
2017-03-15 11:37:30 +01:00
Willy Tarreau
df4399fcb6 BUILD: server: remove a build warning introduced by latest series
We get this when Lua is disabled, just a missing include.

In file included from src/queue.c:18:0:
include/proto/server.h:51:39: warning: 'struct appctx' declared inside parameter list [enabled by default]
2016-11-24 17:32:01 +01:00
Willy Tarreau
21b069dca8 MINOR: server: create new function cli_find_server() to find a server
Several CLI commands require a server, so let's have a function to
look this one up and prepare the appropriate error message and the
appctx's state in case of failure.
2016-11-24 16:59:27 +01:00
Willy Tarreau
25e515235a MEDIUM: server: make use of init-addr
It is now supported. If not set, we default to the legacy methods list
which is "last,libc".
2016-11-09 15:30:47 +01:00
Baptiste Assmann
25938278b7 MEDIUM: server: add a new init-addr server line setting
This new setting supports a comma-delimited list of methods used to
resolve the server's FQDN to an IP address. Currently supported methods
are "libc" (use the regular libc's resolver) and "last" (use the last
known valid address found in the state file).
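
A minimal sketch of the intended usage (the FQDN and server name are
illustrative):

   backend bk_app
        server s1 www.example.com:80 init-addr last,libc check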

The list is implemented in a 32-bit integer, because each init-addr
method only requires 3 bits. The last one must always be SRV_IADDR_END
(0), allowing to store up to 10 methods in a single 32 bit integer.

Note: the doc is provided at the end of this series.
2016-11-09 15:30:47 +01:00
Willy Tarreau
8b42848a44 MINOR: server: make srv_set_admin_state() capable of telling why this happens
To help debug some DNS resolution issues, it will be important to
know why a server was marked down, so let's make the function support
a 3rd argument with an indication of the reason. Passing NULL will keep
the message as-is.
2016-11-09 15:30:47 +01:00
Baptiste Assmann
83cbaa531f MAJOR: server: postpone address resolution
Server addresses are not resolved anymore upon the first pass so that we
don't fail if an address cannot be resolved by the libc. Instead they are
processed all at once after the configuration is fully loaded, by the new
function srv_init_addr(). This function only acts on the server's address
if this address uses an FQDN, which appears in server->hostname.

For now the function does two things, to follow HAProxy's historical
default behavior:

  1. apply server IP address found in server-state file if runtime DNS
     resolution is enabled for this server

  2. use the DNS resolver provided by the libc

If none of the 2 options above can find an IP address, then an error is
returned.

All of this will be needed to support the new server parameter "init-addr".
For now, the biggest user-visible change is that all server resolution errors
are dumped at once instead of causing a startup failure one by one.
2016-11-09 14:24:20 +01:00
Willy Tarreau
757478e900 BUG/MEDIUM: servers: properly propagate the maintenance states during startup
Right now there is an issue with the way the maintenance flags are
propagated upon startup. They are not propagated, just copied from the
tracked server. This implies that depending on the servers' order, some
tracking servers may not be marked down. For example this configuration
does not work as expected:

        server s1 1.1.1.1:8000 track s2
        server s2 1.1.1.1:8000 track s3
        server s3 1.1.1.1:8000 track s4
        server s4 wtap:8000 check inter 1s disabled

It results in s1/s2 being up, and s3/s4 being down, while all of them
should be down.

The only clean way to process this is to run through all "root" servers
(those not tracking any other server), and to propagate their state down
to all their trackers. This is the same algorithm used to propagate the
state changes. It has to be done both to compute the IDRAIN flag and the
IMAINT flag. However, doing so requires that tracking servers are not
marked as inherited maintenance anymore while parsing the configuration
(and given that it is wrong, better drop it).

This fix also addresses another side effect of the bug above which is
that the IDRAIN/IMAINT flags are stored in the state files, and if
restored while the tracked server doesn't have the equivalent flag,
the servers may end up in a situation where it's impossible to remove
these flags. For example in the configuration above, after removing
"disabled" on server s4, the other servers would have remained down,
and not anymore with this fix. Similarly, the combination of IMAINT
or IDRAIN with their respective forced modes was not accepted on
reload, which is wrong as well.

This bug has been present at least since 1.5, maybe even 1.4 (it came
with tracking support). The fix needs to be backported there, though
the srv-state parts are irrelevant.

This commit relies on previous patch to silence warnings on startup.
2016-11-07 14:31:52 +01:00
Baptiste Assmann
c1ce5f358e MEDIUM: dns: new DNS response parser
New DNS response parser function which turns the DNS response from a
network buffer into a DNS structure, much easier for later analysis
by upper layers.

Memory is pre-allocated at start-up in a chunk dedicated to DNS
response store.

New error code to report a wrong number of queries in a DNS response.
2016-09-12 19:54:23 +02:00
Baptiste Assmann
d458adcc52 MINOR: new update_server_addr_port() function to change both server's ADDR and service PORT
This function can replace update_server_addr() when the server's port needs
to be changed as well as its IP address.
It performs some validation before applying each type of change.
2016-09-11 08:13:11 +02:00
Nenad Merdanovic
174dd37d88 MINOR: Add ability for agent-check to set server maxconn
This is very useful in complex architectures where HAProxy
is balancing DB connections, for example. We want to keep the maxconn
high in order to avoid queueing issues at the LB level when
there is slowness on another part of the system. An example is an
architecture where each thread opens multiple DB connections; if these
get stuck in a queue, they cause a snowball effect (old connections aren't
closed, new ones cannot be established). These connections are mostly
idle and the DB server has no problem handling thousands of them.

Being able to dynamically set maxconn depending on the backend usage
(load average, CPU, memory, etc.) enables us to keep maxconn high for
situations like the above, while lowering it when there are real issues and
the backend servers become overloaded (cache issues, DB gets hit hard).
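
A minimal sketch of an agent-check setup that could take advantage of this;
the address, ports and values are illustrative, and the agent reply is assumed
to carry the new limit as a "maxconn:" item:

   backend bk_db
        balance leastconn
        server db1 192.168.0.21:5432 maxconn 500 check agent-check agent-port 9999
        # the agent may then reply e.g. "maxconn:30" to lower the limit at run time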
2016-04-25 17:23:50 +02:00
Thierry Fournier
09a9178311 MINOR: server: generalize the "updater" source
The function server_parse_addr_change_request() contains a hardcoded
updater source, "stats command". This function can be called from sources
other than the "stats command", so this patch makes this argument
generic.
2016-02-24 23:37:39 +01:00
Thierry Fournier
d35b7a6d93 CLEANUP: server: add "const" to some message strings
"updater" is used in "read only" mode, so I add a const qualifier
to the variable declaration.
2016-02-24 23:37:39 +01:00
Thierry Fournier
9f72555b65 BUG/MINOR: server: some prototypes are renamed
The commit 87b096 renames the functions srv_shutdown_backup_sessions()
and srv_shutdown_sessions() to srv_shutdown_backup_streams() and
srv_shutdown_streams().

The header file <proto/servers.h> does not reflect these changes.

This fix should be backported to the 1.6 branch, even if it is mostly useless
there because new development is frozen.
2016-02-23 22:42:47 +01:00
Baptiste Assmann
e11cfcd2c9 MINOR: config: new backend directives: load-server-state-from-file and server-state-file-name
This directive gives HAProxy the ability to use either the global
server-state-file directive or a local one, using server-state-file-name, to
load server states.
The state can be saved right before the reload by the init script, using
the "show servers state" command on the stats socket and redirecting the output
into a file.
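
A minimal sketch combining the global and backend sides (the file path and
names are illustrative):

   global
        server-state-file /var/lib/haproxy/server-state

   backend bk_app
        load-server-state-from-file global
        # "local" together with "server-state-file-name <file>" would use a
        # per-backend file instead
        server s1 192.168.0.10:80 check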
2015-09-19 17:05:28 +02:00
Baptiste Assmann
19a106d24a MINOR: server: server_find functions: id, name, best_match
This patch introduces three new functions which can be used to find a
server in a farm using different server information:
- server unique id (srv->puid)
- server name
- find best match using either name or unique id

When performing best matching, the following applies:
 - use the server name first (if provided)
 - use the server id if provided
 In any case, the function can inform the caller about mismatches
 encountered.
2015-07-21 23:24:16 +02:00
Baptiste Assmann
a68ca96375 MAJOR: server: add DNS-based server name resolution
Relies on the DNS protocol freshly implemented in HAProxy.
It performs server IP address resolution based on the server's hostname.
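
A minimal sketch of the intended usage (the addresses and names are
illustrative):

   resolvers mydns
        nameserver ns1 192.168.0.53:53

   backend bk_app
        server app1 app.example.com:80 resolvers mydns check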
2015-06-13 22:07:35 +02:00
Baptiste Assmann
3d8f831f13 MEDIUM: server: change server ip address from stats socket
New command available on the stats socket to change a server's address using
the command "set server <backend>/<server> addr <ip4|ip6>"
2015-06-13 22:07:35 +02:00
Baptiste Assmann
14e4014a48 MEDIUM: server: add support for changing a server's address
Ability to change a server IP address during HAProxy run time.
For now this is provided via the function update_server_addr(), which
is currently not called.

A log is emitted on each change. For now we do it unconditionally,
but later we'll want to do it only under certain circumstances, which
explains why the logging block is enclosed in if(1).
2015-06-13 22:07:35 +02:00
Willy Tarreau
e7dff02dd4 REORG/MEDIUM: stream: rename stream flags from SN_* to SF_*
This is in order to keep things consistent.
2015-04-06 11:23:57 +02:00
Godbach
e468d55998 BUG/MINOR: server: move the directive #endif to the end of file
If a source file includes proto/server.h twice or more, redefinition errors will
be triggered for such inline functions as server_throttle_rate(),
server_is_draining(), srv_adm_set_maint() and so on. Just move the #endif
directive to the end of the file to solve this issue.

Signed-off-by: Godbach <nylzhaowei@gmail.com>
2014-07-29 11:03:14 +02:00