were sent in perfect synchronisation because the initial time was
the same for all. This could induce high load peaks when fragile
servers were hosting tens of instances for the same application.
Now the load is spread evenly across the smallest check interval
found among a listener's servers.
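As a rough illustration of the idea (not the actual code; the
structure and names below are made up), spreading the first check of
each server across the interval could look like this:

    #include <stddef.h>

    struct server {
        int check_interval_ms;    /* configured check interval */
        int first_check_delay_ms; /* computed initial offset */
    };

    static void spread_initial_checks(struct server *srv,
                                      size_t srv_count, int interval_ms)
    {
        for (size_t i = 0; i < srv_count; i++) {
            /* offset the i-th server by i/srv_count of the interval so
             * that no two servers start their checks at the same time */
            srv[i].first_check_delay_ms =
                (int)(interval_ms * i / srv_count);
        }
    }
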
to 'close', but does not actually close any connection. The problem
is, there are some servers which don't close the connection even if
the proxy tells them 'Connection: close'. A workaround was added in
the form of a new option 'forceclose' (which implies 'httpclose'),
which makes the proxy close the outgoing channel to the server
once it has sent all its headers. Just don't use this with the
'CONNECT' method of course !
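For reference, the workaround is enabled from the configuration; a
minimal example (the listener name and addresses are made up):

    listen webfarm 0.0.0.0:8080
        mode http
        option forceclose    # implies 'option httpclose'
        server srv1 192.168.1.10:80
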
were erroneously load-balanced while the doc said the opposite.
Since load-balancing across backup servers is one of the features some
people have been asking for, the problem was fixed to reflect the
documented behaviour and a new option 'allbackups' was introduced
to provide the feature to those who need it.
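As an illustration, the new behaviour can be requested like this
(server names and addresses are made up):

    listen app 0.0.0.0:8080
        mode http
        balance roundrobin
        option allbackups    # load-balance across all backup servers
        server s1 192.168.1.10:80 check
        server b1 192.168.1.20:80 check backup
        server b2 192.168.1.21:80 check backup
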
its timeout times the number of retransmits exceeded the server
read or write timeout, because the latter was used to compute
select()'s timeout while the connection timeout was not reached.
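A rough sketch of the intended logic (hypothetical names, not the
actual source): while the connection is still being established,
select()'s timeout should be derived from the connect timeout rather
than from the server read/write timeout:

    /* illustrative only: pick the timeout passed to select()
     * depending on the connection state */
    static int compute_select_timeout_ms(int connecting,
                                         int connect_timeout_ms,
                                         int rw_timeout_ms)
    {
        /* while the connection is not yet established, the connect
         * timeout must drive select(); afterwards the read/write
         * timeout applies */
        return connecting ? connect_timeout_ms : rw_timeout_ms;
    }
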
could trigger both a read and a write call, sometimes causing headers
to be sent directly from srv to cli without modification, and making
later modifications crash the process through memory corruption,
because rep.data+rep.l < rep.h, which makes the memmove() length
argument negative. This was only observed with epoll() and never with
poll(), though the latter should have been affected too. Now, only the
functions which have been allowed to run are called.
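The failure mode (not the actual fix) can be pictured with a small
sketch; the buffer structure below is hypothetical, only the field
names mirror the ones mentioned above:

    #include <string.h>

    struct rsp_buf {
        char   *data;  /* start of the buffer */
        size_t  l;     /* number of bytes currently in the buffer */
        char   *h;     /* current header position inside the buffer */
    };

    static int shift_headers(struct rsp_buf *rep, size_t shrink)
    {
        size_t remaining;

        /* guard against the inconsistent state described above,
         * where h points past the end of the valid data: the size
         * (data + l) - h would underflow to a huge value and
         * memmove() would stomp over memory */
        if (rep->h > rep->data + rep->l)
            return -1;

        remaining = (size_t)(rep->data + rep->l - rep->h);
        if (shrink > remaining)
            return -1;

        /* move the bytes following the removed region back over it */
        memmove(rep->h, rep->h + shrink, remaining - shrink);
        rep->l -= shrink;
        return 0;
    }
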
because event_srv_chk_r() is called before _w() and flushes the socket
error. The result is that the server remains UP. The problem only
affects pure TCP health-checks when select() is disabled. You may
encounter this on SSL or SMTP proxies.
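A minimal sketch of the kind of check involved (not the actual
functions): reading the pending socket error explicitly with
getsockopt() keeps the failure visible no matter which event handler
runs first:

    #include <sys/socket.h>

    static int check_socket_error(int fd)
    {
        int err = 0;
        socklen_t len = sizeof(err);

        /* fetch (and clear) any pending error on the socket */
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
            return -1;   /* getsockopt() itself failed */
        return err;      /* 0 when no error is pending */
    }
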
an error if the connection was refused before the timeout. So the
client was sent to the server anyway and then got its connection broken
because of the write error. This is not a real problem with persistence,
but it definitely is for new clients. This stupid bug must have been
present for years !
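As a sketch of the general technique (not the actual code): after a
non-blocking connect(), a refusal can be told apart from a timeout by
polling for writability and then reading SO_ERROR:

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    static int wait_connect_result(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int err = 0;
        socklen_t len = sizeof(err);

        switch (poll(&pfd, 1, timeout_ms)) {
        case -1:
            return -1;            /* poll() itself failed */
        case 0:
            return ETIMEDOUT;     /* the connect timeout expired */
        default:
            /* the socket became writable: read the connect() result */
            if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
                return -1;
            return err;           /* e.g. ECONNREFUSED, or 0 on success */
        }
    }
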