OPTIM: raw-sock: don't speculate after a short read if polling is enabled

This is the reimplementation of the "done" action: when we experience
a short read, we're almost certain that we've exhausted the system's
buffers and that we'll meet an EAGAIN if we attempt to read again. If
the FD is not yet polled, the stream interface already takes care of
stopping the speculative read. When the FD is already being polled, we
have two options:
  - either we're running from a level-triggered poller, in which case
    we'd rather report that we've reached the end so that we don't
    speculate over the poller and let it report next time data are
    available ;

  - or we're running from an edge-triggered poller in which case we
    have no choice and have to see the EAGAIN to re-enable events.

At the moment we don't have any edge-triggered poller, so it's desirable
to avoid speculative I/O that we know will fail.

Note that this must not be ported to SSL since SSL hides the real
readiness of the file descriptor.

Thanks to this change, we observe no EAGAIN anymore during keep-alive
transfers, and failed recvfrom() are reduced by half in http-server-close
mode (the client-facing side is always being polled and the second recv
can be avoided). Doing so results in about 5% performance increase in
keep-alive mode. Similarly, we used to have up to about 1.6% of EAGAIN
on accept() (1/maxaccept), and these have completely disappeared under
high loads.
Willy Tarreau 2014-01-24 00:54:27 +01:00
parent baf5b9b445
commit 6c11bd2f89
3 changed files with 17 additions and 1 deletion


@@ -252,6 +252,17 @@ static inline void fd_may_recv(const int fd)
 		updt_fd(fd);
 }
 
+/* Disable readiness when polled. This is useful to interrupt reading when it
+ * is suspected that the end of data might have been reached (eg: short read).
+ * This can only be done using level-triggered pollers, so if any edge-triggered
+ * is ever implemented, a test will have to be added here.
+ */
+static inline void fd_done_recv(const int fd)
+{
+	if (fd_recv_polled(fd))
+		fd_cant_recv(fd);
+}
+
 /* Report that FD <fd> cannot send anymore without polling (EAGAIN detected). */
 static inline void fd_cant_send(const int fd)
 {


@@ -356,7 +356,7 @@ void listener_accept(int fd)
 			return;
 		default:
 			/* unexpected result, let's give up and let other tasks run */
-			return;
+			goto stop;
 		}
 	}
@@ -414,6 +414,8 @@ void listener_accept(int fd)
 	} /* end of while (max_accept--) */
 	/* we've exhausted max_accept, so there is no need to poll again */
+ stop:
+	fd_done_recv(fd);
 	return;
 }


@@ -176,6 +176,7 @@ int raw_sock_to_pipe(struct connection *conn, struct pipe *pipe, unsigned int co
 			 * being asked to poll.
 			 */
 			conn->flags |= CO_FL_WAIT_ROOM;
+			fd_done_recv(conn->t.sock.fd);
 			break;
 		}
 	} /* while */
@@ -299,6 +300,8 @@ static int raw_sock_to_buf(struct connection *conn, struct buffer *buf, int coun
 			 */
 			if (fdtab[conn->t.sock.fd].ev & FD_POLL_HUP)
 				goto read0;
+			fd_done_recv(conn->t.sock.fd);
 			break;
 		}
 		count -= ret;