Commit Graph

7343 Commits

Author SHA1 Message Date
Willy Tarreau
421f02e738 MINOR: threads: add a MAX_THREADS define instead of LONGBITS
This one avoids inflating some structures when threads are
disabled. Now struct global is 1.4 kB instead of 33 kB.
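
A minimal sketch of the principle (the structure and field are illustrative,
not the actual haproxy code):

  #define LONGBITS (8 * sizeof(long))    /* bits per long, e.g. 64 */

  #ifdef USE_THREAD
  #define MAX_THREADS LONGBITS
  #else
  #define MAX_THREADS 1
  #endif

  struct example_global {
          /* sized to 1 entry instead of LONGBITS when threads are off */
          unsigned long per_thread_data[MAX_THREADS];
  };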

Should be backported to 1.8 for ease of backporting of upcoming
patches.
2018-01-23 15:28:20 +01:00
Willy Tarreau
f4571a027f MINOR: global/threads: move cpu_map at the end of the global struct
The "thread" part is 32kB long, better move it at the end of the
structure since it's only used during initialization, to keep the
rest grouped together.

Should be backported to 1.8 to ease backporting of upcoming patches,
no functional impact.
2018-01-23 15:27:52 +01:00
Olivier Houchard
e9bad0a936 MINOR: servers: Don't report duplicate dyncookies for disabled servers.
Especially with server-templates, it can happen that servers start with a
placeholder IP, in the disabled state. In this case, we don't want to report
that the same cookie was generated for multiple servers. So defer the test
until the server is enabled.

This should be backported to 1.8.
2018-01-23 14:05:17 +01:00
Emeric Brun
5548291395 BUG/MEDIUM: peers: fix expire date wasn't updated if entry is modified remotely.
The stktable_touch_remote function considers the expire field stored in the
stksess struct.
The expire field was updated on a newly created stksess to store.

But if a stksess with the same key is still present, the expire field was not
updated.

This patch postpones the update of the expire field of the stksess to just
before processing the "touch".
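
A simplified sketch of the resulting flow (real stick-table symbols, details
elided):

  /* refresh the expiry right before the touch, so a remote update of an
   * existing entry also pushes back its expiration */
  ts->expire = tick_add(now_ms, MS_TO_TICKS(t->expire));
  stktable_touch_remote(t, ts, decrefcnt);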

This bug was introduced in commit:

MEDIUM: threads/stick-tables: handle multithreads on stick tables.

And the fix should be backported on 1.8.
2018-01-22 16:03:25 +01:00
Etienne Carriere
a792a0aa93 MINOR: sample: add date_us sample
Add a date_us sample that returns the microsecond part of the timeval
structure representing the current date. The "second" part of the timeval
can already be fetched by the "date" sample.
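
A usage sketch (the header name is illustrative):

  http-response set-header X-Timestamp %[date].%[date_us]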
2018-01-21 07:56:42 +01:00
Willy Tarreau
cc35923c32 BUG/MINOR: poll: too large size allocation for FD events
Commit 80da05a ("MEDIUM: poll: do not use FD_* macros anymore") which
appeared in 1.5-dev18 and which was backported to 1.4.23 made explicit
use of arrays of FDs mapped to unsigned ints. The problem lies in the
allocated size for poll(), as the resulting size is in bits and not
bytes, resulting in poll() arrays being 8 times larger than necessary!

In practice poll() is not used on highly loaded systems, explaining why
nobody noticed. But it definitely has to be addressed.
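
Illustrative arithmetic only (not the actual code): the event arrays need one
bit per FD, so the byte count is the FD count divided by 8, rounded up;
passing the bit count as a byte count over-allocates by a factor of 8.

  #include <stddef.h>

  static size_t fd_evts_bytes(size_t maxfd)
  {
          /* the buggy sizing effectively used maxfd here: 8x too much */
          return (maxfd + 7) / 8;  /* bits rounded up to bytes */
  }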

This fix needs to be backported to all stable versions.
2018-01-17 15:52:11 +01:00
Willy Tarreau
260bf5c106 CONTRIB: debug: fix a few flags definitions
Commit f4cfcf9 ("MINOR: debug/flags: Add missing flags") added a number
of missing flags but a few of them were incorrect, hiding real values.
This can be backported to 1.8.
2018-01-15 18:59:16 +01:00
Jérôme Magnin
4a326cba5b DOC: clarify the scope of ssl_fc_is_resumed
Clarify that it's for incoming connections.
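
For example, to expose it on responses (header name illustrative):

  http-response set-header X-TLS-Resumed %[ssl_fc_is_resumed]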
2018-01-15 14:18:25 +01:00
Christopher Faulet
333694d771 MINOR: spoe: Don't queue a SPOE context if nothing is sent
When some messages must be sent to an agent, the SPOE context of the stream is
queued to be handled by an SPOE applet. If there is no available applet, a new
one is created, thus opening a connection with the agent.

Since the addition of ACL support on messages, some processing can now be
discarded. So, to avoid opening a connection for nothing, the SPOE context is
now queued after the messages are encoded.
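
A sketch of the new ordering (both function names are hypothetical
simplifications of the SPOE internals):

  /* encode first; only queue a context that actually produced messages,
   * so that no agent connection is opened for nothing */
  if (encode_spoe_messages(ctx) > 0)
          queue_spoe_context(ctx);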
2018-01-15 13:48:03 +01:00
Christopher Faulet
336d3ef0e7 MINOR: spoe: add register-var-names directive in spoe-agent configuration
In addition to "option force-set-var", recently added, this directive can be
used to selectivelly register unknown variable names, without totally relaxing
their registration during the runtime, like "option force-set-var" does.

So there is no way for a malicious agent to exhaust memory by defining a too
high number of variable names. In other hand, you need to enumerate all
variable names. This could be painfull in some circumstances.

Remember, this directive is only usefull when the variable names are not
referenced anywhere in the HAProxy configuration or the SPOE one.
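
A configuration sketch (agent and variable names are illustrative):

  spoe-agent my-agent
      register-var-names my_var_1 my_var_2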

Thanks to Etienne Carrière for his help on this part.
2018-01-15 13:47:27 +01:00
Willy Tarreau
d651ba14d4 BUG/MEDIUM: stream: properly handle client aborts during redispatch
James Mc Bride reported an interesting case affecting all versions since
at least 1.5 : if a client aborts a connection on an empty buffer at the
exact moment a server redispatch happens, the CF_SHUTW_NOW flag on the
channel is immediately turned into CF_SHUTW, which is not caught by
check_req_may_abort(), leading the redispatch to be performed anyway
with the channel marked as shut in both directions while the stream
interface correctly establishes. This situation makes no sense.
Ultimately the transfer times out and the server-side stream interface
remains in EST state while the client is in CLO state, and this case
doesn't correspond to anything we can handle in process_stream, leading
to poll() being woken up all the time without any progress being made.
And the session cannot even be killed from the CLI.

So we must ensure that check_req_may_abort() also considers the case
where the channel is already closed, which is what this patch does.
Thanks to James for providing detailed captures allowing to diagnose
the problem.
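
The added condition, in spirit (the helper name is illustrative; the CF_*
flags are the real ones discussed above):

  static int req_channel_closed(const struct channel *req)
  {
          /* consider a channel already shut for writes, not only one
           * about to be shut */
          return (req->flags & (CF_SHUTW | CF_SHUTW_NOW)) != 0;
  }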

This fix must be backported to all maintained versions.
2018-01-12 10:47:48 +01:00
David Carlier
ec5e84552a BUILD/MINOR: ancient gcc versions atomic fix
Commit 1a69af6d38 introduced code for atomics prior to gcc 4.7.
Unfortunately clang defines those version constants as well, which is
misleading.
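
A sketch of the tightened condition (the guard macro name is illustrative):

  /* clang reports itself as gcc 4.2 through __GNUC__/__GNUC_MINOR__,
   * so the version test must also exclude it explicitly */
  #if defined(__GNUC__) && !defined(__clang__) && \
      (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 7))
  #define USE_LEGACY_SYNC_BUILTINS 1
  #endif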
2018-01-11 15:31:07 +01:00
Willy Tarreau
1a69af6d38 MINOR: hathreads: add support for gcc < 4.7
Till now the use of __atomic_* gcc builtins required gcc >= 4.7. Since
some supported and quite common operating systems like CentOS 6 still
come with older versions (4.4) and the mapping to the older builtins
is reasonably simple, let's implement it.

This code is only used for gcc < 4.7. It has been quickly tested on a
machine using gcc 4.4.4 and provided expected results.
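
An illustrative subset of such a mapping onto the older __sync_* builtins
(macro names follow haproxy's HA_ATOMIC_* convention, signatures simplified):

  #define HA_ATOMIC_ADD(val, i)     __sync_add_and_fetch(val, i)
  #define HA_ATOMIC_SUB(val, i)     __sync_sub_and_fetch(val, i)
  #define HA_ATOMIC_OR(val, flags)  __sync_or_and_fetch(val, flags)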

This patch should be backported to 1.8.
2018-01-10 07:51:56 +01:00
William Lallemand
29f690c945 BUG/MEDIUM: mworker: execvp failure depending on argv[0]
The copy_argv() function lacked a check on '-' when removing the -x, -sf and
-st parameters.

When reloading a master process with a path starting with /st, /sf, or
/x.., the copy_argv() function skipped argv[0], leading to an execvp()
without the binary.
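
A sketch of the corrected test (the helper is hypothetical; the real logic
lives in copy_argv()):

  #include <string.h>

  /* only entries beginning with '-' may be reload options to drop, so a
   * binary path such as "/st..." is no longer mistaken for "-st" */
  static int is_reload_opt(const char *arg)
  {
          return arg[0] == '-' && (strcmp(arg + 1, "x") == 0 ||
                                   strcmp(arg + 1, "sf") == 0 ||
                                   strcmp(arg + 1, "st") == 0);
  }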
2018-01-09 23:44:18 +01:00
Olivier Houchard
2ec2db9725 MINOR: dns: Handle SRV record weight correctly.
A SRV record weight can range from 0 to 65535, while haproxy weight goes
from 0 to 256, so we have to divide it by 256 before handing it to haproxy.
Also, a SRV record with a weight of 0 doesn't mean the server shouldn't be
used, so use a minimum weight of 1.
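
The conversion, in spirit (the function name is illustrative):

  /* scale the 0-65535 SRV weight down to haproxy's range and keep a
   * minimum of 1 so that weight-0 records remain usable */
  static int srv_to_hap_weight(int srv_weight)
  {
          int w = srv_weight / 256;

          return w ? w : 1;
  }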

This should probably be backported to 1.8.
2018-01-09 15:43:11 +01:00
Tim Duesterhus
119a5f10e4 BUG/MINOR: lua: Fix return value of Socket.settimeout
The `socket.tcp.settimeout` method of Lua returns `1` in all cases,
while the `Socket.settimeout` method of haproxy returns `0` in all
cases. This breaks the `socket.http` module, because it validates
the return value of `settimeout`.

This bug was introduced in commit 7e7ac32dad
(which is the very first commit adding the Socket class to Lua). This
bugfix should be backported to every branch containing that commit:
- 1.6
- 1.7
- 1.8

A test case for this bug is as follows:

The 'Test' response header will contain an HTTP status code with the
patch applied and will be zero (nil) without the patch applied.

http.lua:
  http = require("socket.http")

  core.register_action("bug", { "http-req" }, function(txn)
  	local b, c, h = http.request {
  		url = "http://93.184.216.34",
  		headers = {
  			Host = "example.com"
  		},
  		create = core.tcp,
  		redirect = false
  	}

  	txn:set_var("txn.foo", c)
  end)

haproxy.cfg:
  global
  	lua-load /scratch/haproxy/http.lua

  frontend fe
  	bind 127.0.0.1:8080
  	http-request lua.bug
  	http-response set-header Test %[var(txn.foo)]

  	default_backend be

  backend be
  	server s example.com:80
2018-01-09 15:22:55 +01:00
Tim Duesterhus
6edab865f6 BUG/MEDIUM: lua: Fix IPv6 with separate port support for Socket.connect
The `socket.tcp.connect` method of Lua requires at least two parameters:
The host and the port. The `Socket.connect` method of haproxy requires
only one when a host with a combined port is provided. This stems from
the fact that `str2sa_range` is used internally in `hlua_socket_connect`.
This very fact unfortunately causes a diversion in the behaviour of
Lua's socket class and haproxy's for IPv6 addresses:

  sock:connect("::1", "80")

works fine with Lua, but fails with:

  connect: cannot parse destination address '::1'

in haproxy, because `str2sa_range` parses the trailing `:1` as the port.

This patch forcefully adds a `:` to the end of the address iff a port
number greater than `0` is given as the second parameter.

Technically this breaks backwards compatibility, because the docs state:

> The syntax "127.0.0.1:1234" is valid. in this case, the
> parameter *port* is ignored.

But: The connect() call can only succeed if the second parameter is left
out (which causes no breakage) or if the second parameter is an integer
or a numeric string.

It seems unlikely that someone would provide an address with a port number
and would also provide a second parameter containing a number other than
zero. Thus I feel this breakage is warranted to fix the mismatch between
haproxy's socket class and Lua's.

This commit should be backported to haproxy 1.8 only, because of the
possible breakage of existing Lua scripts.
2018-01-09 15:22:55 +01:00
Tim Duesterhus
b33754ce86 DOC: lua: Fix typos in comments of hlua_socket_receive 2018-01-09 15:22:49 +01:00
Tim Duesterhus
c6e377e6bb BUG/MINOR: lua: Fix default value for pattern in Socket.receive
The default value of the pattern in `Socket.receive` is `*l` according
to the documentation and in the `socket.tcp.receive` method of Lua.

The default value of `wanted` in `int hlua_socket_receive(struct lua_State *)`
reflects this requirement, but the function fails to ensure this
nonetheless:

If no parameter is given the top of the Lua stack will have the index 1.
`lua_pushinteger(L, wanted);` then pushes the default value onto the stack
(with index 2).
The following `lua_replace(L, 2);` then pops the top index (2) and tries to
replace the index 2 with it.
I am not sure why exactly that happens (possibly because one cannot replace
non-existent stack indices), but this causes the stack index to be lost.

`hlua_socket_receive_yield` then tries to read the stack index 2 to
determine what to read, and gets the value `0` instead of the correct
HLSR_READ_LINE, thus taking the wrong branch.

Fix this by ensuring that the top of the stack is not replaced by itself.

This bug was introduced in commit 7e7ac32dad
(which is the very first commit adding the Socket class to Lua). This
bugfix should be backported to every branch containing that commit:
- 1.6
- 1.7
- 1.8

A test case for this bug is as follows:

The 'Test' response header will contain an HTTP status line with the
patch applied and will be empty without the patch applied. Replacing
the `sock:receive()` with `sock:receive("*l")` will cause the status
line to appear with and without the patch.

http.lua:
  core.register_action("bug", { "http-req" }, function(txn)
  	local sock = core.tcp()
  	sock:settimeout(60)
  	sock:connect("127.0.0.1:80")
  	sock:send("GET / HTTP/1.0\r\n\r\n")
  	response = sock:receive()
  	sock:close()
  	txn:set_var("txn.foo", response)
  end)

haproxy.cfg (bits omitted for brevity):
  global
  	lua-load /scratch/haproxy/http.lua

  frontend fe
  	bind 127.0.0.1:8080
  	http-request lua.bug
  	http-response set-header Test %[var(txn.foo)]

  	default_backend be

  backend be
  	server s 127.0.0.1:80
2018-01-09 15:22:46 +01:00
William Lallemand
99b90af621 BUG/MEDIUM: ssl: cache doesn't release shctx blocks
Since the rework of the shctx with the hot list system, the ssl cache
was putting sessions into the hot list without removing them.
Once all blocks were used, they were all locked in the hot list, which
prevented them from being reused for new sessions.

Bug introduced by 4f45bb9 ("MEDIUM: shctx: separate ssl and shctx")

Thanks to Jeffrey J. Persch for reporting this bug.

Must be backported to 1.8.
2018-01-05 11:46:54 +01:00
Olivier Houchard
e2a34967a9 CLEANUP: rbtree: remove
Remove the rbtree implementation. It's not used, it's not even connected
to the build, and we probably have no use for it.
2018-01-05 10:56:32 +01:00
Willy Tarreau
5d4cafb610 BUILD: ssl: silence a warning when building without NPN nor ALPN support
When building with a library not offering any of these, ssl_conf_cur
is not used.

Can be backported to 1.8.
2018-01-04 19:04:08 +01:00
Willy Tarreau
4a28da1e9d BUG/MEDIUM: h2: properly handle the END_STREAM flag on empty DATA frames
Peter Lindegaard Hansen reported a problem affecting some POST requests
sent by MSIE on 1.8.3. Lukas found that we incorrectly dealt with the
END_STREAM flag on empty DATA frames.

What happens in fact is that while we correctly report that we've read a
zero-byte frame, since commit 8fc016d ("BUG/MEDIUM: h2: support uploading
partial DATA frames") backported into 1.8.2, we've been able to return
without updating the parser's state nor checking the frame flags in this
case.

The fix is trivial: we just need to avoid returning too early.

This fix must be backported to 1.8.
2018-01-04 14:41:00 +01:00
Willy Tarreau
8ec140604a MEDIUM: h2: prepare a graceful shutdown when the frontend is stopped
During a reload operation, instead of keeping the H2 connections opened
forever causing confusion during configuration changes, let's send a
graceful shutdown so that the client knows that it would better open a
new connection for future requests. We can't really catch the signal
from H2, but we can advertise this graceful shutdown upon the next I/O
event (eg: a WINDOW_UPDATE from the client or a new request). One of
the visible effect is that the old process quits much faster.

This patch should be backported to 1.8 since that branch is affected by
this problem.
2017-12-30 18:08:13 +01:00
Willy Tarreau
4576424174 CONTRIB: hpack: add an hpack decoder
This decoder takes a series of hex codes on stdin using one line
per HEADERS frame and shows the decoded headers.
2017-12-30 17:43:28 +01:00
Willy Tarreau
c775f8372b DEBUG: hpack: add more traces to the hpack decoder
These ones are only enabled when DEBUG_HPACK is defined so they have no
effect on the production code.
2017-12-30 17:37:08 +01:00
Willy Tarreau
4f03436c48 DEBUG: hpack: make hpack_dht_dump() expose the output file
It's more convenient to be able to choose between stdout and stderr.
2017-12-30 17:17:07 +01:00
Willy Tarreau
3083276187 MINOR: h2: add a function to report pseudo-header names
For debugging we need to be able to dump pseudo-headers when we know
their name; let's put this there as we already have the other way
around.
2017-12-30 17:17:07 +01:00
Willy Tarreau
bb39b4945b BUG/MAJOR: hpack: don't return direct references to the dynamic headers table
Maximilian Böhm and Lucas Rolff both reported some random failed requests
with HTTP/2. Upon deep investigation on detailed traces provided by Lucas,
it turned out that some header names were occasionally corrupted and used
to point to random strings within the dynamic headers table.

The HPACK decoder must always return copies of header names that point
to the dynamic headers table. Otherwise, the insertion of a header after
the current one leading to a reorganization of the table will change the
data the pointer designates. Unfortunately, one such copy was missing for
indexed names, leading to random request failures due to invalid header
names.
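
The principle of the fix, sketched (names are illustrative; the real decoder
copies the name into its output buffer):

  #include <string.h>

  /* return a copy of a dynamic-table name instead of a pointer into the
   * table, which a later insertion may reorganize */
  static char *dup_dht_name(const char *tbl_ptr, size_t len, char *scratch)
  {
          memcpy(scratch, tbl_ptr, len);
          scratch[len] = '\0';
          return scratch;
  }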

Many thanks to Lucas who ran a large number of tests with full traces
helping to capture a reproducible sequence exhibiting this issue.

This patch must be backported to 1.8.
2017-12-30 17:17:06 +01:00
Willy Tarreau
ff47b3f41d BUG/MEDIUM: http: don't automatically forward request close
Maximilian Böhm and Lucas Rolff reported some frequent HTTP/2 POST
failures affecting version 1.8.2 that were not affecting 1.8.1. Lukas
Tribus determined that these ones appeared consecutive to commit a48c141
("BUG/MAJOR: connection: refine the situations where we don't send shutw()").

It turns out that the HTTP request forwarding engine lets a shutr from
the client be automatically forwarded to the server unless chunked
encoding is in use. It's a bit tricky to meet this condition as it only
happens if the shutr is not reported in the initial request. So if a
request is large enough or the body is delayed after the headers (eg:
Expect: 100-continue), the function quits with channel_auto_close()
left enabled. The patch above was not really related in fact. It's just
that a previous bug was causing this shutw to be skipped at the lower
layers, and the two bugs used to cancel each other out.

In the HTTP request we should only pass the close in tunnel mode, as
other cases either need to keep the connection alive (eg: for reuse)
or will force-close it. Also the forced close will properly take care
of avoiding the painful time-wait, which is not possible with the early
close.

This patch must be backported to 1.8 as it directly impacts HTTP/2, and
may be backported to older version to save them from being abused by
clients causing TIME_WAITs between haproxy and the server.

Thanks to Lukas and Lucas for running many tests with captures allowing
the bug to be narrowed down.
2017-12-29 17:23:40 +01:00
William Lallemand
e134041910 MINOR: don't close stdio anymore
Closing the standard IO FDs (0,1,2) can be troublesome, especially in
the case of the master-worker.

Instead of closing those FDs, they are now pointing to /dev/null which
prevents sending debugging messages to the wrong FDs.
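
A minimal sketch of the approach:

  #include <fcntl.h>
  #include <unistd.h>

  /* point fds 0, 1 and 2 at /dev/null instead of closing them, so that
   * fds created later cannot be mistaken for stdio */
  static void stdio_to_devnull(void)
  {
          int fd = open("/dev/null", O_RDWR);

          if (fd < 0)
                  return;
          dup2(fd, 0);
          dup2(fd, 1);
          dup2(fd, 2);
          if (fd > 2)
                  close(fd);
  }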

This patch could be backported to 1.8.
2017-12-29 16:33:41 +01:00
PiBa-NL
149a81a443 BUG/MEDIUM: mworker: don't close stdio several time
This patch makes sure that a frontend socket that gets created after
initialization won't be closed when the master gets re-executed.

When used in daemon mode, the master-worker is closing the FDs 0, 1, 2
after the fork of the children.

When the master was reloading, those FDs were assigned again during the
parsing of the configuration (probably for some listeners), and the
workers were closing them, thinking they were the stdio.

This patch must be backported to 1.8.
2017-12-29 16:31:10 +01:00
Willy Tarreau
d790143d99 BUG/MEDIUM: h2: ensure we always know the stream before sending a reset
The recent patch introducing the H2_CS_FRAME_E state to emit stream
resets was not totally correct in that in the rare case where there is
no room left to emit the reset, the next call to process it later could
use an uninitialized stream. This only affects responses to frames that
are sent on closed streams though.

This fix must be backported to 1.8.
2017-12-29 11:34:40 +01:00
Davor Ocelic
e9ed281e9f DOC/MINOR: configuration: typo, formatting fixes
- Add simple typo and formatting fixes
- Eliminate a couple > 80 column lines

Changes do not affect technical content and can be backported.
2017-12-27 19:03:32 +01:00
Willy Tarreau
ab83750a29 BUG/MEDIUM: h2: improve handling of frames received on closed streams
The h2spec utility found certain situations where we're returning an
RST_STREAM while a GOAWAY is expected. While we can't always reliably
decide which one to use (eg: after a stream has been closed for a long
time), in practice we often still have the stream available until it's
destroyed at the application level. This provides the flags we need to
verify the conditions that led to its closure, namely if RST was sent
or received, or if it was regularly closed using a double ES.

The first step consists in marking all closed streams as having already
sent an RST_STREAM frame. This will ensure that we can send an RST_STREAM
for a late transmission on a stream we have forgotten about instead of
risking to break the connection. The next steps consist in re-arranging
the H2_SS_CLOSED checks so that we can deliver a GOAWAY frame for the
few cases where an unexpected frame was received after a double ES.

By carefully taking care of these specificities, we can reduce by 4 the
number of remaining compliance issues.

Note: some tests start to become a bit long and are repeated at various
places. Probably adding a bitmask of allowed/forbidden frame types per
state and/or per situation could significantly help. It's likely
that some deeper tests in the frame handlers could also be removed now
as they can't be triggered anymore.

This fix should be backported to 1.8.
2017-12-27 18:44:22 +01:00
Willy Tarreau
a20a519b8f BUG/MEDIUM: h2: properly handle and report some stream errors
Some stream errors applied to half-closed and closed streams are not
properly reported, especially after the stream transitions to the
closed state. The reason is that the code checks for this "error"
stream state in order to send an RST frame. But if the stream was
just closed or was already closed, there's no way to validate this
condition, and the error is never reported to the peer.

In order to address this situation, we'll add a new FRAME_E demux state
which indicates that the previously parsed frame triggered a stream error
of type STREAM CLOSED that needs to be reported. Proceeding like this
will ensure that we don't lose that information even if we can't
immediately send the message. It also removes the confusion where FRAME_A
could be used either for ACKs or for RST.

The state transition has been added after every h2s_error() on the demux
path. It seems that we might need to have two distinct h2s_error()
functions, one for the mux and another one for the demux, though it
would provide little benefit. It also becomes more apparent that the
H2_SS_ERROR state is only used to detect the need to report an error
on the mux direction. Maybe this will have to be revisited later.

This simple change managed to eliminate 5 bugs reported by h2spec.

This fix must be backported to 1.8.
2017-12-27 18:34:50 +01:00
Willy Tarreau
b26881a5d5 BUG/MEDIUM: checks: properly set servers to stopping state on 404
Paul Lockaby reported that since 1.8, disable-on-404 doesn't work
anymore, in that the server stays up despite returning 404. Cyril spotted
that this was caused by a copy-paste error introduced by commit 5a13351
("BUG/MEDIUM: log: check result details truncated.") causing
set_server_running() to be called instead of set_server_stopping() in
this case.
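
The fix, in spirit (call site simplified):

  /* on a 404 with disable-on-404, the server must be marked as
   * stopping, not running */
  set_server_stopping(s);    /* was: set_server_running(s) */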

It can be reproduced with the simple test config below :

  defaults
     mode http
     timeout connect 1s
     timeout client  10s
     timeout server  10s

  listen http
     bind :8888
     option httpchk GET /
     http-check disable-on-404
     server s1 127.0.0.1:9001 check
     server s2 127.0.0.1:9002 check
     http-response add-header x-served-by %s

  listen s1
     bind :9001
     server next 127.0.0.1:9002
     http-response set-status 404

  frontend s2
     bind :9002
     http-request redirect location /

s1 is supposed to be stopping and s2 up, which is not the case. After
calling the correct function, only s2 is used now.

This needs to be backported to 1.8.
2017-12-23 11:16:49 +01:00
Willy Tarreau
a48c141f44 BUG/MAJOR: connection: refine the situations where we don't send shutw()
Since commit f9ce57e ("MEDIUM: connection: make conn_sock_shutw() aware
of lingering"), we refrain from performing the shutw() on the socket if
there is no lingering risk. But there is a problem with this in tunnel
and in TCP modes where a client is explicitly allowed to send a shutw
to the server, even though it is risky.

Not doing it creates this situation reported by Ricardo Fraile and
diagnosed by Christopher: a typical HTTP client (eg: curl) connecting
via the config below to an HTTP server would receive its response,
immediately close while the server remains in keep-alive mode. The
shutr() received by haproxy from the client is "propagated" to the
server side but not acted upon because fdtab[fd].linger_risk is set,
so we expect that the next close will immediately complete this
operation.

  listen proxy-tcp
    bind 127.0.0.1:8888
    mode tcp
    timeout connect 5s
    timeout server  10s
    timeout client  10s
    server server1 127.0.0.1:8000

But since the whole stream will not end until the server closes in
turn, the server doesn't close and haproxy expires on server timeout.
This problem already struck once by waking up an older bug and was
partially fixed with commit 8059351 ("BUG/MEDIUM: http: don't disable
lingering on requests with tunnelled responses") though it was not
enough.

The problem is that linger_risk is not suited here. In fact we need to
know whether or not it is desired to close normally or silently, and
whether or not a shutr() has already been received on this connection.

This is the approach this patch takes, and it solves the problem for
the various difficult modes (tcp, http-server-close, pretend-keepalive).

This fix needs to be backported to 1.8. Many thanks to Ricardo for
providing very detailed traces and configurations.
2017-12-22 18:54:05 +01:00
Willy Tarreau
d4569d1937 BUG/MEDIUM: cache: don't cache the response on no-cache="set-cookie"
If the server mentions no-cache="set-cookie" in the response headers,
we must guarantee that any set-cookie field will not be stored. We
cannot edit the stored response on the fly to trim the set-cookie
header, so we must refrain from storing a response containing such a
header. In theory we could use TX_SCK_PRESENT for this but this one
is only set when the cookie is being watched by the configuration.
Since these responses are not very frequent and often accompanied
with a set-cookie header, let's simply refrain from caching whenever
such a directive is present.

This needs to be backported to 1.8.
2017-12-22 18:03:04 +01:00
Willy Tarreau
504455c533 BUG/MEDIUM: cache: respect the request cache-control header
Till now, if a client emitted a request featuring a cache-control header,
this one was not respected and a stale object could still be delivered.
This patch ensures that:
  - cache-control: no-cache disables retrieval from the cache but does
    not prevent the newly fetched object from being stored;
  - cache-control: no-store can safely retrieve from the cache but
    prevents any fetched object from being stored;
  - cache-control: max-age/max-stale/min-fresh act like no-cache;
  - pragma: no-cache acts like cache-control: no-cache.

This needs to be backported to 1.8.
2017-12-22 17:56:18 +01:00
Willy Tarreau
c9bd34c7e0 BUG/MEDIUM: cache: replace old object on store
Currently the cache aborts a store operation if the object to store
already exists in the cache. This is used to avoid storing multiple
copies at the same time on concurrent accesses. It causes an issue
though, which is that existing unexpired objects cannot be updated.
This happens when any request criterion disables the retrieval from
the cache (eg: with max-age or any other cache-control condition).

For now, let's simply replace the previous existing entry by unlinking
it from the index. This could possibly be improved in the future if
needed.

This fix needs to be backported to 1.8.
2017-12-22 17:56:18 +01:00
Willy Tarreau
7704b1e89a BUG/MEDIUM: cache: do not try to retrieve host-less requests from the cache
All HTTP/1.1 requests lacking the Host header share the same hash key 0
and will be returned the first cached object. Let's add a check on the
call to sha1_hosturi() to prevent this from happening.

This must be backported to 1.8.
2017-12-22 17:56:17 +01:00
Willy Tarreau
0ad8e0dfea MINOR: http: add a function to check request's cache-control header field
The new function check_request_for_cacheability() is used to check if
a request may be served from the cache, and/or allows the response to
be stored into the cache. For this it checks the cache-control and
pragma header fields, and adjusts the existing TX_CACHEABLE and a new
TX_CACHE_IGNORE flags.

For now, just like its response side counterpart, it only checks the
first value of the header field. These functions should be reworked to
improve their parsers and validate all elements.
2017-12-22 17:56:17 +01:00
Willy Tarreau
faf2909f9f BUG/MINOR: cache: do not force the TX_CACHEABLE flag before checking cacheability
The cache used to set this flag before calling
check_response_for_cacheability() due to the way the flags were previously
set (too late), but this is a bad idea as it loses the information of the
implicit caching rules related to the method and the status code. Let's
only rely on what was determined during the request and response parsing
instead and not change it.

This fix must be backported to 1.8, and it requires that the following
patches are also merged :
 - MINOR: http: adjust the list of supposedly cacheable methods
 - MINOR: http: update the list of cacheable status codes as per RFC7231
 - MINOR: http: start to compute the transaction's cacheability from the request
 - BUG/MINOR: http: do not ignore cache-control: public
2017-12-22 15:49:15 +01:00
Willy Tarreau
d3900cc31d BUG/MINOR: http: properly detect max-age=0 and s-maxage=0 in responses
In 1.3.8, commit a15645d ("[MAJOR] completed the HTTP response processing.")
improved the response parser by taking care of the cache-control header
field. The parser is wrong because it is split in two parts, one checking
for elements containing an equal sign and the other one for those without.
The "max-age=0" and "s-maxage=0" tests were located at the wrong place and
thus have never matched. In practice the side effect was very minimal given
that this code used to be enabled only when checking if a cookie had the
risk of being cached or not. Recently in 1.8 it was also used to decide if
the response could be cached but in practice the cache takes care of these
values by itself so there is very limited impact.

This fix can be backported to all stable versions.
2017-12-22 15:49:15 +01:00
Willy Tarreau
12b32f212f BUG/MINOR: http: do not ignore cache-control: public
In check_response_for_cacheability(), we don't check the
cache-control flags if the response is already supposed not to be
cacheable. This was introduced very early when cache-control:public
was not checked, and it basically results in this last one not being
able to properly mark the response as cacheable if it uses a status
code which is non-cacheable by default. Till now the impact is very
limited as it doesn't check that cookies set on non-default status
codes are not cacheable, and it prevents the cache from caching such
responses.

Let's fix this by doing two things :
  - remove the test for !TX_CACHEABLE in the aforementioned function
  - however take care of 1xx status codes here (which used to be
    implicitly dealt with by the test above) and remove the explicit
    check for 101 in the caller

This fix must be backported to 1.8.
2017-12-22 14:43:26 +01:00
Willy Tarreau
83ece462b4 MINOR: http: start to compute the transaction's cacheability from the request
There has always been something odd with the way the cache-control flags
are checked. Since it was made for checking for the risk of leaking cookies
only, all the processing was done in the response. Because of this it is not
possible to reuse the transaction flags correctly for use with the cache.

This patch starts to change this by moving the method check in the request
so that we know very early whether the transaction is expected to be cacheable
and that this status evolves along with checked headers. For now it's not
enough to use from the cache yet but at least it makes the flag more
consistent along the transaction processing.
2017-12-22 14:43:26 +01:00
Willy Tarreau
c55ddce65c MINOR: http: update the list of cacheable status codes as per RFC7231
Since RFC2616, the following codes were added to the list of codes
cacheable by default: 204, 404, 405, 414, 501. For now this is only
checked by the checkcache option to detect cacheable cookies.
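
For reference, the resulting list of status codes cacheable by default, as
per RFC7231 section 6.1 (sketch):

  static const int cacheable_status[] = {
          200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501
  };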
2017-12-22 14:43:26 +01:00
Willy Tarreau
24ea0bcb1d MINOR: http: adjust the list of supposedly cacheable methods
We used to have a rule inherited from RFC2616 saying that the POST
method was the only uncacheable one, but things have changed since then,
and RFC7231+7234 made it clear that in fact only GET/HEAD/OPTIONS/TRACE
are cacheable. Currently this rule is only used to detect cacheable
cookies.
2017-12-22 14:43:26 +01:00
Eric Salama
fe7456f3b7 BUG/MEDIUM: lua: fix crash when using bogus mode in register_service()
When using an incorrect 'mode' as 2nd argument of core.register_service(),
HAProxy crashes while displaying the error message.

To be backported to 1.8, 1.7 and 1.6.
2017-12-22 14:34:54 +01:00