When using 'debug dev fd', the output of laddr and raddr can contain
some garbage.
This patch replaces any control or non-printable character by a '.'.
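For illustration, a minimal sketch of such a filter (the helper name is
hypothetical, this is not the exact code from the patch):

#include <ctype.h>

/* Replace any control or non-printable character in string <s> with '.'
 * so that dumped addresses cannot emit garbage or escape sequences to
 * the terminal.
 */
static void sanitize_for_dump(char *s)
{
        for (; *s; s++)
                if (!isprint((unsigned char)*s))
                        *s = '.';
}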
Historically for size limitation reasons, we would only dump the
backtrace of stuck threads. The problem is that when triggering
a panic or other reasons, we have no backtrace, which effectively
limits it to the watchdog timer. It's also visible in "show threads"
which used to report backtraces for all threads in 2.4 and displays
none nowadays, making its use much more limited.
A first approach could be to just dump the thread that triggers the
panic (in addition to stuck threads). But that remains quite limited
since "show threads" would still display nothing. This patch takes a
better approach consisting in dumping all non-idle threads. This way
the output is less polluted than with the older approach (no need to
dump all those waiting in the poller), and all active threads are
visible, in panics as well as in "show threads". As such, the CLI
command "debug dev panic" now dumps backtraces again. This is already
a benefit which will ease testing of various locations against the
ability to resolve useful symbols.
Dealing with a core and a stripped executable is a pain when it comes
to finding pools, proxies or thread contexts. Let's put a pointer to
these heads and arrays in the post_mortem struct for easier location.
Other critical lists like this could possibly benefit from being added
later.
Here we now have:
- tgroup_info
- thread_info
- tgroup_ctx
- thread_ctx
- pools
- proxies
Example:
$ objdump -h haproxy|grep post
34 _post_mortem 000014b0 0000000000cfd400 0000000000cfd400 008fc400 2**8
(gdb) set $pm=(struct post_mortem*)0x0000000000cfd400
(gdb) p $pm->tgroup_ctx[0]
$8 = {
threads_harmless = 254,
threads_idle = 254,
stopping_threads = 0,
timers = {
b = {0x0, 0x0}
},
niced_tasks = 0,
__pad = 0xf5662c <ha_tgroup_ctx+44> "",
__end = 0xf56640 <ha_tgroup_ctx+64> ""
}
(gdb) info thr
Id Target Id Frame
* 1 Thread 0x7f9e7706a440 (LWP 21169) 0x00007f9e76a9c868 in raise () from /lib64/libc.so.6
2 Thread 0x7f9e76a60640 (LWP 21175) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
3 Thread 0x7f9e7613d640 (LWP 21176) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
4 Thread 0x7f9e7493a640 (LWP 21179) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
5 Thread 0x7f9e7593c640 (LWP 21177) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
6 Thread 0x7f9e7513b640 (LWP 21178) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
7 Thread 0x7f9e6ffff640 (LWP 21180) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
8 Thread 0x7f9e6f7fe640 (LWP 21181) 0x00007f9e76b343c7 in wait4 () from /lib64/libc.so.6
(gdb) p/x $pm->thread_info[0].pth_id
$12 = 0x7f9e7706a440
(gdb) p/x $pm->thread_info[1].pth_id
$13 = 0x7f9e76a60640
(gdb) set $px = *$pm->proxies
while ($px != 0)
printf "%#lx %s served=%u\n", $px, $px->id, $px->served
set $px = ($px)->next
end
0x125eda0 GLOBAL served=0
0x12645b0 stats served=0
0x1266940 comp served=0
0x1268e10 comp_bck served=0
0x1260cf0 <OCSP-UPDATE> served=0
0x12714c0 <HTTPCLIENT> served=0
Placing it in its own section will ease its finding, particularly in
gdb which is too dumb to find anything in memory. Now it will be
sufficient to issue this:
$ gdb -ex "info files" -ex "quit" ./haproxy core 2>/dev/null |grep _post_mortem
0x0000000000cfd300 - 0x0000000000cfe780 is _post_mortem
or this:
$ objdump -h haproxy|grep post
34 _post_mortem 00001480 0000000000cfd300 0000000000cfd300 008fc300 2**8
to spot the symbol's address. Then it can be read this way:
(gdb) p *(struct post_mortem *)0x0000000000cfd300
In order to ease finding of the post_mortem struct in core dumps, let's
make it start with a recognizable pattern of exactly 32 chars (to
preserve alignment):
"POST-MORTEM STARTS HERE+7654321\0"
It can then be found like this from gdb:
(gdb) find 0x000000012345678, 0x0000000100000000, 'P','O','S','T','-','M','O','R','T','E','M'
0xcfd300 <post_mortem>
1 pattern found.
Or easier with any other more practical tool (who has ever used "find" in
gdb, given that it cannot iterate over maps and is 100% useless?).
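For example, a plain strings(1) scan on the core will locate the pattern;
note that -t x prints the file offset, which still needs to be mapped back
to a virtual address through the core's program headers:

$ strings -t x core | grep 'POST-MORTEM STARTS HERE'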
During the 11th and 12th iterations of the development cycle for the H2 auto
rx window, several approaches were attempted to figure if another buffer
could be allocated or not. One of them consisted in looping back to the
beginning of the function requesting a new buffer slot and getting one
if the buffer was either apparently or confirmed full. The latest one
consisted in directly allocating the next buffer from the two places
where it's found to be proven full, instead of checking with the now
defunct h2s_may_get_rxbuf() if we were allowed to get one and loop.
That approach was retained. In this case the "full" variable is no
longer needed, so let's get rid of it because the construct looks bogus
and confuses coverity (and possibly code readers as the intent is unclear
compared to the code).
Commit 091de0f9b2 ("MINOR: debug: slightly change the thread_dump_pointer
signification") caused the following warning to be emitted when threads
are disabled:
src/debug.c: In function 'ha_thread_dump_one':
src/debug.c:359:9: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
Let's just disguise the pointer to silence it. It should be backported
where the patch above was backported, since it was part of a series aiming
at making thread dumps more exploitable from core dumps.
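For reference, a common portable way to avoid such warnings is to go
through memcpy() instead of a type-punning cast; a minimal sketch, not
necessarily the exact fix applied here:

#include <string.h>

/* Reinterpret the bytes of a pointer as an unsigned long without a
 * type-punned dereference; modern compilers optimize the memcpy away.
 */
static unsigned long ptr_to_ulong(void *const *p)
{
        unsigned long v;
        memcpy(&v, p, sizeof(v));
        return v;
}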
There is no reason to disable the 0-copy data forwarding if an end-of-stream
was reported on the consumer side. Indeed, the consumer will send data in
this case. So there is no reason to check the read side here.
This patch may be backported as far as 2.9.
When the response forwarding is aborted, we must not report a client abort
if an EOS was seen on the client side. Only an abort performed by the stream
must be considered.
This bug was introduced when the SHUTR flag was split into 2 flags.
This patch must be backported as far as 2.8.
When some data must be sent to the endpoint but an error was previously
reported, nothing is performed and we leave. But, in this case, the SC is not
notified that sends are blocked.
It is indeed an issue if the endpoint reports an error after consuming all
data from the SC. In the endpoint the outgoing data are trashed because of
the error, but on the SC, everything was sent, even if an error was also
reported.
Because of this bug, it is possible to have outgoing data blocked at the SC
level but without any write timeout armed. In some cases, this may lead to
blocking conditions where the stream is never closed.
So now, when outgoing data cannot be sent because a previous error was
triggered, a blocked send is reported. This way, it is possible to report a
write timeout.
This patch should fix the issue #2754. It must be backported as far as 2.8.
Each server is inserted in a global list named servers_list on
new_server(). This list is then only used to finalize servers
initialization after parsing.
On dynamic server creation, there is no issue as new_server() is under
thread isolation. However, when a server is deleted after its refcount
reached zero, srv_drop() removes it from servers_list without lock
protection. In the long term, this can cause list corruption and crashes,
especially if multiple adjacent servers are removed in parallel.
To fix this, convert servers_list to a mt_list. This should not impact
performance as servers_list is not used during runtime outside of server
creation/deletion.
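A rough sketch of the conversion, using HAProxy's mt_list primitives (the
member name below is illustrative):

/* the global head becomes a locked list */
struct mt_list servers_list = MT_LIST_HEAD_INIT(servers_list);

/* in new_server(): insertion now uses the locked primitive */
MT_LIST_APPEND(&servers_list, &srv->global_list);

/* in srv_drop(), once the refcount reached zero */
MT_LIST_DELETE(&srv->global_list);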
This should fix github issue #2733. Thanks to Chris Staite who first
found the issue here.
This must be backported up to 2.6.
If a dynamic server is added with check or agent-check, its refcount is
incremented after server keyword parsing. However, if add server fails
at a later stage, the refcount is only decremented once, which prevents the
server from being fully released.
This causes a leak with a server which is detached from most of the
lists but still exists in the system.
This bug is considered minor as only a few conditions may cause a
failure in add server after check/agent-check initialization. This is
the case if there is a naming collision or the dynamic ID cannot be
generated.
To fix this, simply decrement server refcount on add server error path
if either check and/or agent-check are flagged as activated.
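The fix looks roughly like this on the error path (sketch only; the label
and the exact flag tests are illustrative):

err:
        /* drop the extra reference taken when check/agent-check were
         * initialized, in addition to the regular one
         */
        if (srv->check.state & CHK_ST_CONFIGURED ||
            srv->agent.state & CHK_ST_CONFIGURED)
                srv_drop(srv);
        srv_drop(srv);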
This bug is related to github issue #2733. Thanks to Chris Staite who
first found the leak.
This must be backported up to 2.6.
There are two parts in mworker_cli_proxy_create(): allocating and setting up
the MASTER proxy, and allocating and setting up servers on ipc_fd[0] of the
sockpairs shared with workers.
So, let's split mworker_cli_proxy_create() into two functions, one for each
part. Each of them takes **errmsg as an argument to write an error message,
which may be triggered by some subcalls. The content of this errmsg will
allow extending the final alert message shown to the user if these new
functions fail.
The main goal of this split is to allow moving these two parts independently
in the future, and to make the haproxy initialization code in haproxy.c more
transparent.
Before refactoring the master-worker architecture, resources to set up the
master CLI for the new worker process (shared sockpair, entry in proc_list)
were created in init() before parsing the configuration and binding listening
sockets. So, the master, during its re-exec, had to clean up the new worker's
resources when it failed at some initialization step before the fork.
Now the fork happens very early and the worker parses its configuration by
itself. If it fails during the initialization stage, all cleanups (deleting
the fds of the shared sockpair, proc_list cleanup) are performed in the
SIGCHLD handler upon catching the SIGCHLD corresponding to this new worker.
So, there is no longer any need to call mworker_cleanup_proc() in
mworker_reexec().
As for mworker_cleanlisteners(), there is no longer any need to call this
function. The master now parses only "global" and "program" sections, so it
allocates only the MASTER proxy, which is stopped in mworker_reexec() by
mworker_cli_proxy_stop().
Let's keep the definitions of mworker_cleanlisteners() and
mworker_cleanup_proc() in mworker.c for the moment. We may reuse parts of
their code later.
As the master-worker fork now happens at the early init stage and the worker
then parses its configuration and performs all initialization steps, let's
duplicate the startup logs ring for it, just before the moment when it enters
its polling loop. The startup logs ring content is shown as the output of the
"reload" master CLI command, and we should be able to dump the worker
initialization logs there.
Log messages are written to the startup logs ring only when MODE_STARTING is
set (see print_message()). So, to be able to keep the last worker alerts in
the startup logs, let's withdraw MODE_STARTING and reset the user messages
context just before entering the polling loop.
This fix does not need to be backported as it is a part of previous patches
from this version, which refactor master-worker architecture.
This patch simplifies the code of startup_logs_init_shm(). We no longer
re-exec the master process twice after each reload to free the unused memory
it had to allocate because it parsed all configuration sections. So, there is
no longer any need to keep the SHM fd opened between the first and the next
reloads. We can completely remove HAPROXY_STARTUPLOGS_FD.
In step_init_1() we continue to call startup_logs_init_shm() to open the SHM
and to allocate the startup logs ring area within it. In master-worker mode,
the worker duplicates the initial startup logs ring after sending its READY
state to the master. Sharing the same ring between the two processes until
the worker finishes its initialization allows showing the worker's startup
logs in the master CLI output.
During the next reload the master process should free the memory allocated
for the ring structure. Then, after the execvp(), it will reopen and map the
SHM area again and reallocate the ring structure.
After sending its "READY" status, the worker should not keep access to the
MASTER proxy; thus, it shouldn't be able to send any further commands to the
master process.
To achieve this, let's stop, in the master context, the master CLI listener
attached to the sockpair shared with the worker. We do this just after
receiving the worker's status message.
When the master enters recovery mode after an unsuccessful reload,
HAPROXY_LOAD_SUCCESS should be set to 0. This way,
cli_io_handler_show_cli_sock() can dump on the master CLI the warnings and
alerts saved in the startup logs ring.
No need to backport this fix, as this is related to the previous patches in
this version to refactor master-worker architecture.
On systems where many libs are loaded, it's hard to track suspected
leaks. Having a per-DSO summary makes it more convenient. That's what
we're doing here by summarizing all calls per DSO before showing the
total.
The previous commit contains a bug in some COUNT_IF() relying on the pipe
inside the IOBUF. We must take care to have a pipe before checking its size.
No backport needed.
Debug counters were added on all connection errors where pending data remain
blocked in the input or output buffers. The same is performed when the H1C is
released, when the connection is closed and when a timeout is reached. The
idea is to be able to count all cases where data are lost, especially the
outgoing ones.
gcc-4.8 is unhappy with the cfg_file initialization:
src/flt_spoe.c: In function 'parse_spoe_flt':
src/flt_spoe.c:2202:9: warning: missing braces around initializer [-Wmissing-braces]
struct cfgfile cfg_file = {0};
^
src/flt_spoe.c:2202:9: warning: (near initialization for 'cfg_file.list') [-Wmissing-braces]
This is due to the embedded list member. Initializing it to empty like
we do almost everywhere else makes it happy. No backport is needed as
this was changed in 3.1-dev5 only.
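The fix is along these lines, assuming HAProxy's usual LIST_HEAD_INIT idiom
for embedded list heads:

/* before: gcc-4.8 trips on the embedded struct list member */
struct cfgfile cfg_file = {0};

/* after: initialize the embedded list head explicitly */
struct cfgfile cfg_file = { .list = LIST_HEAD_INIT(cfg_file.list) };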
As described in GH #2765, there were situations where http connections
would be re-used for requests to different endpoints, which is obviously
unexpected. In GH #2765, this occurred with an httpclient and UNIX socket
combination, but later code analysis revealed that while disabling http
reuse on httpclient proxy helped, it didn't fix the underlying issue since
it was found that conn_calculate_hash_sockaddr() didn't take into account
families such as AF_UNIX or AF_CUST_SOCKPAIR, and because of that the
sock_addr part of the connection wasn't hashed.
To properly fix the issue, let's explicitly handle UNIX (both regular and
ABNS) and AF_CUST_SOCKPAIR families, so that the destination address is
properly hashed. To prevent this bug from re-appearing: when the family
isn't known, instead of doing nothing like before, let's fall back to a
generic (suboptimal) hashing which hashes the whole sockaddr_storage struct.
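A sketch of the idea (simplified, not HAProxy's exact code; hash_update()
stands for whatever folds <len> bytes into the running hash):

#include <sys/socket.h>
#include <sys/un.h>

static void hash_dst_addr(const struct sockaddr_storage *ss,
                          void (*hash_update)(const void *buf, size_t len))
{
        switch (ss->ss_family) {
        case AF_UNIX: {
                const struct sockaddr_un *un = (const struct sockaddr_un *)ss;

                /* covers both regular and abstract (ABNS) sockets */
                hash_update(un->sun_path, sizeof(un->sun_path));
                break;
        }
        default:
                /* unknown family: generic (suboptimal) fallback so the
                 * address is never silently left out of the hash
                 */
                hash_update(ss, sizeof(*ss));
                break;
        }
}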
As a workaround, http-reuse may be disabled on impacted proxies.
(unfortunately this doesn't help for httpclient since reuse policy
defaults to safe and cannot be modified from the config)
It should be backported to all stable versions.
Shout out to @christopherhibbert for having reported the issue and
provided a trivial reproducer.
[ada: prior to 3.0, ctx adjt is required because conn_hash_update()'s
prototype is slightly different]
Sometimes it would be desirable to include some debugging output only
under certain conditions, but the end of the transfer is too late to
apply some rules.
Here we take the approach of making a converter ("when") that takes a
condition among an arbitrary list, and decides whether or not to let
the input sample pass through or not based on the condition. This
allows for example to log debugging information only when an error
was encountered during the processing (sort of an extension of
dontlog-normal). The conditions are quite limited (stopping, error,
normal, toapplet, forwarded, processed) and can be negated. The
converter can also be chained to use more complex conditions.
A suggested example would be:
# log "dbg={-}" when fine, or "dbg={... debug info ...}" on error:
log-format "$HAPROXY_HTTP_LOG_FMT dbg={%[bs.debug_str,when(!normal)]}"
The idea here is to record how many times a filter is being called on a
stream. We're incrementing the same counter all along, regardless of the
type of event, since the purpose is essentially to detect one that might
be misbehaving. The number of calls is reported in "show sess all" next
to the filter name. It may also help detect suboptimal processing. For
example compressing 1GB shows 138k calls to the compression filter, which
is roughly two calls per buffer. Maybe we wake up with incomplete buffers
and compress less. That's left for a future analysis.
Process_stream() is a complex function and a few times some loops were
either witnessed or suspected. Each time this happens it's extremely
difficult to figure why because it involves combinations of analysers,
filters, errors etc.
Let's at least maintain a set of 4 counters per stream that report the
number of times we've been through each of the 4 most important blocks
(stconn changes, request analysers, response analysers, and propagation
of changes down). These ones are stored in the stream and reported in
"show sess all", just like they will be reported in panic dumps.
Now it is possible to have info about front and back H1 multiplexer. For instance:
<134>Oct 22 18:10:46 haproxy[3841864]: 127.0.0.1:44280 [22/Oct/2024:18:10:43.265] front-http back-http/www 0/0/-1/-1/3082 503 217 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1" fs=< h1s=0x13b6f10 h1s.flg=0x14010 .sd.flg=0x50404601 .req.state=MSG_DONE .res.state=MSG_DONE .meth=GET status=503 .sd.flg=0x50404601 .sc.flg=0x00034482 .sc.app=0x11e4c30 .subs=(nil) h1c.flg=0x0 .sub=0 .ibuf
=0@(nil)+0/0 .obuf=0@(nil)+0/0 .task=0x1337d10 .exp=<NEVER> conn.flg=0x80000300> bs=< h1s=0x13bb400 h1s.flg=0x100010 .sd.flg=0x10400001 .req.state=MSG_RQBEFORE .res.state=MSG_RPBEFORE .meth=UNKNOWN status=0 .sd.flg=0x10400001 .sc.flg=0x0003c007 .sc.app=0x11e4c30 .subs=(nil) h1c.flg=0x80000000 .sub=0 .ibuf=0@(nil)+0/0 .obuf=0@(nil)+0/0 .task=0x12ba610 .exp=<NEVER> conn.flg=0x5c0300>
To have this log message, the log-format must be set to:
log-format "$HAPROXY_HTTP_LOG_FMT fs=<%[fs.debug_str]> bs=<%[bs.debug_str]>"
Debug counters are added to track errors about a wrong payload length during
message formatting (on the sending path). Aborts are also concerned.
Connection shutdowns and errors happening while the end of the message was
not reached are now tracked. On the sending path, shutdowns performed while
the whole message was not yet forwarded are tracked too.
Not all aborts are tracked for now, only the somewhat ambiguous ones. Mainly,
aborts during the data forwarding are concerned. Those triggered during the
request or the response analysis are easier to analyze with the stream
termination state.
When the abortonclose option is enabled on the backend, at the SC level, we
must still pretend the SE has more data to deliver to be able to receive the
EOS. It must be performed in 2 places:
* When the backend is set and the connection is requested. It is when the
option is seen for the first time.
* After a receive attempt, if the EOI flag is set on the sedesc.
Otherwise, when an abort is detected by the mux, the SC is not
notified.
This patch should fix the issue #2764.
This bug probably exists in all stable version but is only visible since
bca5e1423 ("OPTIM: stconn: Don't pretend mux have more data to deliver on
EOI/EOS/ERROR"). So I suggest to not backport it for now, except if the commit
above is backported.
When data are sent via the zero-copy data forwarding, in h2_done_ff, we must
be sure to remove the H2 stream from the send list if something is sent. It
was only performed if no blocking condition was encountered. But we must
also do it if something is sent. Otherwise the transfer may be blocked till
timeout.
This patch must be backported as far as 2.9.
During the zero-copy data forwarding, the caller specifies the maximum amount
of data the producer may push. However, the HTML stats applet does not use
it and can fill all the free space in the buffer. It is especially an issue
when the consumer is limited by a flow control, like the H2, because we may
emit too large a DATA frame in this case. It is especially visible with big
buffers (for instance 32k).
In the early days of zero-copy data forwarding, the caller was responsible
for passing a properly resized buffer. This changed over the different
refactoring steps, but the HTML stats applet was not updated accordingly.
To fix the bug, the buffer used to dump the HTML page is resized to be sure
that not too much data is dumped.
This patch should solve the issue #2757. It must be backported to 3.0.
Issuing "debug dev counters" on the CLI will now scan all existing
counters, and report their count, type, location, function name, the
condition and an optional comment passed to the macro.
The command takes a number of arguments:
- "show": this is the default, it will just list the counters
- "reset": will reset the matching counters instead of listing them
- "all": by default, only non-zero counters are listed. With "all",
they are all listed
- "bug": restrict the reset or dump to counters of type "BUG" (BUG_ON usually)
- "chk": restrict the reset or dump to counters of type "CHK" (CHECK_IF)
- "cnt": restrict the reset or dump to counters of type "CNT" (COUNT_IF)
The types may be cumulated, and the options entered in any order. Here's
an example of the output of "debug dev counters show all bug":
Count Type Location function(): "condition" [comment]
0 BUG ring.h:114 ring_dup(): "max > ring_size(dst)"
0 BUG vecpair.h:223 vp_getblk_ofs(): "ofs >= v1->len + v2->len"
0 BUG buf.h:395 b_add(): "b->data + count > b->size"
0 BUG buf.h:106 b_room(): "b->data > b->size"
0 BUG task.h:328 _task_queue(): "(ulong)caller & 1"
0 BUG task.h:324 _task_queue(): "task->tid != tid"
0 BUG task.h:313 _task_queue(): "(ulong)caller & 1"
(...)
This is expected to be convenient combined with the use and abuse of
COUNT_IF() at select locations.
This macro works exactly like BUG_ON() except that it never logs anything
nor crashes, it only implements an atomic counter that is incremented on
every call. This can be used to count a number of unlikely events that are
worth checking at run time on setups showing unusual and unreproducible
behaviors.
These macros do not always kill the process, and sometimes it would be
nice to know if some match or not, and how many times (especially for the
CHECK_IF one).
This commit adds a new section "dbg_cnt" made of structs that contain
function name, file name, line number, check type, condition and match
count. A new macro __DBG_COUNT() adds one to the counter, and is placed
inside _BUG_ON() and _BUG_ON_ONCE(). It's worth noting that the exact
type of the check is not very precise but in practice we don't care,
as most checks will cause the process to die anyway unless they're of
type _BUG_ON_ONCE() (used by CHECK_IF by default).
All of this is limited to !defined(USE_OBSOLETE_LINKER) because we're
creating a section, thus we need a modern linker to be able to scan
this section later. Doing so adds ~50kB to the executable due to the
~1266 BUG_ON() and others placed there. That's not huge in comparison
to the visibility it can provide.
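Schematically, the mechanism looks like this (simplified sketch, not the
exact HAProxy code):

#define unlikely(x) __builtin_expect(!!(x), 0)

/* one static descriptor per call site, all grouped in the "dbg_cnt"
 * section so that a tool (or the CLI) can later walk them between
 * __start_dbg_cnt and __stop_dbg_cnt.
 */
struct dbg_cnt {
        const char *func, *file;
        int line;
        const char *type;    /* "BUG", "CHK" or "CNT" */
        const char *cond;    /* stringified condition */
        unsigned long count;
};

#define __DBG_COUNT(_type, _cond) do {                                \
        static struct dbg_cnt __attribute__((section("dbg_cnt"),      \
                                             used)) _d =              \
                { __func__, __FILE__, __LINE__, _type, _cond, 0 };    \
        __atomic_add_fetch(&_d.count, 1, __ATOMIC_RELAXED);           \
} while (0)

/* COUNT_IF(): count matches, never log nor crash */
#define COUNT_IF(cond) do {                                           \
        if (unlikely(cond))                                           \
                __DBG_COUNT("CNT", #cond);                            \
} while (0)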
The BUG_ON() macros are made of two levels so as to resolve the condition
to a string. However this doesn't offer much flexibility for performing
other operations when the condition is validated, so let's adjust them so
that the condition is checked in the outer macro and the operations are
performed in the inner one.
A stream may be shut without any HTX EOM reported to announce a proper
closure. This is the case for QCS instances flagged with
QC_SF_UNKNOWN_PL_LENGTH. The shut is performed with an empty FIN emission
instead of a RESET_STREAM. This has been implemented since the following
patch:
24962dd178
BUG/MEDIUM: mux-quic: do not emit RESET_STREAM for unknown length
However, in case of HTTP/3, an empty FIN should only be done after a
full message is emitted, which requires at least a HEADERS frame. If an
empty FIN is emitted without it, client may interpret this as invalid
and close the connection. To prevent this, fallback to a RESET_STREAM
emission if no data were emitted on the stream.
This was reproduced using ngtcp2-client with 10% loss (-r 0.1) on a
remote host, with httpterm request "/?s=100k&C=1&b=0&P=400". An error
ERR_H3_FRAME_UNEXPECTED is returned by ngtcp2-client when the bug
occurs.
Note that this change is incomplete. The message validity depends solely
on the application protocol in use. As such, a new app_ops callback
should be implemented to ensure the stream is closed accordingly.
However, this first patch ensures that at least HTTP/3 case is valid
while keeping a minimal backport process.
This should be backported up to 2.8.
An empty STREAM frame can be emitted by QUIC MUX to notify about a
delayed FIN when there is no data left to transmit. This requires a
tedious comparison on stream offset in qmux_ctrl_send() to ensure an
empty stream frame is not always considered as retransmitted, which is
necessary to locally close the QCS instance.
Simplify this by unsubscribing from the streamdesc layer when the QCS is
locally closed on FIN transmission notification. This prevents all
future retransmitted frames from being reported to the QCS instance,
especially any potentially retransmitted empty FIN.
Before this patch, when a wrong argument was provided in the configuration
for the mworker-max-reloads keyword, the parser showed the errors below on
stderr:
[WARNING] (1820317) : config : parsing [haproxy.cfg:154] : (null)parsing [haproxy.cfg:154] : 'mworker-max-reloads' expects an integer argument.
When, by mistake, two arguments were provided instead of one, this also
triggered a buggy error message:
[ALERT] (1820668) : config : parsing [haproxy.cfg:154] : 'mworker-max-reloads' cannot handle unexpected argument '45'.
[WARNING] (1820668) : config : parsing [haproxy.cfg:154] : (null)
So, as 'mworker-max-reloads' is parsed in discovery mode by the master
process, let's now align its parser with all the others which may be called
in this mode. This way, when there are too many arguments or the argument
isn't a valid integer, we return proper error codes to the global section
parser and messages are formatted properly.
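The aligned parser now has roughly this shape (simplified sketch; the exact
signature and helpers follow the usual cfgparse conventions, and the
variable names are illustrative):

static int cfg_parse_mworker_max_reloads(char **args, int section_type,
                                         struct proxy *curpx,
                                         const struct proxy *defpx,
                                         const char *file, int line,
                                         char **err)
{
        char *stop;
        long reloads;

        if (too_many_args(1, args, err, NULL))
                return -1;

        reloads = strtol(args[1], &stop, 10);
        if (!*args[1] || *stop != '\0' || reloads < 0) {
                memprintf(err, "'%s' expects an integer argument.", args[0]);
                return -1;
        }

        max_reloads = reloads;
        return 0;
}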
This fix should be backported to all stable versions.
Initially we agreed to split builds into "latest" for the development branch
and fixed 22.04 for stable branches. It got broken when the "latest" label
migrated from ubuntu-22 to ubuntu-24 ... because of the build cache. The
cache key is built using the runner label; it was not prepared to use the
same "latest" cache from Ubuntu 22 on Ubuntu 24. To make things clear, let's
stick explicitly to Ubuntu 24.
PCRE2 is recommended; PCRE was chosen for no reason. GHA Ubuntu 22 images
include both libs, but the recent Ubuntu 24 images do not. Let us prepare
for Ubuntu 24.
Commit cf3fe1eed ("MINOR: mux-h2/traces: print the size of the DATA
frames") added the size of the DATA frame to the traces. Unfortunately
it uses ullong instead of ulong to cast a pointer, which breaks the
build on 32-bit platforms. Let's just switch it to ulong which works
on both.