Commit Graph

9 Commits

Author SHA1 Message Date
Willy Tarreau 8178a5211c MAJOR: threads/plock: update the embedded library again
This updates the local copy of the plock library to benefit from finer
memory ordering, EBO on more operations (such as when take_w() and stow()
wait for readers to leave), refined EBO on common operations such as
attempts to upgrade R to S, and the removal of a counter-productive prior
read in rtos() and take_r().
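
Because the EBO loop is central to several of these changes, here is a
minimal sketch of the idea in C (names like ebo_wait and the cap value
are illustrative, not plock's actual code):

    /* Spin until (*lock_word & mask) clears, doubling the pause
     * between polls so contended CPUs back off instead of hammering
     * the cache line. */
    static inline void ebo_wait(const unsigned long *lock_word,
                                unsigned long mask)
    {
        unsigned int backoff = 1;

        while (__atomic_load_n(lock_word, __ATOMIC_RELAXED) & mask) {
            for (unsigned int i = 0; i < backoff; i++)
                __asm__ volatile("" ::: "memory"); /* compiler barrier as a
                                                    * cpu_relax() stand-in */
            if (backoff < 1024)
                backoff <<= 1; /* exponential growth, capped */
        }
    }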

These changes have shown a 5% increase on regular operations on ARM,
a 33% performance increase on stick-tables on ARM and 2% on x86, and
a 14% and a 4% improvement on peers updates on ARM and x86 respectively.

The availability of relaxed operations will probably be useful for stats
counters which are still extremely expensive to update.
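
As an illustration of what relaxed operations buy there, a C11 sketch
of such a counter (the counter name is hypothetical):

    #include <stdatomic.h>

    /* A pure event counter only needs atomicity, not ordering, so a
     * relaxed increment avoids the cost of a full memory barrier. */
    static atomic_ulong stats_requests;

    static inline void stats_inc(void)
    {
        atomic_fetch_add_explicit(&stats_requests, 1,
                                  memory_order_relaxed);
    }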

The following plock commits were included in this update:

  9db830b plock: support inlining exponential backoff code
  008d3c2 plock: make the rtos upgrade faster
  2f76dde atomic: clean up the generic xchg()
  3c6919b atomic: make sure that the no-return macros do not return a value
  97c2bb7 atomic: make the fallback bts use the pointed type for the shift
  f4c1880 atomic: also implement the missing pl_btr()
  8329b82 atomic: guard all generic definitions to make it easier to provide specific ones
  7c5cb62 atomic: use C11 atomics when available
  96afaf9 atomic: prefer the C11 definitions in general
  f3ec7a6 atomic: implement load/store/atomic barriers
  8bdbd1e atomic: add atomic load/stores
  0f604c0 atomic: add more _noret operations
  3fe35db atomic: remove the (void) cast from the C11 operations
  3b08a7c atomic: allow to define the fallback _noret variants
  28deb22 atomic: make x86 arithmetic operations the _noret variants
  8061fe2 atomic: handle modern compilers that support returning flags
  b8b91b7 atomic: add the fetch-and-<op> operations (pl_ld<op>)
  59817ca atomic: add memory order variants for most operations
  a40774f plock: explicitly make use of the pl_*_noret operations
  6f1861b plock: switch to pl_sub_noret_lax() for cancellation
  c013980 plock: use pl_ldadd{_lax,_acq,} instead of pl_xadd()
  382eea3 plock: use a release ordering when dropping the lock
  60d750d plock: use EBO when waiting for readers to leave in take_w() and stow()
  fc01c4f plock: improve EBO a little bit
  1ef6390 plock: switch to CAS + XADD for pl_take_r()
2023-08-11 19:03:35 +02:00
Your Name 1e237d037b MINOR: plock: use an ARMv8 instruction barrier for the pause instruction
As suggested by @AGSaidi in issue #958, on ARMv8 it's convenient to use
an "isb" instruction in pl_cpu_relax() to improve fairness. Without it
I've met a few watchdog conditions on valid locks with 16 threads,
indicating that some threads couldn't manage to get the lock within 2
seconds. It never happened again with it. In addition, the performance
increased by slightly more than 5% thanks to the reduced contention.
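
For reference, a sketch of the intent (simplified, not plock's exact
macro):

    /* On ARMv8, an instruction barrier stalls the spinning thread
     * just long enough to improve fairness; x86 keeps the usual
     * "rep nop" (pause) hint. */
    #if defined(__aarch64__)
    #define pl_cpu_relax() __asm__ volatile("isb" ::: "memory")
    #elif defined(__x86_64__) || defined(__i386__)
    #define pl_cpu_relax() __asm__ volatile("rep;nop" ::: "memory")
    #else
    #define pl_cpu_relax() __asm__ volatile("" ::: "memory")
    #endif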

This should be backported as far as 2.2, possibly even 2.0.
2020-11-29 14:53:33 +01:00
Ilya Shipitsin 47d17182f4 CLEANUP: assorted typo fixes in the code and comments
This is the 10th iteration of typo fixes
2020-06-26 11:27:28 +02:00
Willy Tarreau 2532bd2f81 BUILD: threads/plock: fix a build issue on Clang without optimization
[ plock commit 4c53fd3a0b2b1892817cebd0db012a52f4087850 ]

Pieter Baauw reported a build issue affecting haproxy after plock was
included. It happens that expressions of the form :

     if ((const) ? (expr1) : (expr2))
       do_something()

always produce code for both expr1 and expr2 on Clang when building
without optimization. The resulting asm code is even funny, basically
doing :

     mov reg, 1
     cmp reg, 1
     ...

This causes our sizeof() tests to fail to build because we purposely
reference a fake function that reports the location and nature of the
inconsistency, but this fake function appears in the object code despite
all conditions being there to avoid it.

However the compiler is still smart enough to optimize away code doing

    if (const)
       do_something()

So we simply repeat the condition before do_something(), and the dummy
function is not referenced anymore unless really required.
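
A self-contained illustration of the workaround, with made-up names
(__unsupported_size and op32 are not plock's real identifiers):

    #include <stdio.h>

    /* Declared but never defined: any emitted reference to it makes
     * the link fail, which is the intended build-time size check. */
    void __unsupported_size(void);

    static int op32(int *p) { return *p + 1; }

    /* Repeating the constant sizeof() condition before the fake call
     * lets even unoptimized Clang drop the reference when the size
     * matches, which the plain (c ? a : b) form did not guarantee. */
    #define op(p) ((sizeof(*(p)) == 4) ? op32(p)                      \
                   : (sizeof(*(p)) != 4) ? (__unsupported_size(), 0)  \
                   : 0)

    int main(void)
    {
        int x = 41;
        printf("%d\n", op(&x)); /* 42; __unsupported_size not emitted */
        return 0;
    }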
2017-11-20 21:06:35 +01:00
Willy Tarreau b5f271555e MINOR: threads/build: atomic: replace the few inlines with macros
[ plock commit 61e255286ae32e83e1a3174dd7c49eda99880a8b ]

There are a few inlines such as pl_barrier() and pl_cpu_relax() which
are used a lot. Unfortunately, while building test code at -O0, inlining
is disabled, so these are called a lot, show up a lot in any profile,
are traced into when single-stepping with a debugger, etc., thus
polluting the landscape. Since they're single-asm statements, there is
no reason not to turn them into macros.
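
A simplified before/after of the change (pl_barrier() shown; the real
header covers more cases):

    /* before: a real function at -O0, hence calls in profiles and
     * single-stepping into it in a debugger */
    static inline void pl_barrier_inline(void)
    {
        __asm__ volatile("" ::: "memory");
    }

    /* after: expands in place even at -O0 */
    #define pl_barrier() __asm__ volatile("" ::: "memory")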

The result becomes fairly visible here at -O0 :

  $ size treelock.inline treelock.macro
     text    data     bss     dec     hex filename
    11431     692     656   12779    31eb treelock.inline
    10967     692     656   12315    301b treelock.macro

And it was verified that regularly optimized code remains strictly identical.
2017-11-20 21:06:35 +01:00
Willy Tarreau d0d8ba59d3 MINOR: threads/atomic: implement pl_bts() on non-x86
[ plock commit da17ba320aad3a8faf08e36fca604de9cad21fdd ]

This one was missing, it can be done using __sync_fetch_and_or().
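
A hedged sketch of that fallback (one fixed type shown; the real
macro dispatches on the operand size):

    /* Emulate bit-test-and-set: atomically OR the bit in and return
     * the previous value of that bit. */
    static inline unsigned long pl_bts_sketch(unsigned long *ptr,
                                              unsigned int bit)
    {
        return (__sync_fetch_and_or(ptr, 1UL << bit) >> bit) & 1UL;
    }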
2017-11-20 21:06:03 +01:00
Willy Tarreau 01b8398b9e MINOR: threads/atomic: implement pl_mb() in asm on x86
[ plock commit 44081ea493dd78dab48076980e881748e9b33db5 ]

Older compilers (e.g. gcc 3.4) don't provide __sync_synchronize() so let's
do it by hand on this platform.
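
A sketch of the idea (details differ from plock's actual macro):

    /* Full memory barrier by hand on x86 when __sync_synchronize()
     * is unavailable; other platforms keep the builtin. */
    #if defined(__x86_64__)
    #define pl_mb() __asm__ volatile("mfence" ::: "memory")
    #elif defined(__i386__)
    #define pl_mb() __asm__ volatile("lock; addl $0,(%%esp)" ::: "memory", "cc")
    #else
    #define pl_mb() __sync_synchronize()
    #endif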
2017-11-20 20:45:47 +01:00
Willy Tarreau 98409e34ca MINOR: threads/atomic: rename local variables in macros to avoid conflicts
[ plock commit bfac5887ebabb8ef753b0351f162265767eb219b ]

Local variable "t" was renamed "__pl_t" to limit the risk of conflicts
with existing variables in application code.
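
The problem being avoided, on a made-up macro (pl_swap_sketch is
illustrative only):

    /* With a temporary named "t", calling pl_swap_sketch(&t, &u)
     * would silently shadow the caller's "t" inside the macro and
     * corrupt the swap; the __pl_ prefix makes such a collision
     * practically impossible. */
    #define pl_swap_sketch(a, b) do {              \
        typeof(*(a)) __pl_t = *(a);                \
        *(a) = *(b);                               \
        *(b) = __pl_t;                             \
    } while (0)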
2017-11-20 20:45:38 +01:00
Emeric Brun 7122ab31b1 MINOR: threads: Add atomic-ops and plock includes in import dir
The atomic-ops header contains some low-level functions performing atomic
operations. These operations are used by the progressive locks (plock).
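
A hedged example of the kind of primitive this header provides (name
simplified; the real pl_xadd() dispatches on the operand size):

    /* Atomic fetch-and-add: returns the value prior to the addition,
     * the building block the plock reader/writer counters rely on. */
    static inline unsigned long xadd_sketch(unsigned long *ptr,
                                            unsigned long v)
    {
        return __sync_fetch_and_add(ptr, v);
    }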
2017-10-31 11:36:13 +01:00