atomics: some corrections to __sync builtins usage

We don't need to combine __sync_add_and_fetch with a memory barrier,
since these intrinsics are documented as using a full barrier already.

Use __sync_fetch_and_add instead of __sync_add_and_fetch; this gives
atomic_fetch_add() the correct return value (although we don't use it).

Use __sync_fetch_and_add to emulate atomic_load(). This should enforce
full barrier semantics better than a plain volatile read preceded by a
barrier. (This trick is stolen from the FreeBSD-based stdatomic.h
emulation.)
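
For illustration only (not part of this commit), a small standalone
sketch of the return-value difference between the two builtins and of
the add-zero load trick; the variable names are made up:

    #include <assert.h>

    int main(void)
    {
        unsigned long long v = 10;

        /* __sync_fetch_and_add returns the value *before* the addition,
         * which is what C11 atomic_fetch_add() is specified to return. */
        unsigned long long old = __sync_fetch_and_add(&v, 5);
        assert(old == 10 && v == 15);

        /* __sync_add_and_fetch returns the value *after* the addition;
         * the memory effect is the same, but the return value is the
         * wrong one for atomic_fetch_add(). */
        unsigned long long cur = __sync_add_and_fetch(&v, 5);
        assert(cur == 20 && v == 20);

        /* Adding 0 turns __sync_fetch_and_add into a read with full
         * barrier semantics, which is how atomic_load() is emulated
         * in the diff below. */
        assert(__sync_fetch_and_add(&v, 0) == 20);
        return 0;
    }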
Author: wm4, 2014-05-28 22:37:37 +02:00
commit 3238cd3dac
parent f289060259
1 changed file with 3 additions and 3 deletions


@@ -51,11 +51,11 @@ typedef struct { volatile unsigned long long v; } atomic_ullong;
 #elif HAVE_SYNC_BUILTINS
 
 #define atomic_load(p) \
-    (__sync_synchronize(), (p)->v)
+    __sync_fetch_and_add(&(p)->v, 0)
 #define atomic_store(p, val) \
-    ((p)->v = (val), __sync_synchronize())
+    (__sync_synchronize(), (p)->v = (val), __sync_synchronize())
 #define atomic_fetch_add(a, b) \
-    (__sync_add_and_fetch(&(a)->v, b), __sync_synchronize())
+    __sync_fetch_and_add(&(a)->v, b)
 
 #else
 # error "this should have been a configuration error, report a bug please"
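
For context (again, not part of the commit), a minimal standalone sketch
of how the emulation behaves after this change. The macro bodies mirror
the new lines above; the pthread harness and names such as counter and
worker are purely illustrative:

    #include <pthread.h>
    #include <stdio.h>

    typedef struct { volatile unsigned long long v; } atomic_ullong;

    #define atomic_load(p)         __sync_fetch_and_add(&(p)->v, 0)
    #define atomic_store(p, val)   (__sync_synchronize(), (p)->v = (val), __sync_synchronize())
    #define atomic_fetch_add(a, b) __sync_fetch_and_add(&(a)->v, b)

    static atomic_ullong counter;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int n = 0; n < 100000; n++)
            atomic_fetch_add(&counter, 1);  /* atomic add with a full barrier */
        return NULL;
    }

    int main(void)
    {
        atomic_store(&counter, 0);

        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);

        /* atomic_load() now expands to __sync_fetch_and_add(&v, 0): a read
         * that is itself a full barrier, rather than a barrier followed by
         * a plain volatile read. Prints 200000. */
        printf("%llu\n", atomic_load(&counter));
        return 0;
    }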