
[v5,06/18] atomics: add atomic_read_acquire and atomic_set_release

Message ID 1463196873-17737-7-git-send-email-cota@braap.org (mailing list archive)
State New, archived

Commit Message

Emilio Cota May 14, 2016, 3:34 a.m. UTC
When __atomic is not available, we use full memory barriers instead
of smp/wmb, since acquire/release barriers apply to all memory
operations and not just to loads/stores, respectively.

Signed-off-by: Emilio G. Cota <cota@braap.org>
---
 include/qemu/atomic.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

Comments

Pranith Kumar May 15, 2016, 10:22 a.m. UTC | #1
Hi Emilio,

On Fri, May 13, 2016 at 11:34 PM, Emilio G. Cota <cota@braap.org> wrote:
> When __atomic is not available, we use full memory barriers instead
> of smp/wmb, since acquire/release barriers apply to all memory
> operations and not just to loads/stores, respectively.
>

If it is not too late, can we rename these to
atomic_load_acquire()/atomic_store_release(), like in the Linux kernel?
Looks good either way.

Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
Emilio Cota May 16, 2016, 6:27 p.m. UTC | #2
On Sun, May 15, 2016 at 06:22:36 -0400, Pranith Kumar wrote:
> On Fri, May 13, 2016 at 11:34 PM, Emilio G. Cota <cota@braap.org> wrote:
> > When __atomic is not available, we use full memory barriers instead
> > of smp/wmb, since acquire/release barriers apply to all memory
> > operations and not just to loads/stores, respectively.
> 
> If it is not too late can we rename this to
> atomic_load_acquire()/atomic_store_release() like in the linux kernel?

I'd keep read/set just for consistency with the rest of the file.

BTW in the kernel, atomic_{read/set}_{acquire/release} are defined
in include/linux/atomic.h:

    #ifndef atomic_read_acquire
    #define  atomic_read_acquire(v)         smp_load_acquire(&(v)->counter)
    #endif

    #ifndef atomic_set_release
    #define  atomic_set_release(v, i)       smp_store_release(&(v)->counter, (i))
    #endif

The smp_load/store variants are called much more frequently, though.

Thanks,

		Emilio
Sergey Fedorov May 17, 2016, 4:53 p.m. UTC | #3
On 14/05/16 06:34, Emilio G. Cota wrote:
> When __atomic is not available, we use full memory barriers instead
> of smp/wmb, since acquire/release barriers apply to all memory
> operations and not just to loads/stores, respectively.
>
> Signed-off-by: Emilio G. Cota <cota@braap.org>
> ---
>  include/qemu/atomic.h | 27 +++++++++++++++++++++++++++
>

Update docs/atomics.txt? (The same for the previous patch.)

Kind regards,
Sergey
Paolo Bonzini May 17, 2016, 5:08 p.m. UTC | #4
On 17/05/2016 18:53, Sergey Fedorov wrote:
> On 14/05/16 06:34, Emilio G. Cota wrote:
>> > When __atomic is not available, we use full memory barriers instead
>> > of smp/wmb, since acquire/release barriers apply to all memory
>> > operations and not just to loads/stores, respectively.
>> >
>> > Signed-off-by: Emilio G. Cota <cota@braap.org>
>> > ---
>> >  include/qemu/atomic.h | 27 +++++++++++++++++++++++++++
>> >
> Update docs/atomics.txt? (The same for the previous patch.)

I'm okay with doing this separately.

Paolo
Patch

diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 6061a46..1766c22 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -56,6 +56,21 @@ 
     __atomic_store(ptr, &_val, __ATOMIC_RELAXED);     \
 } while(0)
 
+/* atomic read/set with acquire/release barrier */
+#define atomic_read_acquire(ptr)                      \
+    ({                                                \
+    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
+    typeof(*ptr) _val;                                \
+    __atomic_load(ptr, &_val, __ATOMIC_ACQUIRE);      \
+    _val;                                             \
+    })
+
+#define atomic_set_release(ptr, i)  do {              \
+    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *)); \
+    typeof(*ptr) _val = (i);                          \
+    __atomic_store(ptr, &_val, __ATOMIC_RELEASE);     \
+} while(0)
+
 /* Atomic RCU operations imply weak memory barriers */
 
 #define atomic_rcu_read(ptr)                          \
@@ -243,6 +258,18 @@ 
 #define atomic_read(ptr)       (*(__typeof__(*ptr) volatile*) (ptr))
 #define atomic_set(ptr, i)     ((*(__typeof__(*ptr) volatile*) (ptr)) = (i))
 
+/* atomic read/set with acquire/release barrier */
+#define atomic_read_acquire(ptr)    ({            \
+    typeof(*ptr) _val = atomic_read(ptr);         \
+    smp_mb();                                     \
+    _val;                                         \
+})
+
+#define atomic_set_release(ptr, i)  do {          \
+    smp_mb();                                     \
+    atomic_set(ptr, i);                           \
+} while (0)
+
 /**
  * atomic_rcu_read - reads a RCU-protected pointer to a local variable
  * into a RCU read-side critical section. The pointer can later be safely