From patchwork Sun Jan 10 14:18:28 2016
X-Patchwork-Submitter: "Michael S. Tsirkin" <mst@redhat.com>
X-Patchwork-Id: 7996171
Date: Sun, 10 Jan 2016 16:18:28 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org, linux-ia64@vger.kernel.org,
	linux-sh@vger.kernel.org, Peter Zijlstra,
	virtualization@lists.linux-foundation.org, "H. Peter Anvin",
	sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-s390@vger.kernel.org, Russell King - ARM Linux,
	Arnd Bergmann, x86@kernel.org, xen-devel@lists.xenproject.org,
	Ingo Molnar, linux-xtensa@linux-xtensa.org,
	user-mode-linux-devel@lists.sourceforge.net, Stefano Stabellini,
	adi-buildroot-devel@lists.sourceforge.net, Thomas Gleixner,
	linux-metag@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Andrew Cooper, Joe Perches, linuxppc-dev@lists.ozlabs.org,
	David Miller
Subject: [Xen-devel] [PATCH v3 14/41] asm-generic: add __smp_xxx wrappers
Message-ID: <1452426622-4471-15-git-send-email-mst@redhat.com>
In-Reply-To: <1452426622-4471-1-git-send-email-mst@redhat.com>
References: <1452426622-4471-1-git-send-email-mst@redhat.com>

On !SMP, most architectures define their barriers as compiler barriers.
On SMP, most need an actual barrier.

Make it possible to remove the code duplication for !SMP by defining
low-level __smp_xxx barriers which do not depend on the value of
CONFIG_SMP, then use them from asm-generic conditionally.

Besides reducing code duplication, these low-level APIs will also be
useful for virtualization, where a barrier is sometimes needed even if
!SMP, since we might be talking to another kernel on the same SMP
system. Both virtio and Xen drivers will benefit.

The smp_xxx variants should use the __smp_xxx ones or barrier(),
depending on CONFIG_SMP, identically for all architectures.

We keep ifndef guards around them for now - once/if all architectures
are converted to use the generic code, we'll be able to remove these.
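
To illustrate the intended pattern (hypothetical architecture code, not
part of this patch; the arch name and the "fence" instruction are made
up for illustration):

/* arch/foo/include/asm/barrier.h -- hypothetical */
#define __smp_mb()	asm volatile("fence" ::: "memory")

#include <asm-generic/barrier.h>

/*
 * asm-generic/barrier.h then provides the rest without per-arch
 * CONFIG_SMP ifdefs:
 *	smp_mb() -> __smp_mb()	if CONFIG_SMP
 *	smp_mb() -> barrier()	if !CONFIG_SMP
 */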
Suggested-by: Peter Zijlstra
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann
---
 include/asm-generic/barrier.h | 91 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 82 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 987b2e0..8752964 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -54,22 +54,38 @@
 #define read_barrier_depends()	do { } while (0)
 #endif
 
+#ifndef __smp_mb
+#define __smp_mb()	mb()
+#endif
+
+#ifndef __smp_rmb
+#define __smp_rmb()	rmb()
+#endif
+
+#ifndef __smp_wmb
+#define __smp_wmb()	wmb()
+#endif
+
+#ifndef __smp_read_barrier_depends
+#define __smp_read_barrier_depends()	read_barrier_depends()
+#endif
+
 #ifdef CONFIG_SMP
 
 #ifndef smp_mb
-#define smp_mb()	mb()
+#define smp_mb()	__smp_mb()
 #endif
 
 #ifndef smp_rmb
-#define smp_rmb()	rmb()
+#define smp_rmb()	__smp_rmb()
 #endif
 
 #ifndef smp_wmb
-#define smp_wmb()	wmb()
+#define smp_wmb()	__smp_wmb()
 #endif
 
 #ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends()	read_barrier_depends()
+#define smp_read_barrier_depends()	__smp_read_barrier_depends()
 #endif
 
 #else	/* !CONFIG_SMP */
@@ -92,23 +108,78 @@
 
 #endif	/* CONFIG_SMP */
 
+#ifndef __smp_store_mb
+#define __smp_store_mb(var, value)  do { WRITE_ONCE(var, value); __smp_mb(); } while (0)
+#endif
+
+#ifndef __smp_mb__before_atomic
+#define __smp_mb__before_atomic()	__smp_mb()
+#endif
+
+#ifndef __smp_mb__after_atomic
+#define __smp_mb__after_atomic()	__smp_mb()
+#endif
+
+#ifndef __smp_store_release
+#define __smp_store_release(p, v)					\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	__smp_mb();							\
+	WRITE_ONCE(*p, v);						\
+} while (0)
+#endif
+
+#ifndef __smp_load_acquire
+#define __smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = READ_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	__smp_mb();							\
+	___p1;								\
+})
+#endif
+
+#ifdef CONFIG_SMP
+
+#ifndef smp_store_mb
+#define smp_store_mb(var, value)  __smp_store_mb(var, value)
+#endif
+
+#ifndef smp_mb__before_atomic
+#define smp_mb__before_atomic()	__smp_mb__before_atomic()
+#endif
+
+#ifndef smp_mb__after_atomic
+#define smp_mb__after_atomic()	__smp_mb__after_atomic()
+#endif
+
+#ifndef smp_store_release
+#define smp_store_release(p, v) __smp_store_release(p, v)
+#endif
+
+#ifndef smp_load_acquire
+#define smp_load_acquire(p) __smp_load_acquire(p)
+#endif
+
+#else	/* !CONFIG_SMP */
+
 #ifndef smp_store_mb
-#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); smp_mb(); } while (0)
+#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); barrier(); } while (0)
 #endif
 
 #ifndef smp_mb__before_atomic
-#define smp_mb__before_atomic()	smp_mb()
+#define smp_mb__before_atomic()	barrier()
 #endif
 
 #ifndef smp_mb__after_atomic
-#define smp_mb__after_atomic()	smp_mb()
+#define smp_mb__after_atomic()	barrier()
 #endif
 
 #ifndef smp_store_release
 #define smp_store_release(p, v)						\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	barrier();							\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 #endif
@@ -118,10 +189,12 @@ do {									\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	smp_mb();							\
+	barrier();							\
 	___p1;								\
 })
 #endif
 
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
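
For the virtualization case described in the log, a minimal sketch of a
driver that needs a real barrier even in a !CONFIG_SMP guest (the ring
layout and function names below are hypothetical, for illustration
only):

#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

#define RING_SIZE	256			/* hypothetical */

struct guest_ring {				/* hypothetical guest<->host ring */
	u32 slot[RING_SIZE];
	u16 avail_idx;				/* guest writes, host reads */
};

static void ring_publish(struct guest_ring *r, u16 idx, u32 val)
{
	r->slot[idx % RING_SIZE] = val;		/* fill the slot */
	/*
	 * The host may observe these stores from another CPU even if
	 * this guest kernel is UP, so smp_wmb() (a compiler barrier
	 * when !CONFIG_SMP) is not enough; __smp_wmb() is a real
	 * barrier regardless of CONFIG_SMP.
	 */
	__smp_wmb();
	WRITE_ONCE(r->avail_idx, idx + 1);	/* publish to the host */
}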