From patchwork Thu Jun 20 14:21:22 2013
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon <will.deacon@arm.com>
Subject: [PATCH v2 08/12] ARM: spinlock: use inner-shareable dsb variant
 prior to sev instruction
Date: Thu, 20 Jun 2013 15:21:22 +0100
Message-Id: <1371738086-6707-9-git-send-email-will.deacon@arm.com>
In-Reply-To: <1371738086-6707-1-git-send-email-will.deacon@arm.com>
References: <1371738086-6707-1-git-send-email-will.deacon@arm.com>

When unlocking a spinlock, we use the sev instruction to signal other
CPUs waiting on the lock. Since sev is not a memory access instruction,
we require a dsb in order to ensure that the sev is not issued ahead
of the store placing the lock in an unlocked state. However, as sev is
only concerned with other processors in a multiprocessor system, we can
restrict the scope of the preceding dsb to the inner-shareable domain.
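For context, the v7 unlock path sequences the unlocking store before
dsb_sev() roughly as follows (a simplified sketch of the ticket-lock
code of this era; the exact field names vary between kernel versions):

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	smp_mb();		/* order the critical section before the release */
	lock->tickets.owner++;	/* the store that marks the lock as free */
	dsb_sev();		/* dsb makes the store visible, then sev wakes waiters */
}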
Furthermore, we can restrict the scope to consider only stores, since
there are no independent loads on the unlock path.

A side-effect of this change is that a spin_unlock operation no longer
forces completion of pending TLB invalidation, something which we rely
on when unlocking runqueues to ensure that CPU migration during TLB
maintenance routines doesn't cause us to continue before the operation
has completed.

This patch adds the -ishst suffix to the ARMv7 definition of dsb_sev()
and adds an inner-shareable dsb to the context-switch path when running
a preemptible, SMP, v7 kernel.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/spinlock.h  |  2 +-
 arch/arm/include/asm/switch_to.h | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index 6220e9f..5a0261e 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -46,7 +46,7 @@ static inline void dsb_sev(void)
 {
 #if __LINUX_ARM_ARCH__ >= 7
 	__asm__ __volatile__ (
-		"dsb\n"
+		"dsb ishst\n"
 		SEV
 	);
 #else
diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h
index fa09e6b..c99e259 100644
--- a/arch/arm/include/asm/switch_to.h
+++ b/arch/arm/include/asm/switch_to.h
@@ -4,6 +4,16 @@
 #include <linux/thread_info.h>
 
 /*
+ * For v7 SMP cores running a preemptible kernel we may be pre-empted
+ * during a TLB maintenance operation, so execute an inner-shareable dsb
+ * to ensure that the maintenance completes in case we migrate to another
+ * CPU.
+ */
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP) && defined(CONFIG_CPU_V7)
+#define finish_arch_switch(prev)	dsb(ish)
+#endif
+
+/*
  * switch_to(prev, next) should switch from task `prev' to `next'
  * `prev' will never be the same as `next'. schedule() itself
  * contains the memory barrier to tell GCC not to cache `current'.
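For reference, the finish_arch_switch() hook defined above is invoked
by the scheduler core once a switch has completed, on the CPU that is
resuming the incoming task; architectures that provide no definition
get a no-op. A simplified sketch of the 3.10-era plumbing (bookkeeping
elided):

/* kernel/sched/sched.h: fallback when the arch defines no hook */
#ifndef finish_arch_switch
# define finish_arch_switch(prev)	do { } while (0)
#endif

/* kernel/sched/core.c (simplified): finish_task_switch() runs after
 * the switch, so the dsb(ish) above completes any TLB maintenance
 * begun on the previous CPU before a migration. */
static void finish_task_switch(struct rq *rq, struct task_struct *prev)
{
	/* ... runqueue and irq bookkeeping elided ... */
	finish_arch_switch(prev);
	/* ... */
}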