From patchwork Tue Jul 23 11:09:18 2013
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon
Subject: [PATCH v3 06/12] ARM: tlb: reduce scope of barrier domains for TLB invalidation
Date: Tue, 23 Jul 2013 12:09:18 +0100
Message-Id: <1374577764-32480-7-git-send-email-will.deacon@arm.com>
In-Reply-To: <1374577764-32480-1-git-send-email-will.deacon@arm.com>
References: <1374577764-32480-1-git-send-email-will.deacon@arm.com>

Our TLB invalidation routines may require a barrier before the
maintenance (in order to ensure pending page table writes are visible
to the hardware walker) and barriers afterwards (in order to ensure
completion of the maintenance and visibility in the instruction
stream).

Whilst this is expensive, the cost can be reduced somewhat by reducing
the scope of the barrier instructions:

 - The barrier before only needs to apply to stores (pte writes)
 - Local ops are required only to affect the non-shareable domain
 - Global ops are required only to affect the inner-shareable domain

This patch makes these changes for the TLB flushing code.
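To make the new pattern concrete, here is a sketch (illustration only,
not part of the patch; the function name is hypothetical, and the
dsb(option) macro shown is a simplified form of the stringifying
barrier macro this series makes available in
arch/arm/include/asm/barrier.h):

/*
 * Sketch only: simplified ARMv7 dsb(option) macro; "nshst" et al.
 * become the option field of the emitted DSB instruction.
 */
#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")

/*
 * Hypothetical local flush following the new pattern: a purely local
 * operation never needs to leave the non-shareable domain.
 */
static inline void example_local_flush_kernel_page(unsigned long kaddr)
{
	dsb(nshst);	/* pte stores visible to this CPU's table walker */
	asm volatile("mcr p15, 0, %0, c8, c7, 1" : : "r" (kaddr)); /* TLBIMVA */
	dsb(nsh);	/* invalidation complete on this CPU only */
	asm volatile("isb" : : : "memory"); /* resample translations in the pipeline */
}

Compared with the old bare dsb(), which expands to a full-system DSB SY
ordering loads and stores alike, dsb(nshst) waits only for stores and
only within the non-shareable domain.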
Reviewed-by: Catalin Marinas
Signed-off-by: Will Deacon
---
 arch/arm/include/asm/tlbflush.h | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index 0fc1272..9297685 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -335,13 +335,13 @@ static inline void local_flush_tlb_all(void)
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(nshst);
 
 	__local_flush_tlb_all();
 	tlb_op(TLB_V7_UIS_FULL, "c8, c7, 0", zero);
 
 	if (tlb_flag(TLB_BARRIER)) {
-		dsb();
+		dsb(nsh);
 		isb();
 	}
 }
@@ -352,13 +352,13 @@ static inline void __flush_tlb_all(void)
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(ishst);
 
 	__local_flush_tlb_all();
 	tlb_op(TLB_V7_UIS_FULL, "c8, c3, 0", zero);
 
 	if (tlb_flag(TLB_BARRIER)) {
-		dsb();
+		dsb(ish);
 		isb();
 	}
 }
@@ -388,13 +388,13 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(nshst);
 
 	__local_flush_tlb_mm(mm);
 	tlb_op(TLB_V7_UIS_ASID, "c8, c7, 2", asid);
 
 	if (tlb_flag(TLB_BARRIER))
-		dsb();
+		dsb(nsh);
 }
 
 static inline void __flush_tlb_mm(struct mm_struct *mm)
@@ -402,7 +402,7 @@ static inline void __flush_tlb_mm(struct mm_struct *mm)
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(ishst);
 
 	__local_flush_tlb_mm(mm);
 #ifdef CONFIG_ARM_ERRATA_720789
@@ -412,7 +412,7 @@ static inline void __flush_tlb_mm(struct mm_struct *mm)
 #endif
 
 	if (tlb_flag(TLB_BARRIER))
-		dsb();
+		dsb(ish);
 }
 
 static inline void
@@ -445,13 +445,13 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 	uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(nshst);
 
 	__local_flush_tlb_page(vma, uaddr);
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", uaddr);
 
 	if (tlb_flag(TLB_BARRIER))
-		dsb();
+		dsb(nsh);
 }
 
 static inline void
@@ -462,7 +462,7 @@ __flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 	uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(ishst);
 
 	__local_flush_tlb_page(vma, uaddr);
 #ifdef CONFIG_ARM_ERRATA_720789
@@ -472,7 +472,7 @@ __flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 #endif
 
 	if (tlb_flag(TLB_BARRIER))
-		dsb();
+		dsb(ish);
 }
 
 static inline void __local_flush_tlb_kernel_page(unsigned long kaddr)
@@ -498,13 +498,13 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
 	kaddr &= PAGE_MASK;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(nshst);
 
 	__local_flush_tlb_kernel_page(kaddr);
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", kaddr);
 
 	if (tlb_flag(TLB_BARRIER)) {
-		dsb();
+		dsb(nsh);
 		isb();
 	}
 }
@@ -516,13 +516,13 @@ static inline void __flush_tlb_kernel_page(unsigned long kaddr)
 	kaddr &= PAGE_MASK;
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(ishst);
 
 	__local_flush_tlb_kernel_page(kaddr);
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c3, 1", kaddr);
 
 	if (tlb_flag(TLB_BARRIER)) {
-		dsb();
+		dsb(ish);
 		isb();
 	}
 }
@@ -567,7 +567,7 @@ static inline void dummy_flush_tlb_a15_erratum(void)
 	 * Dummy TLBIMVAIS. Using the unmapped address 0 and ASID 0.
 	 */
 	asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
-	dsb();
+	dsb(ish);
 }
 #else
 static inline void dummy_flush_tlb_a15_erratum(void)
@@ -596,7 +596,7 @@ static inline void flush_pmd_entry(void *pmd)
 	tlb_l2_op(TLB_L2CLEAN_FR, "c15, c9, 1 @ L2 flush_pmd", pmd);
 
 	if (tlb_flag(TLB_WB))
-		dsb();
+		dsb(ishst);
 }
 
 static inline void clean_pmd_entry(void *pmd)
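For contrast with the local sketch earlier, the broadcast counterpart
would look like this (again a hypothetical sketch, modelled on the
__flush_tlb_kernel_page() hunk above; c8, c3, 1 is the TLBIMVAIS
encoding used by the TLB_V7_UIS_PAGE global path, and the dsb(option)
macro is the one sketched before the diff):

/*
 * Hypothetical global counterpart: broadcast maintenance must publish
 * the pte stores to, and await completion from, every CPU in the
 * inner-shareable domain rather than just the local core.
 */
static inline void example_flush_kernel_page(unsigned long kaddr)
{
	dsb(ishst);	/* pte stores visible to all inner-shareable walkers */
	asm volatile("mcr p15, 0, %0, c8, c3, 1" : : "r" (kaddr)); /* TLBIMVAIS */
	dsb(ish);	/* broadcast invalidation complete everywhere */
	asm volatile("isb" : : : "memory"); /* resynchronise the instruction stream */
}

In both sketches the saving comes from weakening the barrier options,
not from removing any barriers: broadcast maintenance still pays for
inner-shareable scope, while purely local operations no longer do.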