From patchwork Wed Aug 21 14:58:37 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11107221
From: Christoph Hellwig
To: Palmer Dabbelt, Paul Walmsley
Cc: Atish Patra, linux-riscv@lists.infradead.org
Subject: [PATCH 6/6] riscv: move the TLB flush logic out of line
Date: Wed, 21 Aug 2019 23:58:37 +0900
Message-Id: <20190821145837.3686-7-hch@lst.de>
In-Reply-To: <20190821145837.3686-1-hch@lst.de>
References: <20190821145837.3686-1-hch@lst.de>
X-Mailer: git-send-email 2.20.1

The TLB flush logic is going to become more complex.  Start moving it
out of line.
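
[Editor's note: the sketch below is illustrative commentary, not part of the
patch.  It mirrors the __sbi_tlb_flush_range() helper added in the diff, with
explanatory comments added, to show what a remote TLB fence on RISC-V does.]

```c
/*
 * Illustrative sketch only, mirroring the helper this patch adds.
 * Logical CPU ids and hart ids are not guaranteed to match, so the
 * cpumask is first translated into a hart mask before asking the SBI
 * firmware to execute sfence.vma on the remote harts.  A NULL mask
 * with size -1 (as in flush_tlb_all()) means "all harts, full flush".
 */
static void sketch_remote_fence(struct cpumask *cmask, unsigned long start,
				unsigned long size)
{
	struct cpumask hmask;

	riscv_cpuid_to_hartid_mask(cmask, &hmask);	/* cpu ids -> hart ids */
	sbi_remote_sfence_vma(hmask.bits, start, size);	/* ecall into firmware */
}
```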
Signed-off-by: Christoph Hellwig
Reviewed-by: Atish Patra
Signed-off-by: Paul Walmsley
---
 arch/riscv/include/asm/tlbflush.h | 37 ++++++-------------------------
 arch/riscv/mm/Makefile            |  3 +++
 arch/riscv/mm/tlbflush.c          | 35 +++++++++++++++++++++++++++++
 3 files changed, 45 insertions(+), 30 deletions(-)
 create mode 100644 arch/riscv/mm/tlbflush.c

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index df31fe2ed09c..075a784c66c5 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -25,8 +25,13 @@ static inline void local_flush_tlb_page(unsigned long addr)
 	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
 }
 
-#ifndef CONFIG_SMP
-
+#ifdef CONFIG_SMP
+void flush_tlb_all(void);
+void flush_tlb_mm(struct mm_struct *mm);
+void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
+void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+	unsigned long end);
+#else /* CONFIG_SMP */
 #define flush_tlb_all() local_flush_tlb_all()
 #define flush_tlb_page(vma, addr) local_flush_tlb_page(addr)
 
@@ -37,34 +42,6 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 }
 
 #define flush_tlb_mm(mm) flush_tlb_all()
-
-#else /* CONFIG_SMP */
-
-#include <asm/sbi.h>
-
-static inline void remote_sfence_vma(struct cpumask *cmask, unsigned long start,
-				     unsigned long size)
-{
-	struct cpumask hmask;
-
-	riscv_cpuid_to_hartid_mask(cmask, &hmask);
-	sbi_remote_sfence_vma(hmask.bits, start, size);
-}
-
-#define flush_tlb_all() sbi_remote_sfence_vma(NULL, 0, -1)
-
-#define flush_tlb_range(vma, start, end) \
-	remote_sfence_vma(mm_cpumask((vma)->vm_mm), start, (end) - (start))
-
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long addr)
-{
-	flush_tlb_range(vma, addr, addr + PAGE_SIZE);
-}
-
-#define flush_tlb_mm(mm) \
-	remote_sfence_vma(mm_cpumask(mm), 0, -1)
-
 #endif /* CONFIG_SMP */
 
 /* Flush a range of kernel pages */
diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
index 74055e1d6f21..9d9a17335686 100644
--- a/arch/riscv/mm/Makefile
+++ b/arch/riscv/mm/Makefile
@@ -13,4 +13,7 @@ obj-y += cacheflush.o
 obj-y += context.o
 obj-y += sifive_l2_cache.o
 
+ifeq ($(CONFIG_MMU),y)
+obj-$(CONFIG_SMP) += tlbflush.o
+endif
 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
new file mode 100644
index 000000000000..df93b26f1b9d
--- /dev/null
+++ b/arch/riscv/mm/tlbflush.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <asm/sbi.h>
+
+void flush_tlb_all(void)
+{
+	sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
+				  unsigned long size)
+{
+	struct cpumask hmask;
+
+	riscv_cpuid_to_hartid_mask(cmask, &hmask);
+	sbi_remote_sfence_vma(hmask.bits, start, size);
+}
+
+void flush_tlb_mm(struct mm_struct *mm)
+{
+	__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+}
+
+void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+{
+	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+}
+
+void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+		     unsigned long end)
+{
+	__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+}
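
[Editor's note: a usage sketch, not part of the patch.  The caller below is
hypothetical, written to show that generic mm code reaches these helpers
through the asm/tlbflush.h API, so on SMP kernels a page flush is now a
regular out-of-line call into arch/riscv/mm/tlbflush.c instead of
header-expanded SBI plumbing at every call site.]

```c
#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Hypothetical caller for illustration: after updating a PTE, the
 * stale translation for that address must be flushed.  With this
 * patch, flush_tlb_page() is an out-of-line function on SMP kernels.
 */
static void example_set_and_flush(struct vm_area_struct *vma,
				  unsigned long addr, pte_t *ptep, pte_t pte)
{
	set_pte_at(vma->vm_mm, addr, ptep, pte);
	flush_tlb_page(vma, addr);
}
```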