From patchwork Thu Oct 19 14:01:48 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13429267
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Will Deacon, "Aneesh Kumar K. V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Lad Prabhakar, Alexandre Ghiti, Andrew Jones
Subject: [PATCH v5 1/4] riscv: Improve tlb_flush()
Date: Thu, 19 Oct 2023 16:01:48 +0200
Message-Id: <20231019140151.21629-2-alexghiti@rivosinc.com>
In-Reply-To: <20231019140151.21629-1-alexghiti@rivosinc.com>

For now, tlb_flush() simply calls flush_tlb_mm(), which results in a
flush of the whole TLB. So let's use the mmu_gather fields to provide a
more fine-grained flush of the TLB.
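As a rough sketch of the intent (illustration only, not part of the
patch; the struct below is a hypothetical stand-in for the kernel's
struct mmu_gather), the unmap path records the range and page size, and
tlb_flush() only falls back to a full-mm flush when it really has to:

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the fields of struct mmu_gather */
	struct mmu_gather_model {
		bool fullmm;              /* whole address space is going away */
		bool need_flush_all;      /* caller asked for a full flush */
		unsigned long start;      /* range gathered by the unmap path */
		unsigned long end;
		unsigned long unmap_size; /* what tlb_get_unmap_size() would return */
	};

	static void model_tlb_flush(const struct mmu_gather_model *tlb)
	{
		if (tlb->fullmm || tlb->need_flush_all)
			printf("flush_tlb_mm(): flush the whole address space\n");
		else
			printf("flush_tlb_mm_range(0x%lx, 0x%lx, stride 0x%lx)\n",
			       tlb->start, tlb->end, tlb->unmap_size);
	}

	int main(void)
	{
		/* munmap() of one 2 MiB hugepage: only that range is flushed */
		struct mmu_gather_model tlb = {
			.start = 0x200000, .end = 0x400000, .unmap_size = 0x200000,
		};
		model_tlb_flush(&tlb);
		return 0;
	}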
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Reviewed-by: Samuel Holland
Tested-by: Lad Prabhakar # On RZ/Five SMARC
---
 arch/riscv/include/asm/tlb.h      | 8 +++++++-
 arch/riscv/include/asm/tlbflush.h | 3 +++
 arch/riscv/mm/tlbflush.c          | 7 +++++++
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
index 120bcf2ed8a8..1eb5682b2af6 100644
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -15,7 +15,13 @@ static void tlb_flush(struct mmu_gather *tlb);
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	flush_tlb_mm(tlb->mm);
+#ifdef CONFIG_MMU
+	if (tlb->fullmm || tlb->need_flush_all)
+		flush_tlb_mm(tlb->mm);
+	else
+		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end,
+				   tlb_get_unmap_size(tlb));
+#endif
 }
 
 #endif /* _ASM_RISCV_TLB_H */
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index a09196f8de68..f5c4fb0ae642 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -32,6 +32,8 @@ static inline void local_flush_tlb_page(unsigned long addr)
 #if defined(CONFIG_SMP) && defined(CONFIG_MMU)
 void flush_tlb_all(void);
 void flush_tlb_mm(struct mm_struct *mm);
+void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+			unsigned long end, unsigned int page_size);
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end);
@@ -52,6 +54,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 }
 
 #define flush_tlb_mm(mm) flush_tlb_all()
+#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
 #endif /* !CONFIG_SMP || !CONFIG_MMU */
 
 /* Flush a range of kernel pages */
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 77be59aadc73..fa03289853d8 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -132,6 +132,13 @@ void flush_tlb_mm(struct mm_struct *mm)
 	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
+void flush_tlb_mm_range(struct mm_struct *mm,
+			unsigned long start, unsigned long end,
+			unsigned int page_size)
+{
+	__flush_tlb_range(mm, start, end - start, page_size);
+}
+
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
 	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);

From patchwork Thu Oct 19 14:01:49 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13429269
V" , Andrew Morton , Nick Piggin , Peter Zijlstra , Mayuresh Chitale , Vincent Chen , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Samuel Holland , Lad Prabhakar Cc: Alexandre Ghiti Subject: [PATCH v5 2/4] riscv: Improve flush_tlb_range() for hugetlb pages Date: Thu, 19 Oct 2023 16:01:49 +0200 Message-Id: <20231019140151.21629-3-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231019140151.21629-1-alexghiti@rivosinc.com> References: <20231019140151.21629-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231019_070414_859006_7E7A0AB4 X-CRM114-Status: GOOD ( 15.42 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org flush_tlb_range() uses a fixed stride of PAGE_SIZE and in its current form, when a hugetlb mapping needs to be flushed, flush_tlb_range() flushes the whole tlb: so set a stride of the size of the hugetlb mapping in order to only flush the hugetlb mapping. However, if the hugepage is a NAPOT region, all PTEs that constitute this mapping must be invalidated, so the stride size must actually be the size of the PTE. Note that THPs are directly handled by flush_pmd_tlb_range(). Signed-off-by: Alexandre Ghiti Reviewed-by: Samuel Holland Tested-by: Lad Prabhakar # --- arch/riscv/mm/tlbflush.c | 31 ++++++++++++++++++++++++++++++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c index fa03289853d8..5933744df91a 100644 --- a/arch/riscv/mm/tlbflush.c +++ b/arch/riscv/mm/tlbflush.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include @@ -147,7 +148,35 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr) void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end) { - __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE); + unsigned long stride_size; + + if (!is_vm_hugetlb_page(vma)) { + stride_size = PAGE_SIZE; + } else { + stride_size = huge_page_size(hstate_vma(vma)); + +#ifdef CONFIG_RISCV_ISA_SVNAPOT + /* + * As stated in the privileged specification, every PTE in a + * NAPOT region must be invalidated, so reset the stride in that + * case. 
Signed-off-by: Alexandre Ghiti
Reviewed-by: Samuel Holland
Tested-by: Lad Prabhakar #
---
 arch/riscv/mm/tlbflush.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index fa03289853d8..5933744df91a 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -3,6 +3,7 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/sched.h>
+#include <linux/hugetlb.h>
 #include <asm/sbi.h>
 #include <asm/mmu_context.h>
 
@@ -147,7 +148,35 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	unsigned long stride_size;
+
+	if (!is_vm_hugetlb_page(vma)) {
+		stride_size = PAGE_SIZE;
+	} else {
+		stride_size = huge_page_size(hstate_vma(vma));
+
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+		/*
+		 * As stated in the privileged specification, every PTE in a
+		 * NAPOT region must be invalidated, so reset the stride in that
+		 * case.
+		 */
+		if (has_svnapot()) {
+			if (stride_size >= PGDIR_SIZE)
+				stride_size = PGDIR_SIZE;
+			else if (stride_size >= P4D_SIZE)
+				stride_size = P4D_SIZE;
+			else if (stride_size >= PUD_SIZE)
+				stride_size = PUD_SIZE;
+			else if (stride_size >= PMD_SIZE)
+				stride_size = PMD_SIZE;
+			else
+				stride_size = PAGE_SIZE;
+		}
+#endif
+	}
+
+	__flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,

From patchwork Thu Oct 19 14:01:50 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13429277
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Will Deacon, "Aneesh Kumar K. V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Alexandre Ghiti, Andrew Jones, Lad Prabhakar
Subject: [PATCH v5 3/4] riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb
Date: Thu, 19 Oct 2023 16:01:50 +0200
Message-Id: <20231019140151.21629-4-alexghiti@rivosinc.com>
In-Reply-To: <20231019140151.21629-1-alexghiti@rivosinc.com>

Currently, when the range to flush covers more than one page (a 4K page
or a hugepage), __flush_tlb_range() flushes the whole TLB. Flushing the
whole TLB comes at a greater cost than flushing a single entry, so we
should flush single entries up to a certain threshold such that:

  threshold * cost of flushing a single entry < cost of flushing the whole TLB
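For illustration (not part of the patch), the resulting policy in plain
C; the threshold of 64 mirrors the tlb_flush_all_threshold default
introduced below, and the relative-cost claim is an assumption that
would need benchmarking on real hardware:

	#include <stdio.h>

	#define THRESHOLD 64UL  /* mirrors the tlb_flush_all_threshold default */

	/* Flush per entry while that is presumed cheaper than a full flush */
	static void model_flush_range(unsigned long start, unsigned long size,
				      unsigned long stride)
	{
		unsigned long nr = (size + stride - 1) / stride; /* DIV_ROUND_UP */

		if (nr > THRESHOLD) {
			printf("flush the whole TLB (%lu entries is too many)\n", nr);
			return;
		}
		for (unsigned long i = 0; i < nr; i++)
			printf("sfence.vma on 0x%lx\n", start + i * stride);
	}

	int main(void)
	{
		model_flush_range(0x10000, 3 * 4096, 4096);  /* 3 per-page flushes */
		model_flush_range(0x10000, 65 * 4096, 4096); /* one full flush */
		return 0;
	}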
Co-developed-by: Mayuresh Chitale
Signed-off-by: Mayuresh Chitale
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Tested-by: Lad Prabhakar # On RZ/Five SMARC
Reviewed-by: Samuel Holland
Tested-by: Samuel Holland
---
 arch/riscv/include/asm/sbi.h      |   3 -
 arch/riscv/include/asm/tlbflush.h |   3 +
 arch/riscv/kernel/sbi.c           |  32 +++------
 arch/riscv/mm/tlbflush.c          | 115 +++++++++++++++---------------
 4 files changed, 72 insertions(+), 81 deletions(-)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 12dfda6bb924..0892f4421bc4 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -280,9 +280,6 @@ void sbi_set_timer(uint64_t stime_value);
 void sbi_shutdown(void);
 void sbi_send_ipi(unsigned int cpu);
 int sbi_remote_fence_i(const struct cpumask *cpu_mask);
-int sbi_remote_sfence_vma(const struct cpumask *cpu_mask,
-			  unsigned long start,
-			  unsigned long size);
 
 int sbi_remote_sfence_vma_asid(const struct cpumask *cpu_mask,
 			       unsigned long start,
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index f5c4fb0ae642..170a49c531c6 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -11,6 +11,9 @@
 #include <asm/smp.h>
 #include <asm/errata_list.h>
 
+#define FLUSH_TLB_MAX_SIZE	((unsigned long)-1)
+#define FLUSH_TLB_NO_ASID	((unsigned long)-1)
+
 #ifdef CONFIG_MMU
 extern unsigned long asid_mask;
diff --git a/arch/riscv/kernel/sbi.c b/arch/riscv/kernel/sbi.c
index c672c8ba9a2a..5a62ed1da453 100644
--- a/arch/riscv/kernel/sbi.c
+++ b/arch/riscv/kernel/sbi.c
@@ -11,6 +11,7 @@
 #include <linux/reboot.h>
 #include <asm/sbi.h>
 #include <asm/smp.h>
+#include <asm/tlbflush.h>
 
 /* default SBI version is 0.1 */
 unsigned long sbi_spec_version __ro_after_init = SBI_SPEC_VERSION_DEFAULT;
@@ -376,32 +377,15 @@ int sbi_remote_fence_i(const struct cpumask *cpu_mask)
 }
 EXPORT_SYMBOL(sbi_remote_fence_i);
 
-/**
- * sbi_remote_sfence_vma() - Execute SFENCE.VMA instructions on given remote
- *			     harts for the specified virtual address range.
- * @cpu_mask: A cpu mask containing all the target harts.
- * @start: Start of the virtual address
- * @size: Total size of the virtual address range.
- *
- * Return: 0 on success, appropriate linux error code otherwise.
- */
-int sbi_remote_sfence_vma(const struct cpumask *cpu_mask,
-			  unsigned long start,
-			  unsigned long size)
-{
-	return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
-			    cpu_mask, start, size, 0, 0);
-}
-EXPORT_SYMBOL(sbi_remote_sfence_vma);
-
 /**
  * sbi_remote_sfence_vma_asid() - Execute SFENCE.VMA instructions on given
- * remote harts for a virtual address range belonging to a specific ASID.
+ * remote harts for a virtual address range belonging to a specific ASID or not.
  *
  * @cpu_mask: A cpu mask containing all the target harts.
  * @start: Start of the virtual address
 * @size: Total size of the virtual address range.
- * @asid: The value of address space identifier (ASID).
+ * @asid: The value of address space identifier (ASID), or FLUSH_TLB_NO_ASID
+ *	  for flushing all address spaces.
 *
 * Return: 0 on success, appropriate linux error code otherwise.
 */
@@ -410,8 +394,12 @@ int sbi_remote_sfence_vma_asid(const struct cpumask *cpu_mask,
 			       unsigned long size,
 			       unsigned long asid)
 {
-	return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
-			    cpu_mask, start, size, asid, 0);
+	if (asid == FLUSH_TLB_NO_ASID)
+		return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
+				    cpu_mask, start, size, 0, 0);
+	else
+		return __sbi_rfence(SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
+				    cpu_mask, start, size, asid, 0);
 }
 EXPORT_SYMBOL(sbi_remote_sfence_vma_asid);
 
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 5933744df91a..c27ba720e35f 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -9,28 +9,50 @@
 
 static inline void local_flush_tlb_all_asid(unsigned long asid)
 {
-	__asm__ __volatile__ ("sfence.vma x0, %0"
-			:
-			: "r" (asid)
-			: "memory");
+	if (asid != FLUSH_TLB_NO_ASID)
+		__asm__ __volatile__ ("sfence.vma x0, %0"
+				:
+				: "r" (asid)
+				: "memory");
+	else
+		local_flush_tlb_all();
 }
 
 static inline void local_flush_tlb_page_asid(unsigned long addr,
 		unsigned long asid)
 {
-	__asm__ __volatile__ ("sfence.vma %0, %1"
-			:
-			: "r" (addr), "r" (asid)
-			: "memory");
+	if (asid != FLUSH_TLB_NO_ASID)
+		__asm__ __volatile__ ("sfence.vma %0, %1"
+				:
+				: "r" (addr), "r" (asid)
+				: "memory");
+	else
+		local_flush_tlb_page(addr);
 }
 
-static inline void local_flush_tlb_range(unsigned long start,
-		unsigned long size, unsigned long stride)
+/*
+ * Flush entire TLB if number of entries to be flushed is greater
+ * than the threshold below.
+ */
+static unsigned long tlb_flush_all_threshold __read_mostly = 64;
+
+static void local_flush_tlb_range_threshold_asid(unsigned long start,
+						 unsigned long size,
+						 unsigned long stride,
+						 unsigned long asid)
 {
-	if (size <= stride)
-		local_flush_tlb_page(start);
-	else
-		local_flush_tlb_all();
+	u16 nr_ptes_in_range = DIV_ROUND_UP(size, stride);
+	int i;
+
+	if (nr_ptes_in_range > tlb_flush_all_threshold) {
+		local_flush_tlb_all_asid(asid);
+		return;
+	}
+
+	for (i = 0; i < nr_ptes_in_range; ++i) {
+		local_flush_tlb_page_asid(start, asid);
+		start += stride;
+	}
 }
 
 static inline void local_flush_tlb_range_asid(unsigned long start,
@@ -38,8 +60,10 @@ static inline void local_flush_tlb_range_asid(unsigned long start,
 {
 	if (size <= stride)
 		local_flush_tlb_page_asid(start, asid);
-	else
+	else if (size == FLUSH_TLB_MAX_SIZE)
 		local_flush_tlb_all_asid(asid);
+	else
+		local_flush_tlb_range_threshold_asid(start, size, stride, asid);
 }
 
 static void __ipi_flush_tlb_all(void *info)
@@ -52,7 +76,7 @@ void flush_tlb_all(void)
 	if (riscv_use_ipi_for_rfence())
 		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
 	else
-		sbi_remote_sfence_vma(NULL, 0, -1);
+		sbi_remote_sfence_vma_asid(NULL, 0, FLUSH_TLB_MAX_SIZE, FLUSH_TLB_NO_ASID);
 }
 
 struct flush_tlb_range_data {
@@ -69,18 +93,12 @@ static void __ipi_flush_tlb_range_asid(void *info)
 	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
 }
 
-static void __ipi_flush_tlb_range(void *info)
-{
-	struct flush_tlb_range_data *d = info;
-
-	local_flush_tlb_range(d->start, d->size, d->stride);
-}
-
 static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 			      unsigned long size, unsigned long stride)
 {
 	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
+	unsigned long asid = FLUSH_TLB_NO_ASID;
 	unsigned int cpuid;
 	bool broadcast;
 
@@ -90,39 +108,24 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 	cpuid = get_cpu();
 	/* check if the tlbflush needs to be sent to other CPUs */
 	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
-	if (static_branch_unlikely(&use_asid_allocator)) {
-		unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;
-
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = asid;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range_asid,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma_asid(cmask,
-							   start, size, asid);
-		} else {
-			local_flush_tlb_range_asid(start, size, stride, asid);
-		}
+
+	if (static_branch_unlikely(&use_asid_allocator))
+		asid = atomic_long_read(&mm->context.id) & asid_mask;
+
+	if (broadcast) {
+		if (riscv_use_ipi_for_rfence()) {
+			ftd.asid = asid;
+			ftd.start = start;
+			ftd.size = size;
+			ftd.stride = stride;
+			on_each_cpu_mask(cmask,
+					 __ipi_flush_tlb_range_asid,
+					 &ftd, 1);
+		} else
+			sbi_remote_sfence_vma_asid(cmask,
+						   start, size, asid);
 	} else {
-		if (broadcast) {
-			if (riscv_use_ipi_for_rfence()) {
-				ftd.asid = 0;
-				ftd.start = start;
-				ftd.size = size;
-				ftd.stride = stride;
-				on_each_cpu_mask(cmask,
-						 __ipi_flush_tlb_range,
-						 &ftd, 1);
-			} else
-				sbi_remote_sfence_vma(cmask, start, size);
-		} else {
-			local_flush_tlb_range(start, size, stride);
-		}
+		local_flush_tlb_range_asid(start, size, stride, asid);
 	}
 
 	put_cpu();
@@ -130,7 +133,7 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_mm_range(struct mm_struct *mm,

From patchwork Thu Oct 19 14:01:51 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13429280
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Will Deacon, "Aneesh Kumar K. V", Andrew Morton, Nick Piggin,
    Peter Zijlstra, Mayuresh Chitale, Vincent Chen, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Samuel Holland, Lad Prabhakar
Cc: Alexandre Ghiti, Andrew Jones, Lad Prabhakar
Subject: [PATCH v5 4/4] riscv: Improve flush_tlb_kernel_range()
Date: Thu, 19 Oct 2023 16:01:51 +0200
Message-Id: <20231019140151.21629-5-alexghiti@rivosinc.com>
In-Reply-To: <20231019140151.21629-1-alexghiti@rivosinc.com>

flush_tlb_kernel_range() used to simply flush the whole TLB of all
harts; be more subtle and try to flush only the requested range. The
problem is that we can only use PAGE_SIZE as the stride, since we don't
know the size of the underlying mapping, so this function only brings
an improvement when the size of the region to flush is
< threshold * PAGE_SIZE.
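For illustration (not part of the patch), a simplified model of the
mm == NULL special case the diff below adds to __flush_tlb_range();
struct mm_model is a hypothetical stand-in for mm_struct, and the
always-broadcast behaviour shown here is an assumption that only holds
for kernel ranges:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	#define FLUSH_TLB_NO_ASID ((unsigned long)-1)

	struct mm_model { unsigned long asid; }; /* hypothetical mm_struct stand-in */

	static void model_flush_range(const struct mm_model *mm,
				      unsigned long start, unsigned long size)
	{
		/* Kernel ranges have no mm: broadcast to all CPUs, no ASID */
		bool broadcast = (mm == NULL);
		unsigned long asid = mm ? mm->asid : FLUSH_TLB_NO_ASID;

		printf("flush [0x%lx-0x%lx) asid=%#lx broadcast=%d\n",
		       start, start + size, asid, broadcast);
	}

	int main(void)
	{
		/* flush_tlb_kernel_range(start, end) passes mm == NULL */
		model_flush_range(NULL, 0xffffffc800000000UL, 3 * 4096);
		return 0;
	}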
Signed-off-by: Alexandre Ghiti
Reviewed-by: Andrew Jones
Tested-by: Lad Prabhakar # On RZ/Five SMARC
Reviewed-by: Samuel Holland
Tested-by: Samuel Holland
---
 arch/riscv/include/asm/tlbflush.h | 11 ++++++-----
 arch/riscv/mm/tlbflush.c          | 33 ++++++++++++++++++++++---------
 2 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 170a49c531c6..8f3418c5f172 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -40,6 +40,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end);
+void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
@@ -56,15 +57,15 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	local_flush_tlb_all();
 }
 
-#define flush_tlb_mm(mm) flush_tlb_all()
-#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
-#endif /* !CONFIG_SMP || !CONFIG_MMU */
-
 /* Flush a range of kernel pages */
 static inline void flush_tlb_kernel_range(unsigned long start,
 	unsigned long end)
 {
-	flush_tlb_all();
+	local_flush_tlb_all();
 }
 
+#define flush_tlb_mm(mm) flush_tlb_all()
+#define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
+#endif /* !CONFIG_SMP || !CONFIG_MMU */
+
 #endif /* _ASM_RISCV_TLBFLUSH_H */
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index c27ba720e35f..7e182f2bc0ab 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -97,19 +97,27 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 			      unsigned long size, unsigned long stride)
 {
 	struct flush_tlb_range_data ftd;
-	struct cpumask *cmask = mm_cpumask(mm);
+	struct cpumask *cmask, full_cmask;
 	unsigned long asid = FLUSH_TLB_NO_ASID;
-	unsigned int cpuid;
 	bool broadcast;
 
-	if (cpumask_empty(cmask))
-		return;
+	if (mm) {
+		unsigned int cpuid;
+
+		cmask = mm_cpumask(mm);
+		if (cpumask_empty(cmask))
+			return;
 
-	cpuid = get_cpu();
-	/* check if the tlbflush needs to be sent to other CPUs */
-	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
+		cpuid = get_cpu();
+		/* check if the tlbflush needs to be sent to other CPUs */
+		broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
+	} else {
+		cpumask_setall(&full_cmask);
+		cmask = &full_cmask;
+		broadcast = true;
+	}
 
-	if (static_branch_unlikely(&use_asid_allocator))
+	if (static_branch_unlikely(&use_asid_allocator) && mm)
 		asid = atomic_long_read(&mm->context.id) & asid_mask;
 
 	if (broadcast) {
@@ -128,7 +136,8 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
 		local_flush_tlb_range_asid(start, size, stride, asid);
 	}
 
-	put_cpu();
+	if (mm)
+		put_cpu();
 }
 
 void flush_tlb_mm(struct mm_struct *mm)
@@ -181,6 +190,12 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 	__flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
 }
+
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_range(NULL, start, end - start, PAGE_SIZE);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)