From patchwork Tue Jan 3 14:12:18 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 13087745
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Hector Martin, Sven Peter, Alyssa Rosenzweig, Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, asahi@lists.linux.dev
Subject: [PATCH v16 6/9] RISC-V: Use IPIs for remote TLB flush when possible
Date: Tue, 3 Jan 2023 19:42:18 +0530
Message-Id: <20230103141221.772261-7-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230103141221.772261-1-apatel@ventanamicro.com>
References: <20230103141221.772261-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC) that allows supervisor mode to inject IPIs directly, without any assistance from M-mode or HS-mode, then we can do remote TLB flushes from supervisor mode itself instead of going through the SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor-mode IPIs whenever direct supervisor-mode IPIs are supported by the interrupt controller.
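For anyone unfamiliar with the cross-call machinery, the dispatch pattern this patch applies is roughly the sketch below. This is illustrative only, not part of the patch: flush_args, flush_handler, and remote_flush are made-up names, while riscv_use_ipi_for_rfence() (introduced earlier in this series), on_each_cpu_mask() from <linux/smp.h>, and sbi_remote_sfence_vma() are the real interfaces used in the diff.

/* Illustrative sketch only -- see the actual patch below. */
#include <linux/smp.h>
#include <asm/sbi.h>
#include <asm/tlbflush.h>

/* Payload handed to every target CPU (hypothetical name). */
struct flush_args {
	unsigned long start, size, stride;
};

/* Runs on each target CPU in IPI context (hypothetical name). */
static void flush_handler(void *info)
{
	struct flush_args *args = info;

	if (args->size <= args->stride)
		local_flush_tlb_page(args->start); /* one sfence.vma */
	else
		local_flush_tlb_all();             /* full local flush */
}

/* Hypothetical dispatcher showing the IPI-vs-SBI decision. */
static void remote_flush(const struct cpumask *cpus, struct flush_args *args)
{
	if (riscv_use_ipi_for_rfence())
		/*
		 * S-mode can send IPIs itself: cross-call every CPU in
		 * @cpus and wait for completion (last argument == 1),
		 * which is why @args may live on the caller's stack.
		 */
		on_each_cpu_mask(cpus, flush_handler, args, 1);
	else
		/* Otherwise trap to the SBI implementation, which
		 * issues the remote fences on our behalf. */
		sbi_remote_sfence_vma(cpus, args->start, args->size);
}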
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index ce7dfc81bb3f..91078377344d 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -7,14 +7,62 @@
 #include
 #include
 
+static inline void local_flush_tlb_range(unsigned long start,
+        unsigned long size, unsigned long stride)
+{
+        if (size <= stride)
+                local_flush_tlb_page(start);
+        else
+                local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+        unsigned long size, unsigned long stride, unsigned long asid)
+{
+        if (size <= stride)
+                local_flush_tlb_page_asid(start, asid);
+        else
+                local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+        local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-        sbi_remote_sfence_vma(NULL, 0, -1);
+        if (riscv_use_ipi_for_rfence())
+                on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+        else
+                sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+        unsigned long asid;
+        unsigned long start;
+        unsigned long size;
+        unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+        struct flush_tlb_range_data *d = info;
+
+        local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+        struct flush_tlb_range_data *d = info;
+
+        local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-                                  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+                              unsigned long size, unsigned long stride)
 {
+        struct flush_tlb_range_data ftd;
         struct cpumask *pmask = &mm->context.tlb_stale_mask;
         struct cpumask *cmask = mm_cpumask(mm);
         unsigned int cpuid;
@@ -39,19 +87,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
                 cpumask_andnot(pmask, pmask, cmask);
 
                 if (broadcast) {
-                        sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-                } else if (size <= stride) {
-                        local_flush_tlb_page_asid(start, asid);
+                        if (riscv_use_ipi_for_rfence()) {
+                                ftd.asid = asid;
+                                ftd.start = start;
+                                ftd.size = size;
+                                ftd.stride = stride;
+                                on_each_cpu_mask(cmask,
+                                                 __ipi_flush_tlb_range_asid,
+                                                 &ftd, 1);
+                        } else
+                                sbi_remote_sfence_vma_asid(cmask,
+                                                           start, size, asid);
                 } else {
-                        local_flush_tlb_all_asid(asid);
+                        local_flush_tlb_range_asid(start, size, stride, asid);
                 }
         } else {
                 if (broadcast) {
-                        sbi_remote_sfence_vma(cmask, start, size);
-                } else if (size <= stride) {
-                        local_flush_tlb_page(start);
+                        if (riscv_use_ipi_for_rfence()) {
+                                ftd.asid = 0;
+                                ftd.start = start;
+                                ftd.size = size;
+                                ftd.stride = stride;
+                                on_each_cpu_mask(cmask,
+                                                 __ipi_flush_tlb_range,
+                                                 &ftd, 1);
+                        } else
+                                sbi_remote_sfence_vma(cmask, start, size);
                 } else {
-                        local_flush_tlb_all();
+                        local_flush_tlb_range(start, size, stride);
                 }
         }
@@ -60,23 +123,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-        __sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+        __flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-        __sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+        __flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
                      unsigned long end)
 {
-        __sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+        __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
                          unsigned long end)
 {
-        __sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+        __flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif
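Two design notes on the IPI path above, in case they help review: struct flush_tlb_range_data ftd can be a stack variable in __flush_tlb_range() only because on_each_cpu_mask() is called with its wait argument set to 1, so every handler has finished with &ftd before the function returns. And in the non-ASID branch, ftd.asid = 0 is just a placeholder; __ipi_flush_tlb_range() never reads d->asid.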