From patchwork Wed Jul 20 15:23:47 2022
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 12924133
From: Anup Patel <apatel@ventanamicro.com>
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 6/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Wed, 20 Jul 2022 20:53:47 +0530
Message-Id: <20220720152348.2889109-7-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220720152348.2889109-1-apatel@ventanamicro.com>
References: <20220720152348.2889109-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the AIA IMSIC) which
allows supervisor mode to directly inject IPIs without any assistance from
M-mode or HS-mode, then we can do remote TLB flushes directly from
supervisor mode instead of using SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor mode
IPIs whenever direct supervisor mode IPIs are supported by the interrupt
controller.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)
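A note before the diff: the change amounts to picking one of three flush
mechanisms per call. The stand-alone C sketch below models only that
selection policy; it is not kernel code, and have_direct_smode_ipis is a
hypothetical stand-in for the kernel's riscv_use_ipi_for_rfence() query.

/*
 * Illustration only: a simplified model of the flush-method selection
 * performed by the patched __flush_tlb_range(), not the kernel code.
 *   1. mm live on one hart only      -> local sfence.vma
 *   2. broadcast + direct S-mode IPI -> IPI each hart in the mm's cpumask
 *   3. broadcast otherwise           -> fall back to the SBI RFENCE call
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for riscv_use_ipi_for_rfence(). */
static bool have_direct_smode_ipis = true;

enum flush_method { LOCAL_SFENCE, IPI_BROADCAST, SBI_RFENCE };

static enum flush_method pick_flush_method(bool broadcast)
{
	if (!broadcast)
		return LOCAL_SFENCE;	/* only the current hart is affected */
	if (have_direct_smode_ipis)
		return IPI_BROADCAST;	/* no M-mode/HS-mode round trip */
	return SBI_RFENCE;		/* legacy path through the SBI */
}

int main(void)
{
	static const char * const name[] = {
		"local sfence.vma", "supervisor-mode IPIs", "SBI RFENCE",
	};

	printf("mm on one hart  : %s\n", name[pick_flush_method(false)]);
	printf("mm on many harts: %s\n", name[pick_flush_method(true)]);
	return 0;
}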
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+							   start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif
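A closing usage note: the new local_flush_tlb_range() and
local_flush_tlb_range_asid() helpers share a simple heuristic. A range no
larger than one stride is flushed as a single TLB entry; anything larger
flushes the entire local TLB, on the theory that a long run of per-page
sfence.vma instructions costs more than one full flush. The stand-alone
sketch below models just that check; tlb_flush_kind() is a hypothetical
name, and the 2 MiB PMD_SIZE is an assumption for illustration.

/*
 * Illustration only: the size-vs-stride check used by the patch's new
 * local_flush_tlb_range() and local_flush_tlb_range_asid() helpers.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PMD_SIZE  (2UL * 1024 * 1024)	/* assumes 2 MiB huge pages */

/* Hypothetical helper mirroring the check in the patched kernel code. */
static const char *tlb_flush_kind(unsigned long size, unsigned long stride)
{
	/* One stride's worth (or less) of address space: flush one entry. */
	if (size <= stride)
		return "single-entry flush";
	/* Larger ranges: cheaper to flush the entire local TLB. */
	return "full local TLB flush";
}

int main(void)
{
	/* flush_tlb_page(): size == stride == PAGE_SIZE */
	printf("one page : %s\n", tlb_flush_kind(PAGE_SIZE, PAGE_SIZE));

	/* flush_tlb_range() over 64 KiB with a PAGE_SIZE stride */
	printf("64 KiB   : %s\n", tlb_flush_kind(16 * PAGE_SIZE, PAGE_SIZE));

	/* flush_pmd_tlb_range(): one huge page, PMD_SIZE stride */
	printf("one PMD  : %s\n", tlb_flush_kind(PMD_SIZE, PMD_SIZE));
	return 0;
}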