From patchwork Sun May 30 16:49:26 2021
From: guoren@kernel.org
To: guoren@kernel.org, anup.patel@wdc.com, palmerdabbelt@google.com, arnd@arndb.de, hch@lst.de
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, Guo Ren, Atish Patra
Subject: [PATCH V5 3/3] riscv: tlbflush: Optimize coding convention
Date: Sun, 30 May 2021 16:49:26 +0000
Message-Id: <1622393366-46079-4-git-send-email-guoren@kernel.org>
In-Reply-To: <1622393366-46079-1-git-send-email-guoren@kernel.org>
References: <1622393366-46079-1-git-send-email-guoren@kernel.org>

From: Guo Ren

Pass the mm_struct as the first argument, as we can derive both the
cpumask and the asid from it instead of computing them in the callers.
More importantly, the static branch check can be moved deeper into the
code to avoid a lot of duplication.

Also add a FIXME comment noting that the non-ASID code switches to a
global flush once flushing more than a single page.

Link: https://lore.kernel.org/linux-riscv/CAJF2gTQpDYtEdw6ZrTVZUYqxGdhLPs25RjuUiQtz=xN2oKs2fw@mail.gmail.com/T/#m30f7e8d02361f21f709bc3357b9f6ead1d47ed43
Signed-off-by: Guo Ren
Co-developed-by: Christoph Hellwig
Cc: Christoph Hellwig
Cc: Palmer Dabbelt
Cc: Anup Patel
Cc: Atish Patra
---
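A minimal userspace sketch of the calling-convention change, compilable on
its own (not part of the patch: struct mm, its fields, flush_range() and
use_asid_allocator below are stand-ins for the kernel's mm_struct,
mm_cpumask()/mm->context.id, __sbi_tlb_flush_range() and the static
branch):

	/*
	 * Stand-in sketch only -- illustrates the shape of the refactor,
	 * not the kernel implementation.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct mm {
		unsigned long cpumask_bits;	/* stand-in for mm_cpumask(mm) */
		unsigned long asid;		/* stand-in for mm->context.id */
	};

	static bool use_asid_allocator;		/* stand-in for the static branch */

	/* One helper: callers pass the mm and a range, nothing else. */
	static void flush_range(struct mm *mm, unsigned long start,
				unsigned long size)
	{
		/* cpumask and asid are derived here, not by every caller */
		unsigned long cmask = mm->cpumask_bits;

		if (!cmask)
			return;

		/* the ASID/non-ASID branch now lives in exactly one place */
		if (use_asid_allocator)
			printf("asid flush: asid=%lu start=%#lx size=%#lx\n",
			       mm->asid, start, size);
		else
			printf("plain flush: start=%#lx size=%#lx\n",
			       start, size);
	}

	int main(void)
	{
		struct mm mm = { .cpumask_bits = 0x3, .asid = 42 };

		/* callers shrink to one-liners, as flush_tlb_mm() etc. do below */
		flush_range(&mm, 0, -1UL);
		use_asid_allocator = true;
		flush_range(&mm, 0x1000, 4096);
		return 0;
	}

With this shape, each flush_tlb_*() entry point reduces to a single call,
as the diff below shows.
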
 arch/riscv/mm/tlbflush.c | 91 ++++++++++++++++++++++--------------------------
 1 file changed, 41 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 87b4e52..facca6e 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -12,56 +12,59 @@ void flush_tlb_all(void)
 }
 
 /*
- * This function must not be called with cmask being null.
+ * This function must not be called with mm_cpumask(mm) being null.
  * Kernel may panic if cmask is NULL.
  */
-static void __sbi_tlb_flush_range(struct cpumask *cmask, unsigned long start,
+static void __sbi_tlb_flush_range(struct mm_struct *mm,
+				  unsigned long start,
 				  unsigned long size)
 {
+	struct cpumask *cmask = mm_cpumask(mm);
 	struct cpumask hmask;
 	unsigned int cpuid;
+	bool local;
 
 	if (cpumask_empty(cmask))
 		return;
 
 	cpuid = get_cpu();
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		/* local cpu is the only cpu present in cpumask */
-		if (size <= PAGE_SIZE)
-			local_flush_tlb_page(start);
-		else
-			local_flush_tlb_all();
-	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma(cpumask_bits(&hmask), start, size);
-	}
+	/*
+	 * check if the tlbflush needs to be sent to other CPUs, local
+	 * cpu is the only cpu present in cpumask.
+	 */
+	local = !(cpumask_any_but(cmask, cpuid) < nr_cpu_ids);
 
-	put_cpu();
-}
-
-static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
-				       unsigned long start,
-				       unsigned long size,
-				       unsigned long asid)
-{
-	struct cpumask hmask;
-	unsigned int cpuid;
-
-	if (cpumask_empty(cmask))
-		return;
-
-	cpuid = get_cpu();
+	if (static_branch_likely(&use_asid_allocator)) {
+		unsigned long asid = atomic_long_read(&mm->context.id);
 
-	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids) {
-		if (size == -1)
-			local_flush_tlb_all_asid(asid);
-		else
-			local_flush_tlb_range_asid(start, size, asid);
+		if (likely(local)) {
+			if (size == -1)
+				local_flush_tlb_all_asid(asid);
+			else
+				local_flush_tlb_range_asid(start, size, asid);
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
+						   start, size, asid);
+		}
 	} else {
-		riscv_cpuid_to_hartid_mask(cmask, &hmask);
-		sbi_remote_sfence_vma_asid(cpumask_bits(&hmask),
-					   start, size, asid);
+		if (likely(local)) {
+			/*
+			 * FIXME: The non-ASID code switches to a global flush
+			 * once flushing more than a single page. It's made by
+			 * commit 6efb16b1d551 (RISC-V: Issue a tlb page flush
+			 * if possible).
+			 */
+			if (size <= PAGE_SIZE)
+				local_flush_tlb_page(start);
+			else
+				local_flush_tlb_all();
+		} else {
+			riscv_cpuid_to_hartid_mask(cmask, &hmask);
+			sbi_remote_sfence_vma(cpumask_bits(&hmask),
+					      start, size);
+		}
 	}
 
 	put_cpu();
@@ -69,28 +72,16 @@ static void __sbi_tlb_flush_range_asid(struct cpumask *cmask,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(mm), 0, -1,
-					   atomic_long_read(&mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(mm), 0, -1);
+	__sbi_tlb_flush_range(mm, 0, -1);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PAGE_SIZE);
+	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		__sbi_tlb_flush_range_asid(mm_cpumask(vma->vm_mm), start, end - start,
-					   atomic_long_read(&vma->vm_mm->context.id));
-	else
-		__sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start);
+	__sbi_tlb_flush_range(vma->vm_mm, start, end - start);
 }
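
For reference, a standalone sketch of the local-flush policy that the
FIXME comment in the hunk above refers to (not part of the patch:
flush_one_page() and flush_everything() are hypothetical stand-ins for
local_flush_tlb_page() and local_flush_tlb_all()):

	/*
	 * Stand-in sketch only -- shows why any range larger than one
	 * page currently degrades to a full local TLB flush.
	 */
	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	static void flush_one_page(unsigned long addr)
	{
		printf("sfence.vma on page %#lx\n", addr);
	}

	static void flush_everything(void)
	{
		printf("sfence.vma on the whole local TLB\n");
	}

	/* Current policy: more than a single page means a global flush. */
	static void local_flush(unsigned long start, unsigned long size)
	{
		if (size <= PAGE_SIZE)
			flush_one_page(start);
		else
			flush_everything();
	}

	int main(void)
	{
		local_flush(0x1000, PAGE_SIZE);		/* one page: targeted */
		local_flush(0x1000, 2 * PAGE_SIZE);	/* two pages: global */
		return 0;
	}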