From patchwork Thu Mar 13 10:41:12 2025
From: Mikołaj Lenczewski
To: ryan.roberts@arm.com, suzuki.poulose@arm.com, yang@os.amperecomputing.com, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, jean-philippe@linaro.org, robin.murphy@arm.com, joro@8bytes.org, akpm@linux-foundation.org, mark.rutland@arm.com, joey.gouly@arm.com, maz@kernel.org, james.morse@arm.com, broonie@kernel.org, anshuman.khandual@arm.com, oliver.upton@linux.dev, ioworker0@gmail.com, baohua@kernel.org, david@redhat.com, jgg@ziepe.ca, shameerali.kolothum.thodi@huawei.com, nicolinc@nvidia.com, mshavit@google.com, jsnitsel@redhat.com, smostafa@google.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev
Cc: Mikołaj Lenczewski
Subject: [PATCH v3 3/3] arm64/mm: Elide tlbi in contpte_convert() under BBML2
Date: Thu, 13 Mar 2025 10:41:12 +0000
Message-ID: <20250313104111.24196-5-miko.lenczewski@arm.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250313104111.24196-2-miko.lenczewski@arm.com>
References: <20250313104111.24196-2-miko.lenczewski@arm.com>

When converting a region via contpte_convert() to use mTHP, we have two
different goals.
We have to mark each entry as contiguous, and we would like to smear the
dirty and young (access) bits across all entries in the contiguous
block. Currently, we do this by first accumulating the dirty and young
bits in the block, using an atomic __ptep_get_and_clear() and the
relevant pte_{dirty,young}() calls, performing a tlbi, and finally
smearing the correct bits across the block using __set_ptes().

This approach works fine for BBM level 0, but with support for BBM
level 2 we are allowed to reorder the tlbi to after setting the
pagetable entries. This reordering reduces the likelihood of a
concurrent page walk finding an invalid (not present) PTE, which in
turn reduces the likelihood of a fault in other threads and marginally
improves performance (more so when there are more threads).

If we support BBML2 without conflict aborts, however, we can avoid the
final flush altogether and let the hardware manage the TLB entries for
us. Avoiding flushes is a win.

Signed-off-by: Mikołaj Lenczewski
Reviewed-by: Ryan Roberts
---
 arch/arm64/mm/contpte.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..77ed03b30b72 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,7 +68,8 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 		pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2_noabort())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
 }