From patchwork Mon Dec  4 10:54:25 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13478184
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton,
    James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin,
    Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
    David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
    Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 00/15] Transparent Contiguous PTEs for User Mappings
Date: Mon, 4 Dec 2023 10:54:25 +0000
Message-Id: <20231204105440.61448-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Hi All,

This is v3 of a series to opportunistically and transparently use contpte
mappings (set the contiguous bit in ptes) for user memory when those mappings
meet the requirements. It is part of a wider effort to improve performance by
allocating and mapping variable-sized blocks of memory (folios).
One aim is for the 4K kernel to approach the performance of the 16K kernel,
but without breaking compatibility and without the associated increase in
memory. Another aim is to benefit the 16K and 64K kernels by enabling 2M THP,
since this is the contpte size for those kernels. We have good performance
data demonstrating that both aims are being met (see below).

Of course this is only one half of the change. We require the mapped physical
memory to be the correct size and alignment for this to actually be useful
(i.e. 64K for 4K pages, or 2M for 16K/64K pages). Fortunately folios are
solving this problem for us. Filesystems that support it (XFS, AFS, EROFS,
tmpfs, ...) will allocate large folios up to the PMD size today, and more
filesystems are coming. And the other half of my work, enabling "multi-size
THP" (large folios) for anonymous memory, makes contpte-sized folios prevalent
for anonymous memory too [3].

Optimistically, I would really like to get this series merged for v6.8; there
is a chance that the multi-size THP series will also get merged for that
version (although at this point the chance is pretty small). But even if it
doesn't, this series still benefits file-backed memory from the filesystems
that support large folios, so it shouldn't be held up. Additionally, I have
data showing that this series adds no regression when the system has no
appropriate large folios.

All dependencies listed against v1 are now resolved; this series applies
cleanly against v6.7-rc1.

Note that the first two patches are for core-mm and provide the refactoring
that makes some crucial optimizations possible - these are then implemented in
patches 14 and 15. The remaining patches are arm64-specific.
Testing
=======

I've tested this series together with multi-size THP [3] on both Ampere Altra
(bare metal) and Apple M2 (VM):
  - mm selftests (inc. new tests written for multi-size THP); no regressions
  - Speedometer JavaScript benchmark in Chromium web browser; no issues
  - Kernel compilation; no issues
  - Various tests under high memory pressure with swap enabled; no issues

Performance
===========

John Hubbard at Nvidia has indicated dramatic 10x performance improvements
for some workloads at [4], when using a 64K base page kernel. You can also
see the original performance results I posted against v1 [1], which are still
valid.

I've additionally run the kernel compilation and speedometer benchmarks on a
system with multi-size THP disabled and large folio support for file-backed
memory intentionally disabled; I see no change in performance in this case
(i.e. no regression when this change is "present but not useful").

Changes since v2 [2]
====================

  - Removed contpte_ptep_get_and_clear_full() optimisation for exit()
    (v2#14), and replaced with a batch-clearing approach using a new arch
    helper, clear_ptes() (v3#2 and v3#15) (Alistair and Barry)
  - (v2#1 / v3#1)
      - Fixed folio refcounting so that refcount >= mapcount always (DavidH)
      - Reworked batch demarcation to avoid pte_pgprot() (DavidH)
      - Reverted return semantic of copy_present_page() and instead fix it
        up in copy_present_ptes() (Alistair)
      - Removed page_cont_mapped_vaddr() and replaced with simpler logic
        (Alistair)
      - Made batch accounting clearer in copy_pte_range() (Alistair)
  - (v2#12 / v3#13)
      - Renamed contpte_fold() -> contpte_convert() and hoisted setting/
        clearing CONT_PTE bit to higher level (Alistair)

Changes since v1 [1]
====================

  - Export contpte_* symbols so that modules can continue to call inline
    functions (e.g. ptep_get) which may now call the contpte_* functions
    (thanks to JohnH)
  - Use pte_valid() instead of pte_present() where sensible (thanks to
    Catalin)
  - Factor out (pte_valid() && pte_cont()) into new pte_valid_cont() helper
    (thanks to Catalin)
  - Fixed bug in contpte_ptep_set_access_flags() where TLBIs were missed
    (thanks to Catalin)
  - Added ARM64_CONTPTE expert Kconfig (enabled by default) (thanks to
    Anshuman)
  - Simplified contpte_ptep_get_and_clear_full()
  - Improved various code comments

[1] https://lore.kernel.org/linux-arm-kernel/20230622144210.2623299-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-arm-kernel/20231115163018.1303287-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-arm-kernel/20231204102027.57185-1-ryan.roberts@arm.com/
[4] https://lore.kernel.org/linux-mm/c507308d-bdd4-5f9e-d4ff-e96e4520be85@nvidia.com/

Thanks,
Ryan

Ryan Roberts (15):
  mm: Batch-copy PTE ranges during fork()
  mm: Batch-clear PTE ranges during zap_pte_range()
  arm64/mm: set_pte(): New layer to manage contig bit
  arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
  arm64/mm: pte_clear(): New layer to manage contig bit
  arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
  arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
  arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
  arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
  arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
  arm64/mm: ptep_get(): New layer to manage contig bit
  arm64/mm: Split __flush_tlb_range() to elide trailing DSB
  arm64/mm: Wire up PTE_CONT for user mappings
  arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
  arm64/mm: Implement clear_ptes() to optimize exit()

 arch/arm64/Kconfig                |  10 +-
 arch/arm64/include/asm/pgtable.h  | 343 ++++++++++++++++++++---
 arch/arm64/include/asm/tlbflush.h |  13 +-
 arch/arm64/kernel/efi.c           |   4 +-
 arch/arm64/kernel/mte.c           |   2 +-
 arch/arm64/kvm/guest.c            |   2 +-
 arch/arm64/mm/Makefile            |   1 +
 arch/arm64/mm/contpte.c           | 436 ++++++++++++++++++++++++++++++
 arch/arm64/mm/fault.c             |  12 +-
 arch/arm64/mm/fixmap.c            |   4 +-
 arch/arm64/mm/hugetlbpage.c       |  40 +--
 arch/arm64/mm/kasan_init.c        |   6 +-
 arch/arm64/mm/mmu.c               |  16 +-
 arch/arm64/mm/pageattr.c          |   6 +-
 arch/arm64/mm/trans_pgd.c         |   6 +-
 include/asm-generic/tlb.h         |   9 +
 include/linux/pgtable.h           |  39 +++
 mm/memory.c                       | 258 +++++++++++++-----
 mm/mmu_gather.c                   |  14 +
 19 files changed, 1067 insertions(+), 154 deletions(-)
 create mode 100644 arch/arm64/mm/contpte.c

Tested-by: John Hubbard

-- 
2.25.1