From patchwork Wed Jan 29 22:41:44 2025
X-Patchwork-Submitter: Frank van der Linden <fvdl@google.com>
X-Patchwork-Id: 13954212
Date: Wed, 29 Jan 2025 22:41:44 +0000
In-Reply-To: <20250129224157.2046079-1-fvdl@google.com>
Mime-Version: 1.0
References: <20250129224157.2046079-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
Message-ID: <20250129224157.2046079-16-fvdl@google.com>
Subject: [PATCH v2 15/28] mm/sparse: add vmemmap_*_hvo functions
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, Frank van der Linden <fvdl@google.com>
Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early init,
from the sparse_init_nid_early() function. The wrprotect function is to
be used, potentially, later.

To implement these functions, mostly re-use the existing compound pages
vmemmap logic used by DAX. vmemmap_populate_address has its argument
changed a bit in this commit: the page structure passed in to be reused
in the mapping is replaced by a PFN and a flag. The flag indicates whether
an extra ref should be taken on the vmemmap page containing the head page
structure. Taking the ref is appropriate for DAX / ZONE_DEVICE, but not
for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages read-only.
The vmemmap_wrprotect_hvo function that does this is implemented
separately, because it cannot be guaranteed that reserved page structures
will not be write accessed during memory initialization. Even with
CONFIG_DEFERRED_STRUCT_PAGE_INIT, they might still be written to (if they
are at the bottom of a zone). So, vmemmap_populate_hvo leaves the tail
page structure pages RW initially, and then later during initialization,
after memmap init is fully done, vmemmap_wrprotect_hvo must be called to
finish the job.

Subsequent commits will use these functions for early HugeTLB HVO.
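As an illustration of the intended ordering, a hugetlb-side caller would do
roughly the following (sketch only; the err/vmemmap_start/vmemmap_end/nid/
headsize variables and the exact call sites are placeholders, not part of
this patch):

	/* early boot, from sparse_init_nid_early(): map the vmemmap HVO-style */
	err = vmemmap_populate_hvo(vmemmap_start, vmemmap_end, nid, headsize);

	/* if the memblock-allocated hugetlb page turns out to span zones */
	err = vmemmap_undo_hvo(vmemmap_start, vmemmap_end, nid, headsize);

	/* otherwise, once memmap initialization is fully done */
	vmemmap_wrprotect_hvo(vmemmap_start, vmemmap_end, nid, headsize);
	flush_tlb_kernel_range(vmemmap_start, vmemmap_end); /* flushing is up to the caller */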
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..bee22ca93654 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 #include
 #include
+#include
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF 0x0001
 
 #include "internal.h"
 
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (!ptpfn) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						  struct vmem_altmap *altmap,
-						  struct page *reuse)
+						  unsigned long ptpfn,
+						  unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, 0, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ *    1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *       allocated through memblock, and mapped.
+ *
+ *    2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, 0, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, 0, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, 0, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)));
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}
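For scale, an illustrative calculation (assumed x86-64 numbers, not taken
from this patch: 4K base pages, 64-byte struct page, headsize of one page):

	struct pages per 1G hugetlb page:  1G / 4K            = 262144
	vmemmap for those struct pages:    262144 * 64 bytes  = 16 MB (4096 vmemmap pages)
	after vmemmap_populate_hvo:        1 vmemmap page is actually allocated;
	                                   the other 4095 PTEs alias that page and
	                                   are made read-only later by
	                                   vmemmap_wrprotect_hvo

so under these assumptions the per-1G-page vmemmap footprint drops from
16 MB to a single page.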