From patchwork Mon Jan 27 23:21:55 2025
X-Patchwork-Submitter: Frank van der Linden
X-Patchwork-Id: 13951856
Date: Mon, 27 Jan 2025 23:21:55 +0000
In-Reply-To: <20250127232207.3888640-1-fvdl@google.com>
References: <20250127232207.3888640-1-fvdl@google.com>
Message-ID: <20250127232207.3888640-16-fvdl@google.com>
Subject: [PATCH 15/27] mm/sparse: add vmemmap_*_hvo functions
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usama.arif@bytedance.com, joao.m.martins@oracle.com,
	roman.gushchin@linux.dev, Frank van der Linden
Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early
init, from the sparse_init_nid_early() function.
The wrprotect function is to be used, potentially, later.

To implement these functions, mostly re-use the existing compound
pages vmemmap logic used by DAX. vmemmap_populate_address has its
argument changed a bit in this commit: the page structure passed
in to be reused in the mapping is replaced by a PFN and a flag.
The flag indicates whether an extra ref should be taken on the
vmemmap page containing the head page structure. Taking the ref
is appropriate for DAX / ZONE_DEVICE, but not for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages
read-only. The vmemmap_wrprotect_hvo function that does this is
implemented separately, because it cannot be guaranteed that
reserved page structures will not be write accessed during memory
initialization. Even with CONFIG_DEFERRED_STRUCT_PAGE_INIT, they
might still be written to (if they are at the bottom of a zone).
So, vmemmap_populate_hvo leaves the tail page structure pages RW
initially, and then later during initialization, after memmap init
is fully done, vmemmap_wrprotect_hvo must be called to finish the
job.

Subsequent commits will use these functions for early HugeTLB HVO.
Signed-off-by: Frank van der Linden
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..bee22ca93654 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF	0x0001
 
 #include "internal.h"
 
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (!ptpfn) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
 			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						  struct vmem_altmap *altmap,
-						  struct page *reuse)
+						  unsigned long ptpfn,
+						  unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, 0, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ *    1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *       allocated through memblock, and mapped.
+ *
+ *    2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, 0, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, 0, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, 0, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)),
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}