From patchwork Wed Jan 29 22:41:49 2025
X-Patchwork-Submitter: Frank van der Linden
X-Patchwork-Id: 13954215
Date: Wed, 29 Jan 2025 22:41:49 +0000
In-Reply-To: <20250129224157.2046079-1-fvdl@google.com>
References: <20250129224157.2046079-1-fvdl@google.com>
Message-ID: <20250129224157.2046079-21-fvdl@google.com>
Subject: [PATCH v2 20/28] mm/hugetlb: do pre-HVO for bootmem allocated pages
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, Frank van der Linden
For large systems, the overhead of vmemmap pages for hugetlb is substantial. It's about 1.5% of memory, which is about 45G for a 3T system. If you want to configure most of that system for hugetlb (e.g.
to use as backing memory for VMs), there is a chance of running out of memory on boot, even though you know that the 45G will become available later.

To avoid this scenario, and since it's a waste to first allocate and then free that 45G during boot, do pre-HVO for hugetlb bootmem allocated pages ('gigantic' pages).

pre-HVO is done by adding functions that are called from sparse_init_nid_early and sparse_init_nid_late. The first is called before memmap allocation, so it takes care of allocating memmap HVO-style. The second verifies that all bootmem pages look good; specifically, it checks that they do not intersect with multiple zones. This can only be done from the sparse_init_nid_late path, when zones have been initialized.

The hugetlb page size must be aligned to the section size, and also aligned to the size of memory described by the number of page structures contained in one PMD (since pre-HVO is not prepared to split PMDs). This should be true for most 'gigantic' pages; it is for 1G pages on x86, where both of these alignment requirements are 128M.

This will only have an effect if hugetlb_bootmem_alloc was called early in boot. If not, it won't do anything, and HVO for bootmem hugetlb pages works as before.
Signed-off-by: Frank van der Linden
---
 include/linux/hugetlb.h |   2 +
 mm/hugetlb.c            |   4 +-
 mm/hugetlb_vmemmap.c    | 143 ++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |   6 ++
 mm/sparse-vmemmap.c     |   4 ++
 5 files changed, 157 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 10a7ce2b95e1..2512463bca49 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -687,6 +687,8 @@ struct huge_bootmem_page {
 #define HUGE_BOOTMEM_HVO		0x0001
 #define HUGE_BOOTMEM_ZONES_VALID	0x0002
 
+bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
+
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b48f8638c9af..5af544960052 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3311,8 +3311,8 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	}
 }
 
-static bool __init hugetlb_bootmem_page_zones_valid(int nid,
-					struct huge_bootmem_page *m)
+bool __init hugetlb_bootmem_page_zones_valid(int nid,
+					     struct huge_bootmem_page *m)
 {
 	unsigned long start_pfn;
 	bool valid;
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index be6b33ecbc8e..9a99dfa3c495 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -743,6 +743,149 @@ void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head
 	__hugetlb_vmemmap_optimize_folios(h, folio_list, true);
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+
+/* Return true if a bootmem allocated HugeTLB page should be pre-HVO-ed */
+static bool vmemmap_should_optimize_bootmem_page(struct huge_bootmem_page *m)
+{
+	unsigned long section_size, psize, pmd_vmemmap_size;
+	phys_addr_t paddr;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return false;
+
+	if (!hugetlb_vmemmap_optimizable(m->hstate))
+		return false;
+
+	psize = huge_page_size(m->hstate);
+	paddr = virt_to_phys(m);
+
+	/*
+	 * Pre-HVO only works if the bootmem huge page
+	 * is aligned to the section size.
+	 */
+	section_size = (1UL << PA_SECTION_SHIFT);
+	if (!IS_ALIGNED(paddr, section_size) ||
+	    !IS_ALIGNED(psize, section_size))
+		return false;
+
+	/*
+	 * The pre-HVO code does not deal with splitting PMDs,
+	 * so the bootmem page must be aligned to the number
+	 * of base pages that can be mapped with one vmemmap PMD.
+	 */
+	pmd_vmemmap_size = (PMD_SIZE / (sizeof(struct page))) << PAGE_SHIFT;
+	if (!IS_ALIGNED(paddr, pmd_vmemmap_size) ||
+	    !IS_ALIGNED(psize, pmd_vmemmap_size))
+		return false;
+
+	return true;
+}
+
+/*
+ * Initialize memmap section for a gigantic page, HVO-style.
+ */
+void __init hugetlb_vmemmap_init_early(int nid)
+{
+	unsigned long psize, paddr, section_size;
+	unsigned long ns, i, pnum, pfn, nr_pages;
+	unsigned long start, end;
+	struct huge_bootmem_page *m = NULL;
+	void *map;
+
+	/*
+	 * Nothing to do if bootmem pages were not allocated
+	 * early in boot, or if HVO wasn't enabled in the
+	 * first place.
+	 */
+	if (!hugetlb_bootmem_allocated())
+		return;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return;
+
+	section_size = (1UL << PA_SECTION_SHIFT);
+
+	list_for_each_entry(m, &huge_boot_pages[nid], list) {
+		if (!vmemmap_should_optimize_bootmem_page(m))
+			continue;
+
+		nr_pages = pages_per_huge_page(m->hstate);
+		psize = nr_pages << PAGE_SHIFT;
+		paddr = virt_to_phys(m);
+		pfn = PHYS_PFN(paddr);
+		map = pfn_to_page(pfn);
+		start = (unsigned long)map;
+		end = start + nr_pages * sizeof(struct page);
+
+		if (vmemmap_populate_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE) < 0)
+			continue;
+
+		memmap_boot_pages_add(HUGETLB_VMEMMAP_RESERVE_SIZE / PAGE_SIZE);
+
+		pnum = pfn_to_section_nr(pfn);
+		ns = psize / section_size;
+
+		for (i = 0; i < ns; i++) {
+			sparse_init_early_section(nid, map, pnum,
+						  SECTION_IS_VMEMMAP_PREINIT);
+			map += section_map_size();
+			pnum++;
+		}
+
+		m->flags |= HUGE_BOOTMEM_HVO;
+	}
+}
+
+void __init hugetlb_vmemmap_init_late(int nid)
+{
+	struct huge_bootmem_page *m, *tm;
+	unsigned long phys, nr_pages, start, end;
+	unsigned long pfn, nr_mmap;
+	struct hstate *h;
+	void *map;
+
+	if (!hugetlb_bootmem_allocated())
+		return;
+
+	if (!READ_ONCE(vmemmap_optimize_enabled))
+		return;
+
+	list_for_each_entry_safe(m, tm, &huge_boot_pages[nid], list) {
+		if (!(m->flags & HUGE_BOOTMEM_HVO))
+			continue;
+
+		phys = virt_to_phys(m);
+		h = m->hstate;
+		pfn = PHYS_PFN(phys);
+		nr_pages = pages_per_huge_page(h);
+
+		if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
+			/*
+			 * Oops, the hugetlb page spans multiple zones.
+			 * Remove it from the list, and undo HVO.
+			 */
+			list_del(&m->list);
+
+			map = pfn_to_page(pfn);
+
+			start = (unsigned long)map;
+			end = start + nr_pages * sizeof(struct page);
+
+			vmemmap_undo_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE);
+			nr_mmap = end - start - HUGETLB_VMEMMAP_RESERVE_SIZE;
+			memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
+
+			memblock_phys_free(phys, huge_page_size(h));
+			continue;
+		} else
+			m->flags |= HUGE_BOOTMEM_ZONES_VALID;
+	}
+}
+#endif
+
 static const struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname	= "hugetlb_optimize_vmemmap",
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 926b8b27b5cb..0031e49b12f7 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -9,6 +9,8 @@
 #ifndef _LINUX_HUGETLB_VMEMMAP_H
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include
+#include
+#include
 
 /*
  * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
@@ -25,6 +27,10 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
 void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
 void hugetlb_vmemmap_optimize_bootmem_folios(struct hstate *h, struct list_head *folio_list);
+#ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
+void hugetlb_vmemmap_init_early(int nid);
+void hugetlb_vmemmap_init_late(int nid);
+#endif
 
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index bee22ca93654..29647fd3d606 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -32,6 +32,8 @@
 #include
 #include
 
+#include "hugetlb_vmemmap.h"
+
 /*
  * Flags for vmemmap_populate_range and friends.
 */
@@ -594,6 +596,7 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
  */
 void __init sparse_vmemmap_init_nid_early(int nid)
 {
+	hugetlb_vmemmap_init_early(nid);
 }
 
 /*
@@ -604,5 +607,6 @@ void __init sparse_vmemmap_init_nid_early(int nid)
  */
 void __init sparse_vmemmap_init_nid_late(int nid)
 {
+	hugetlb_vmemmap_init_late(nid);
 }
 #endif